Featured

VMware VMworld 2020 Shirt Gift

This morning I received a shirt from VMware after participating in the VMworld 2020 live sessions and labs. In fact, this year VMware opened this big world event to everyone interested in data center virtualization and cloud technologies, from anywhere in the world, because of the pandemic 😦 🙂

Before that, I had received a PIN badge from VMworld to add to my LinkedIn profile and certifications.

Thank you, VMware; I hope to attend the next VMworld event on site 😉

VMware Horizon View VDI LAB (PART 2)

In Part 1 I introduced Horizon VDI (Virtual Desktop Infrastructure) and installed the different servers and roles; in this Part 2 I will continue the setup and configuration of the Horizon server, prepare the desktop pools, and personalize the master Windows 10 image.

Configuring Horizon 7

Below, in brief, are the steps needed to configure the Horizon server for the first time.

To use vCenter Server with Horizon 7, we must configure a user account with the appropriate vCenter Server privileges. We must also create a user account in AD that Horizon 7 can use to authenticate to the View Composer service on the standalone machine.

  • Log in with the service account or the admin account details given during installation.
  • Add the Horizon Admin group: this group should be created beforehand in AD.
  • Select the required permissions and click Next.
  • License Horizon: Settings – Product Licensing and Usage – Edit License, then paste the license key.
  • Add vCenter and the Connection Server and link them to the Horizon Composer, providing the AD account created earlier to authenticate with Horizon.

Select the standalone Composer option and provide the Composer server details.

Creating a Linked Clone Desktop Pool (Composer)

Note that in Horizon we can create two types of pools:

Linked clones, which use Composer to provision the VDIs and save storage.

Instant clones, which require a Horizon Enterprise license and use only the Connection Server; no Composer is required, and they also save storage.

Horizon Agent installation on Windows 10

Windows 10 machine preparation:

  • Create a Windows 10 master image.
  • Use DHCP, not a static IP.
  • Join the machine to the domain.
  • Install all required applications.
  • Install and configure the Horizon Agent.

Shut down the VM after the reboot and take a snapshot (this snapshot will be used later during the creation of the desktop pools).
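For reference, this shutdown-and-snapshot step could also be scripted with the open-source govc CLI instead of the vSphere Client; this is an assumption on my part, and the VM name, snapshot name, and vCenter connection details below are placeholders.

```shell
# Placeholder connection details; govc is the govmomi vSphere CLI.
export GOVC_URL='https://vcenter.lab.local'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='<password>'
export GOVC_INSECURE=1    # lab only: skip TLS certificate verification

govc vm.power -off Win10-Master                       # shut the master image down
govc snapshot.create -vm Win10-Master horizon-base    # snapshot selected later in the pool wizard
```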

  Linked Clone Desktop Pool Creation

With the master image ready from the previous setup, we will create the linked clone desktop pool and add users.

Provide the VDI naming convention that will be used for the VM names and the in-guest OS names.

Select the master image, snapshot, folder, resource pool, and datastore details. This should be the Windows 10 image and the snapshot created in the last section.

Select the domain and the OU in which the VDI machine accounts will be created.

Now, after creating the desktop pool, we should add the entitled users who will use it.

Next, I will describe connecting to the virtual desktop using the VMware Unified Access Gateway.

Datacenter cluster with 02 vSAN Nodes (ROBO) deployment (PART 2)

In Part 01 I introduced my lab environment and the prerequisites I prepared for my 02-node vSAN cluster; in this part I will explain the configuration of the lab in detail.

  1. Adding the Witness Host

As explained before, the witness host is a crucial component of any 02-node vSAN cluster, and its configuration is similar to the witness role in a stretched two-site cluster. The witness appliance can be deployed from the OVA provided by VMware, or as a nested ESXi host with the necessary configuration applied manually (NICs, VMkernel ports, vSwitch, local HDD + SSD disks).

In this lab I used the witness OVA, deployed on a separate ESXi host outside my vSAN vCenter.

Here are the steps to add the witness as a standalone host to the vSAN datacenter in vCenter:

Checking the witness host storage configuration

Checking the witness host network configuration

2. Configuring the VSAN Cluster

Navigate to the cluster in the vSphere Client. Click the Configure tab.

Under vSAN, click Services, then click Configure.

Select the vSAN Witness host to act as the vSAN Witness for the 2 Node vSAN Cluster.

When selecting the witness host, vSAN will ask you to set up the fault domain configuration and designate the Preferred and Secondary hosts.

Claim storage devices on the witness host and click NEXT.

Select one flash device for the cache tier, and one or more devices for the capacity tier

3. Configuring Disk Groups

As explained in Part 01, my Cisco UCS M5 servers have a mix of HDD and SSD disks, so I decided to reorganize them in order to have identical disk groups on both servers.

Disk Group 01 (hybrid): 1 x 894 GB SSD cache disk + 5 x 1.64 TB HDD capacity disks. Total capacity: 8.2 TB

Disk Group 02 (all-flash): 1 x 894 GB SSD cache disk + 4 x 894 GB SSD capacity disks. Total capacity: 3.4 TB

In summary, we get a total of 23.38 TB of raw storage capacity and an effective capacity of 11.7 TB.
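The totals above can be sanity-checked with a quick calculation: only the capacity-tier devices count (cache disks never add capacity), and with FTT=1 mirroring across the two nodes the usable capacity is roughly half the raw total. The result differs slightly from the quoted figures depending on how per-disk sizes are rounded.

```shell
#!/bin/sh
# Back-of-the-envelope vSAN capacity check for this 2-node design (per host):
#   Disk group 1 (hybrid):    5 x 1.64 TB HDD capacity tier
#   Disk group 2 (all-flash): 4 x 0.894 TB SSD capacity tier
awk 'BEGIN {
  per_host = 5 * 1.64 + 4 * 0.894        # capacity-tier TB on one node
  raw      = 2 * per_host                # both data nodes
  usable   = raw / 2                     # FTT=1 mirror: one full copy per node
  printf "raw=%.1f TB usable=%.1f TB\n", raw, usable
}'
```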

4. Network Post-Check

One of the most important checks to perform before putting the cluster into production is to ensure that the network connectivity between the 02 cluster nodes and the witness host passes without failures.
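A minimal way to run this check is with vmkping from an SSH session on each host, sourcing the vSAN VMkernel interface. The vmk numbers and IP addresses below are placeholders, not this lab's actual values.

```shell
# From each data node, ping the witness host's vSAN/witness VMkernel:
vmkping -I vmk1 192.168.100.10
# From the witness host, ping each data node's vSAN VMkernel:
vmkping -I vmk1 172.16.20.11
vmkping -I vmk1 172.16.20.12
```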

END.

Datacenter cluster with 02 vSAN Nodes (ROBO) deployment (PART 1)

Preamble

VMware's HCI solution, vSAN, was introduced with vSphere 6.0 and addressed one of the most important gaps in storage evolution of the last 05 years. One of the main conditions for deploying a vSAN cluster is to have at least 03 nodes with the same configuration, and the cluster can be scaled up to 64 nodes.

Later, with vSphere 6.2, VMware announced the 2-node vSAN cluster, which is a great choice for a remote/small office like mine 😊. Setting up the two nodes in a direct-connect configuration can be beneficial if the remote site has limited switch port availability or no 10Gb switching available.

Setting up my vSAN Lab

Recently we recovered 02 Cisco UCS servers from one of the major telecom clients here in Algeria; these servers have a number of HDD and SSD SAS disks, and I was thinking about how to exploit them.

Unfortunately, the hosts are two different models, C220 and C240, with different disk counts and models (HDD + flash), but they are from the same Cisco M5 generation. Hence I got the idea to rearrange the disks inside the hosts in order to meet one of the VMware vSAN conditions: each host must have the same disk configuration and disk count.

Server specifications and disk repartition

Cisco UCS C220:

Internal: 2 x 270 GB SSD disks, containing the ESXi installation and the local datastore (RAID 0)

Disk Group 1 (hybrid): 1 x 500 GB SSD (cache) + 5 x 1.6 TB HDD (capacity)

Disk Group 2 (all-flash): 5 x 500 GB SSD disks (capacity)

Cisco UCS C240 M5:

Internal: 2 x 270 GB SSD disks, containing the ESXi installation and the local datastore (RAID 0)

Disk Group 1 (hybrid): 1 x 500 GB SSD (cache) + 5 x 1.6 TB HDD (capacity)

Disk Group 2 (all-flash): 5 x 500 GB SSD disks (capacity)

Architecture of 2-Node vSAN cluster

As I mentioned above, a 02-node vSAN cluster avoids the need for an expensive 10 Gb/s network switch, which is essential in a traditional vSAN configuration; the two hosts can be connected with a crossover network cable to achieve the 10 Gb/s network bandwidth required for storage operations.

This diagram is the architecture of 2-Node vSAN cluster

In total, 08 IP addresses are required for this vSphere cluster:

  1. 2 x routable management IPs/VLAN, one per ESXi node (blue line)
  2. 2 x non-routable vSAN IPs/VLAN, one per ESXi node (purple line)
  3. 2 x non-routable vMotion IPs/VLAN, one per ESXi node
  4. 1 x routable management IP for the witness node (witness site VLAN)
  5. 1 x routable witness VMkernel IP for the witness node (green line)
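Applying such a plan on one of the data nodes could look like the following from the ESXi shell; all addresses, netmasks, and vmk numbers here are placeholders, not this lab's real values.

```shell
# Assign static IPv4 addresses to the node's VMkernel interfaces (placeholders):
esxcli network ip interface ipv4 set -i vmk0 -I 10.0.10.11   -N 255.255.255.0 -t static  # management (routable)
esxcli network ip interface ipv4 set -i vmk1 -I 172.16.20.11 -N 255.255.255.0 -t static  # vSAN (non-routable)
esxcli network ip interface ipv4 set -i vmk2 -I 172.16.30.11 -N 255.255.255.0 -t static  # vMotion (non-routable)
```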

The configuration and setup of these VMkernel port groups, as well as the vCenter prerequisites, are not described in this document, since they are basic tasks for any vSphere deployment.

Witness Host:

The witness terminology comes from the clustering world. Taking Microsoft Clustering (MSCS) as an example, a simple two-node cluster also has the concept of a quorum or witness disk. When one node fails, or there is a split-brain scenario where the nodes continue to run but can no longer communicate with each other, the remaining node in the cluster reserves the witness disk. It therefore wins the quorum, and either continues to provide the clustered service or takes over running the service from the node that failed.

Witnesses play an important role in ensuring that more than 50% of the components of an object remain available. If less than 50% of the components of an object are available across all the nodes in a vSAN cluster, that object will no longer be accessible on the vSAN datastore.
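The 50% rule can be illustrated with a toy calculation (this is not a VMware tool, just a sketch of the rule): with one replica on each data node plus a witness component, losing a single node still leaves 2 of 3 votes reachable, so the object stays accessible.

```shell
#!/bin/sh
# Toy illustration of the quorum rule: an object stays accessible only while
# strictly more than 50% of its votes/components are reachable.
votes_total=3   # replica on node 1 + replica on node 2 + witness component
votes_alive=2   # node 2 down: the replica on node 1 and the witness survive

if [ $((votes_alive * 2)) -gt "$votes_total" ]; then
    echo "object accessible"      # 2 of 3 votes > 50%: the surviving copy serves I/O
else
    echo "object inaccessible"
fi
```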

The witness can be configured and deployed as an OVA appliance or as an ESXi host; in this lab I used the second method and configured a nested ESXi host with the proper port groups and local disks (01 HDD + 1 SSD).

vSAN network considerations

Static Route

During the setup of my lab, and as shown in the architecture diagram above, I encountered a connectivity issue between my 02 nodes and the witness host: the vSAN network on the witness appliance can't reach the vSAN network(s) of the data sites (and vice versa). In ESXi, there is only one default gateway, typically associated with the management network. The storage networks, including the vSAN traffic networks, are normally isolated from the management network, meaning that there is no route to the vSAN network via the default gateway of the management network.

The solution is to use static routes in the current version of VSAN. Add a static route on each of the ESXi hosts on each data site to reach the VSAN network on the witness site. Similarly add static routes on the witness host so that it can reach the VSAN network(s) on the data sites. Now when the ESXi hosts on the data sites need to reach the VSAN network on the witness host, it will not use the default gateway, but whatever path has been configured on the static route. The same will be true when the witness needs to reach the data sites.
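The commands involved might look like the following; the network addresses and gateways here are placeholders for this lab's subnets.

```shell
# On each data node: route to the witness host's vSAN network via a gateway
# reachable from the local vSAN VMkernel (placeholder addresses).
esxcli network ip route ipv4 add -n 192.168.100.0/24 -g 172.16.20.1
# On the witness host: the mirror-image route back to the data-site vSAN network.
esxcli network ip route ipv4 add -n 172.16.20.0/24 -g 192.168.100.1
# List the routing table to verify both entries:
esxcli network ip route ipv4 list
```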

Witness Traffic Separation (WTS)

This is a very important feature to set up in a 2-node vSAN cluster. It is an alternate VMkernel for traffic destined for the witness host, separate from the directly connected vSAN-tagged VMkernel. It is important to note that you cannot tag witness traffic via the web client; you must open an SSH session to each of the data nodes in the cluster and configure the witness tag from the command line. This is not configured on the witness host, only on the data nodes. You can use a VMkernel created specifically for this purpose or a pre-existing one. Either way, running the following command will configure tagged witness traffic on the VMkernel:

esxcli vsan network ip add -i vmk0 -T=witness

Finally, to confirm that witness traffic is tagged on both hosts, check the vSAN network configuration on each node.
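On each data node, this can be done from the same SSH session:

```shell
# List the vSAN-enabled VMkernel interfaces; the witness-tagged vmk is shown
# with "Traffic Type: witness" in the output.
esxcli vsan network list
```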

In Part 02 we will detail, step by step, how to configure the 02-node vSAN cluster.

KEMP Load Balancer

Today I ran my first lab deploying the KEMP load balancer, in order to prepare a lab for a new Exchange deployment.

1 Introduction

The Kemp Virtual LoadMaster is a version of the Kemp LoadMaster that runs as a virtual machine within a hypervisor and can provide all the features and functions of a hardware-based LoadMaster.

This document describes the installation of the Virtual LoadMaster (VLM) within a VMware hypervisor environment.


The Virtual LoadMaster is VMware ready. Starting with LoadMaster Operating System (LMOS) version 7.2.50:

  • The VMware VLM is delivered as a hardware version 10 virtual machine. You can upgrade to a higher virtual machine number as needed.
  • Virtual LoadMaster is supported with:
    • VMware ESXi 5.5 and above
    • vCenter Server 5.5 and above

There are several different versions of the VLM available. Full details of the currently supported versions are available on our website: www.kemptechnologies.com.

The VMware virtual machine guest environment for the VLM, at minimum, must include:

  • 2 x virtual CPUs (reserve 2 GHz)
  • 2 GB RAM
  • 16 GB disk space (sparse where possible)

There may be maximum configuration limits imposed by VMware such as maximum RAM per VM, Virtual NICs per VM and so on. For further details regarding the configuration limits imposed by VMware, please refer to the relevant VMware documentation.

For more details, please follow this link:

https://support.kemptechnologies.com/hc/en-us/articles/203123629

VMware Horizon View VDI LAB (PART 1)

  1. Introduction

VMware Horizon View is the leading VDI solution for desktop virtualization. In this blog post I will detail how to deploy a VMware Horizon environment on a VMware vSAN cluster, using a Windows 10 template as the master image for the desktop VMs. As a VMware product, Horizon View integrates with the VMware vCenter/vSphere infrastructure, so in this lab we assume that the vSphere infrastructure is already configured. Also, since Horizon View is used to provide MS Windows desktop workloads, the other Windows infrastructure components should be ready as well: Active Directory, DNS, and DHCP servers.

2. VMware Horizon Infrastructure

The Horizon View infrastructure consists of the following servers:
  1. VMware Horizon Connection Server
  2. VMware Horizon Security Server
  3. VMware Horizon Database Server
  4. VMware Horizon Composer
  5. VMware Horizon Unified Access Gateway
Other roles may be needed for more advanced configurations, such as integration with VMware Workspace ONE.

3. Prerequisite for VMware horizon

  • vCenter 6.7 server with a cluster of ESXi hosts for VDI.
  • Create 05 Windows Server 2019/2016 machines for Horizon View (Connection Server master + replica, Security Server, database server, and Composer server).
  • All machines renamed and joined to the domain.
  • vCenter server reachable from the Composer and Connection servers (FQDN of vCenter).
  • Service account to install the Horizon Connection Server and Composer (Adminhorizon).
  • SQL sa account with access to the database, for Composer DB owner permissions.
  • Login credentials of the vCenter server with admin access.
  • OU for VDI creation, with delegated permissions on the domain for the service account to create and delete computer objects.
  • Create one Windows 10 VM for the linked clone VDI master image on the vCenter server.
  • Create another Windows 10 VM for the instant clone VDI master image on vCenter.
  • DHCP scope for the Windows 10 VDIs tested and working.

4. VMWare Horizon Connection server installation

Log in to the connection server with the service account and run the Connection Server installer.
Select the installation folder.
Select the installation option.
Enter the server FQDN.
Provide the recovery password.
Provide the service account and the Horizon admin groups.

5. Installing the second Horizon Connection Server

The second server's installation steps are very similar to the primary's, except for the two screens below. Log in to the second server, run the exe as admin, and select Replica Server in the installation options.
Enter the FQDN of the primary Horizon Connection Server.

6. Installing the Security Server

The Security Server is optional, since it is only used by remote users connecting from their personal laptops/desktops through the VMware Horizon Client, and it sits in the DMZ.

7. Installing and configuring the database server

After installing a database server from the official SQL Server package with the installation account (Cloudz\AdminSQL), we create an SQL database, then create an ODBC connection from the Composer server to the database server.
Step 1: On the SQL Server, in Management Studio, go to Security – Logins – add a new login.
Step 2: Create a SQL local sa account. VMware Composer doesn't support domain accounts; only SQL local accounts are supported.
Step 3: Right-click on Databases – New Database.
Step 4: Provide the new database name, click on Owner, and select the sa account created in Step 2.
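As a hypothetical scripted equivalent of these steps (the post performs them in Management Studio), the same can be done with sqlcmd; the server name, login name, database name, and both passwords below are placeholders, not values from this lab.

```shell
# Create the local SQL login (Composer does not accept domain accounts):
sqlcmd -S SQLSERVER01 -U sa -P 'SaPassword!' -Q "CREATE LOGIN composer_sa WITH PASSWORD = 'C0mp0serPwd!';"
# Create the Composer database:
sqlcmd -S SQLSERVER01 -U sa -P 'SaPassword!' -Q "CREATE DATABASE ViewComposerDB;"
# Make the new login the database owner:
sqlcmd -S SQLSERVER01 -U sa -P 'SaPassword!' -Q "ALTER AUTHORIZATION ON DATABASE::ViewComposerDB TO composer_sa;"
```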

8. Installing the Horizon Composer Server

On the Composer Windows server, open Windows Administrative Tools – ODBC, then create an ODBC link to the database server set up in Step 7.
Verify that you are logged in with a valid domain service account.
Provide the DSN name created earlier, along with the SQL sa account name and password, then click Next.

9. Unified Access Gateway (UAG) deployment

The configuration of the Unified Access Gateway (UAG) will be presented in a separate blog entry, together with a load balancer solution. In the second part of this lab, I will explain how to configure Horizon View and the desktop pools.

Server Virtualization

Money doesn’t grow on trees and neither does your business. It’s time to get smart about server utilization. Virtualization can deliver value to your company by reducing the number of physical servers needed to run your workloads. Consolidating workloads conserves server resources and ultimately leads to a reduction in IT spending. Additionally, a well designed virtualization strategy provides the benefits of:
  • High Availability
  • Disaster Recovery
  • Decreased backup windows
  • Fast provisioning of new servers
  • Better server management
  • Meeting corporate green initiatives.
Learn about the immediate ROI value of Server Containment, the ongoing savings that can be maximized with Server Consolidation, and the risk reduction from Virtualized Disaster Recovery and Business Continuity.
Are You Ready to Get Started?

Server virtualization offers tremendous value and a return on investment for most organizations in less than 1 year, but there is risk in migrating production servers to virtualization.  These risks can be managed with:
  • The proper planning of your datacenter virtual infrastructure
  • Assessing which servers are good candidates for virtualization
  • Designing the storage, servers, switches, and software
  • Evaluating your software options – VMware ESX, Citrix XenServer, Microsoft Hyper-V
  • Properly Implementing the solution
  • Measuring the results of an Energy Saving Datacenter with reduced IT spending on servers, Rack Space, and Server Management.
Virtualization Advisors has developed an ecosystem of tested and supported products and services called Server Virtualization Maximum that eliminates the risk, expense, time, and guesswork of developing a custom solution. Server Virtualization Maximum has been tested for compatibility, is fully supported, and tuned for maximum results.
  • Storage
  • Switches
  • Servers
  • Software
  • Services
You can select the combination that best suits your needs and budget while having the peace of mind that it will all work as designed.

Start a conversation with Virtualization Advisors if you are looking for a partner that can help guide you through the virtualization process. Use our professional consulting advice based on practical experience to maximize your savings and return on investment. You can leverage our expertise to avoid the pain and frustration that can arise from doing it yourself.