In 2020/2021 I had the chance to participate in the #VMware #VMworld digital event online as a partner and to benefit from the live sessions and Hands-On Labs. This year VMware opened this big world event to people interested in data center virtualization and cloud technologies from all over the world, because of the pandemic situation.
In total I received two lovely VMware T-shirts and a cute backpack with the VMware logo and label, but the coolest thing is the hiking kit, which gave me the idea to start hiking.
I also received a pin badge to add to my LinkedIn profile and certification.
The Agile mindset values failure as an information source: people with this mindset set a goal of constant learning, embrace challenges, and respond to them with resilience and a willingness to adapt until their objectives are met. In direct contrast, the fixed mindset sees failure as simply failure. For them, the goal is always to appear successful, and failure makes them feel incompetent, so they try to avoid challenges.
DevOps is a collaborative approach to software development, characterised by mature agile delivery. Its purpose is to minimise the time and resources taken to deliver software.
DevOps uses the following foundations:
Kaizen (continuous improvement) philosophy
Cost/benefit analysis
Automation
Systems thinking/collaboration
Note: There is a strong overlap/synergy with agile, although agile can also be used within a more traditional structure.
Agile Methodology
Agile delivery incorporates a mix of practices and processes to deliver software and other products in a way that is adaptive, incremental and iterative. This in turn supports early business value and low-cost change.
Agile Values:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Developing a solution
When developing a solution within an Agile project, a number of key practices are followed to ensure fast and continuous feedback on the quality of the evolving product. Some of the common key practices have been listed below.
Automation
Automate as much as possible! Automating tasks within an agile project helps because they are often repeated numerous times.
Examples include:
Continuous Integration: Code is continually integrated into the existing product (build) and tested to ensure it integrates. This allows the quality of the build to be maintained.
Automated Testing: This helps to reduce risk and manual effort, and also provides early feedback on the quality of the product being developed. It can be used for unit testing, integration testing and functional testing.
Quality
Since it isn’t possible to exhaustively test everything at the end, we should build in quality at each step. This means the Development Team is collectively responsible for producing code that works and could be released to production.
Collaboration
The best code comes from teams who collaborate regularly to discuss the best ways to build the system and check each other's work. For example, eXtreme Programming (XP) values collaboration and hence has established Pair Programming as a practice to be used for technical work.
Continuous testing
Agile methods expect the team to test their code as they write it. Test-Driven Development (TDD) is where a developer writes a test first and then writes the code to make the test pass. This helps to focus the design and coding precisely on what the code must do, and ensures that it does it.
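As a minimal illustration of TDD in Python (the function name and its behaviour are hypothetical, chosen purely for this example): the test is written first and fails, then just enough code is written to make it pass.

```python
import unittest

# Step 1: write the test first. Running it before the function exists fails,
# which is exactly the signal TDD expects.
class TestNormalizeHostname(unittest.TestCase):
    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(normalize_hostname("  IDM1.Cloudz.Local "), "idm1.cloudz.local")

# Step 2: write just enough code to make the test pass.
def normalize_hostname(name: str) -> str:
    """Return a canonical lower-case hostname with surrounding whitespace removed."""
    return name.strip().lower()

if __name__ == "__main__":
    unittest.main()
```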
Refactoring of the code
This refers to redesigning existing code after it has been implemented. If it is discovered that the initial design can be improved to reduce future maintenance, or to enable reusing the code with new Backlog Items, the code should be refactored.
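As a tiny, hypothetical illustration, a refactoring might extract duplicated logic into one reusable helper without changing behaviour:

```python
# Before: the same URL-building logic is duplicated for every service.
vra_url = "https://" + "vra.cloudz.local".strip().lower() + "/health"
idm_url = "https://" + "idm.cloudz.local".strip().lower() + "/health"

# After: the duplication is refactored into a single function,
# which future Backlog Items can also reuse.
def health_url(fqdn: str) -> str:
    """Build the health-check URL for a given appliance FQDN."""
    return f"https://{fqdn.strip().lower()}/health"

vra_url = health_url("vra.cloudz.local")
idm_url = health_url("idm.cloudz.local")
```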
In this last part, I cover the load balancer configuration. There are no specific requirements for selecting a load balancer platform for the vRealize Automation and Identity Manager clusters; the majority of load balancers available today support complex web servers and SSL.
F5 LTM (Local Traffic Manager) was chosen for this lab due to its ease of deployment, popularity, stability, performance and capability to handle SSL sessions, but other load balancers such as Kemp or the open source HAProxy could be selected. The following parameters should be considered when configuring the load balancer:
FQDN/IP addresses have been created for both vRA and IDM on the DNS server.
A root self-signed or custom CA certificate has been created for both vRA and IDM.
Since we will pass through SSL, there is no need to import the certificate to the load balancer; if SSL is instead terminated at the load balancer, additional steps are needed (see the verification sketch after this list).
For more information about configuring F5 LTM and integrating it with VMware vRealize products, please refer to the VMware/F5 documentation.
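Once the VIPs configured later in this part are up, SSL passthrough can be confirmed from any client: the VIP should present the appliance's own certificate rather than one installed on the F5. A quick sketch using only the Python standard library (the VIP names are this lab's):

```python
import ssl

# With SSL passthrough, the VIP hands back the backend appliance's
# certificate unchanged. Fetch it from each VIP to confirm.
for host in ("idm.cloudz.local", "vra.cloudz.local"):
    pem = ssl.get_server_certificate((host, 443))
    print(f"{host}: retrieved certificate, {len(pem)} bytes of PEM")
```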
Workspace ONE IDM F5 LB VIP configuration
Configure Custom Persistence Profile
Persistence profiles are custom profiles created on the LTM in order to manage sessions and cookies.
Log in to the LTM and select Local Traffic > Profiles > Persistence.
Click Create and create a custom persistence profile named WS1-persistance with the following parameters.
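For readers who prefer to script this step, here is a minimal sketch using F5's iControl REST API from Python. It assumes a source-address persistence profile (a common choice behind a Layer 4 VIP); the BIG-IP address, credentials and timeout value are placeholders, not values from this lab.

```python
import requests

BIGIP = "https://bigip.cloudz.local"  # placeholder management address
AUTH = ("admin", "password")          # placeholder credentials

# Create a custom source-address persistence profile named WS1-persistance.
resp = requests.post(
    f"{BIGIP}/mgmt/tm/ltm/persistence/source-addr",
    auth=AUTH,
    verify=False,  # lab only: the BIG-IP presents a self-signed certificate
    json={
        "name": "WS1-persistance",
        "defaultsFrom": "/Common/source_addr",
        "timeout": "1800",  # assumed session timeout in seconds
    },
)
resp.raise_for_status()
```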
Configure Health Monitors
Create a few Health Monitors to ensure all URLs are checked properly for availability
Log in to the LTM and from the main menu select Local Traffic > Monitors.
Click Create and provide a health monitor named WS1-Monitor with the following settings.
Set the Send String to GET /health HTTP/1.0\r\n\r\n, set the Interval to 3 and the Timeout to 10, and leave the rest at their defaults.
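The same monitor can be created through the iControl REST API. A minimal sketch under the same placeholder assumptions as above; an HTTPS monitor type is assumed here, since SSL is passed through and the probe itself should be encrypted:

```python
import requests

BIGIP = "https://bigip.cloudz.local"  # placeholder management address
AUTH = ("admin", "password")          # placeholder credentials

# Create the WS1-Monitor health monitor with the send string, interval and
# timeout described above; everything else keeps its default value.
resp = requests.post(
    f"{BIGIP}/mgmt/tm/ltm/monitor/https",
    auth=AUTH,
    verify=False,  # lab only
    json={
        "name": "WS1-Monitor",
        "send": "GET /health HTTP/1.0\\r\\n\\r\\n",  # literal \r\n, as BIG-IP stores it
        "interval": 3,
        "timeout": 10,
    },
)
resp.raise_for_status()
```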
Configure Server Pools
Server pools contain the members or nodes that will receive traffic.
Log in to the LTM load balancer and select Local Traffic > Pools, then click Create and name the pool WS1-Pool.
Select the health monitor WS1-Monitor created in the last step.
Enter each pool member IP address as a New Node and add it to the New Members list.
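A hedged REST sketch of the same pool creation, reusing the placeholders above and taking the three IDM node addresses from the addressing table in Part I as the assumed members:

```python
import requests

BIGIP = "https://bigip.cloudz.local"  # placeholder
AUTH = ("admin", "password")          # placeholder

# Create WS1-Pool, attach the WS1-Monitor health monitor and add the three
# IDM nodes (port 443, SSL passthrough) as pool members in one call.
resp = requests.post(
    f"{BIGIP}/mgmt/tm/ltm/pool",
    auth=AUTH,
    verify=False,  # lab only
    json={
        "name": "WS1-Pool",
        "monitor": "/Common/WS1-Monitor",
        "members": [
            {"name": "192.168.253.81:443"},
            {"name": "192.168.253.82:443"},
            {"name": "192.168.253.83:443"},
        ],
    },
)
resp.raise_for_status()
```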
Configure Virtual Servers
Virtual servers contain the virtual IP address (VIP) for the pools of nodes that will be accessed
Log in to the LTM load balancer and select Local Traffic > Virtual Servers, then click Create.
Create a virtual server for IDM named WS1-VS-443 as shown in the picture below.
Select the type Performance (Layer 4), not Layer 7, since we are using the pass-through method.
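Again as a hedged sketch under the same placeholder assumptions, the equivalent REST call: a fastL4 profile gives the Performance (Layer 4) type, the destination is the IDM VIP from the table in Part I, and the SNAT automap setting is an assumption of this sketch:

```python
import requests

BIGIP = "https://bigip.cloudz.local"  # placeholder
AUTH = ("admin", "password")          # placeholder

# Create the WS1-VS-443 virtual server on the IDM VIP as Layer 4 (fastL4),
# forwarding to WS1-Pool with the WS1-persistance persistence profile.
resp = requests.post(
    f"{BIGIP}/mgmt/tm/ltm/virtual",
    auth=AUTH,
    verify=False,  # lab only
    json={
        "name": "WS1-VS-443",
        "destination": "/Common/192.168.253.80:443",
        "ipProtocol": "tcp",
        "profiles": [{"name": "/Common/fastL4"}],
        "pool": "/Common/WS1-Pool",
        "persist": [{"name": "WS1-persistance"}],
        "sourceAddressTranslation": {"type": "automap"},  # assumed SNAT setting
    },
)
resp.raise_for_status()
```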
vRA F5 LB VIP configuration
Similarly to the IDM load balancer configuration, we now configure the vRA profiles on the F5 LTM.
Configure Health Monitors
Create a few Health Monitors to ensure all URLs are checked properly for availability
Log in to the LTM and from the main menu select Local Traffic > Monitors.
Click Create and provide the required health monitor for vRA as shown in the following picture.
Configure Server Pools
Server pools contain the members or nodes that will receive traffic.
Log in to the LTM load balancer and select Local Traffic > Pools, then click Create.
Select the health monitor named vRA-health created in the last step.
Enter each pool member as a New Node and add it, with its IP address, to the New Members list.
Configure Virtual Servers
Virtual servers contain the virtual IP address (VIP) for the pools of nodes that will be accessed
Log in to the LTM load balancer and select Local Traffic > Virtual Servers, then click Create.
Create a virtual server named vRA-virtualServer as shown in the picture below.
Select the type Performance (Layer 4), not Layer 7, since we are using the pass-through method.
Select the vRA-Pool created in the previous step.
Lastly, check that the virtual servers created above for both vRA and IDM are UP (green).
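The same check can be scripted rather than done in the GUI. A minimal sketch that reads each virtual server's availability state from the REST stats endpoint, reusing the placeholder host and credentials assumed in the earlier sketches:

```python
import requests

BIGIP = "https://bigip.cloudz.local"  # placeholder
AUTH = ("admin", "password")          # placeholder

for vs in ("WS1-VS-443", "vRA-virtualServer"):
    # The stats endpoint nests its values; pull out status.availabilityState.
    resp = requests.get(
        f"{BIGIP}/mgmt/tm/ltm/virtual/{vs}/stats",
        auth=AUTH,
        verify=False,  # lab only
    )
    resp.raise_for_status()
    for entry in resp.json()["entries"].values():
        state = entry["nestedStats"]["entries"]["status.availabilityState"]["description"]
        print(f"{vs}: {state}")  # expect 'available' (green/UP)
```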
This brings us to the end of the deployment of vRealize Automation through Lifecycle Manager.
In the next blog posts we will discuss the initial configuration of vRealize Automation and the load balancer.
Having arrived at the last stage of the vRealize Automation deployment, we discuss the scenario for deploying the vRA tenant from Lifecycle Manager (LCM) and the integration of vRA with VMware Identity Manager (IDM).
Before starting the deployment, make sure the IP address and FQDN prerequisites are ready, and create a root self-signed or a custom CA-signed certificate for the vRA deployment.
Similarly to vIDM, we start by creating a vRA certificate: create or import a product certificate in the Locker with all of the existing and new product component host names and the load balancer host name.
Detailed Steps
From the LCM home page, select Create Environment and fill in the information about the vRA environment: name, admin account, password and destination data center.
The next stage is the vRA package version selection and deployment type (standard or clustered). Note that the vRA OVA can be downloaded online from My VMware using LCM, or downloaded manually and then copied to the data folder on the LCM virtual machine.
Before starting, be sure to obtain a license key from VMware.
Next, select the vRA certificate created earlier.
Next, select the destination data center.
PS: It is possible to install vRA in a data center different from the one used for LCM/IDM if required; just add the new vCenter to the data centers menu in LCM.
Provide the network details for the vRA environment: network VLAN, DNS, NTP.
During deployment we had some issues and problems with NTP; from my personal experience, use host time synchronization and postpone configuring NTP until after the end of the deployment.
Provide the IP configuration details for the vRA VIP FQDN. A couple of things to remember here; look carefully at the option SSL Terminated at Load Balancer (we discuss the load balancer in Part V):
Tick this check box if using a Layer 7 load balancer.
Untick this check box if using a Layer 4 TCP load balancer.
Next, provide the IP configuration, FQDNs and VM names for the three vRA nodes.
Review your configuration and press Submit to start the deployment.
The deployment starts and takes about an hour to finish.
The new vRA deployment is up and running. This concludes the deployment part.
Now you can start the customization of vRA from the VIP FQDN https://vra-fqdn/ or from the catalog item in the VMware IDM portal, using the local admin account password.
The next step is the integration of vIDM with Active Directory and starting the Quick Start configuration of vRA, which we will discuss in detail in future posts.
After finishing the deployment of the first component, vRSLCM, we can start operations on it. The first step is to scale the Identity Manager (IDM) node already deployed in Part II from a standard to a clustered deployment.
It was possible to skip the deployment of the first Identity Manager node during the deployment of LCM and postpone the creation of the IDM cluster from scratch until this step, but the recommended way is to start with one node and then scale to a cluster of three nodes.
Before starting the operation, it is necessary to create the virtual server for the VIP address on the load balancer; in Part V we discuss in detail the steps needed for both IDM and vRA.
Note: As explained in Part I, we will use a TCP-based load balancer (Layer 4) with SSL passthrough selected; if SSL inspection is used instead (Layer 7), ensure that SSL Terminated at Load Balancer is checked during deployment.
Generate an IDM Self-Signed Certificate
Click on the Locker icon in the LCM menu.
Click Generate in order to create a new self-signed certificate for the IDM cluster.
Fill in the form with the requested certificate information:
Certificate name, owner, organization, country code, the FQDNs of the three nodes plus the VIP, and the IP addresses of the three nodes plus the VIP (please follow the correct FQDN/IP order).
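LCM's Locker generates this certificate for you, but as a rough sketch of what is being produced, here is an equivalent self-signed SAN certificate built with Python's cryptography package. The node FQDNs and IPs are this lab's values; the organization and country attributes are placeholders:

```python
import datetime
import ipaddress

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# VIP first, then the three nodes, keeping the FQDN/IP pairs in the same order.
FQDNS = ["idm.cloudz.local", "idm1.cloudz.local", "idm2.cloudz.local", "idm3.cloudz.local"]
IPS = ["192.168.253.80", "192.168.253.81", "192.168.253.82", "192.168.253.83"]

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
subject = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "idm.cloudz.local"),
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, "cloudz"),  # placeholder owner/org
    x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),           # placeholder country
])
san = x509.SubjectAlternativeName(
    [x509.DNSName(f) for f in FQDNS]
    + [x509.IPAddress(ipaddress.ip_address(i)) for i in IPS]
)
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(subject)  # self-signed: issuer equals subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=825))
    .add_extension(san, critical=False)
    .sign(key, hashes.SHA256())
)
print(cert.public_bytes(serialization.Encoding.PEM).decode())
```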
After creating the certificate, the first step before starting the scale-up operation is to apply the created certificate to the existing IDM node.
From the LCM menu, go to Environments and select the Global environment workspace (the default deployed during the LCM start-up).
From the three dots (…) in the upper right corner, select Replace Certificate.
It shows the current self-signed certificate, created for only one node during the deployment of LCM, which must be changed to the IDM cluster certificate created above.
Select the new cluster certificate and wait for LCM to apply it to the current node. The procedure takes a while, and don't forget to tick the Take Snapshot option so you can recover the node in case of failure.
IDM Scale-Up and Cluster Creation
After applying the new IDM certificate, it is now time for the cluster creation.
From the LCM menu, under Environments > Global environment, select the IDM workspace deployment.
Click on the (+) Add Component button to add the remaining nodes for the cluster.
An information alert prompts you to synchronize the IDM components before the operation.
The next step shows the overall configuration of the vIDM deployment and asks you to tick the snapshot creation option before the operation.
At this step, provide the VIP FQDN for the IDM cluster (notice that in the LCM deployment of the first IDM node we used the first node's FQDN).
The VIP IP address is configured on the load balancer, so don't confuse it with the IP address requested below, which is a secondary IP address for the PostgreSQL database cluster.
Click on the Add Component button to add the nodes.
Provide the VM names, IP configuration and FQDNs for the second and third IDM nodes respectively.
The next screen shows the required steps to follow before the validation step, regarding the SSL certificate, load balancer, NTP and FQDN, as explained before.
The deployment process for the IDM cluster creation starts.
At this step an error message appears. This is in fact normal behavior: the first IDM node was deployed with its own FQDN, and now that we are in a cluster, the FQDN must be changed to point to the cluster VIP FQDN accordingly.
When we click Retry, it shows us the required cluster FQDN.
From the IDM1 node web page, select Configuration; it redirects you to a second web page and asks for the default installation password.
From the configuration menu, select the Identity Manager FQDN, replace the idm1 FQDN with the cluster idm FQDN and click Save.
The validation takes a while, and the installation process can now be restarted.
Once the deployment of the cluster is finished, we can check from the LCM menu that the IDM cluster has been deployed with three connectors.
These connectors will be used later during the synchronization with Active directory
Now that the Workspace ONE Identity Manager cluster has been deployed, we can start the configuration of the Active Directory integration, which will be discussed in future posts.
In the previous part (Part I) we saw the overall architecture design and components of the vRealize suite and the deployment scenarios (standard and clustered).
In this section we will look at the deployment of the first component, vRealize Lifecycle Manager. In this scenario we will deploy only one Identity Manager appliance, and we will show how to extend it to an IDM cluster in Part III.
Detailed Steps
Before starting the deployment, ensure that you use DNS servers which can properly resolve the DNS records.
Download the LCM Easy Installer ISO from the My VMware web site and mount it as a virtual CD/DVD. Go to the desired OS folder and run installer.exe.
E.g. in my case it was a Windows machine, so I went to E:\vrlcm-ui-installer\win32\installer.exe.
Click Install to start a new deployment, or Migrate to migrate from an earlier version of vRealize.
Accept the EULA license, then provide the target details where the vRSLCM, vIDM and vRA appliances will be deployed: vCenter FQDN, user and password.
Select the target VM folder location, the cluster and the datastore where the appliance VMs will be placed.
At this step, provide the common network details which will be used for all three types of appliances: vRSLCM, vIDM and vRA.
Network, IP Assignment, Subnet Mask, Default Gateway, DNS Server, Domain Name
Network settings
Provide the default password which will be used by Lifecycle Manager for installations.
Provide the IP configuration details for the vRealize Lifecycle Manager appliance; the FQDNs and IPs must be created on the DNS server before starting the deployment.
Configure the IP/FQDN settings for vIDM. You can skip the installation of vIDM here if you want to deploy clustered vIDM appliances; by default, vRSLCM deploys a single vIDM appliance.
In Part III we will configure the three clustered vIDM appliances.
You may skip the installation of vRealize Automation at this stage, since we will deploy it after the LCM deployment in Part IV (of course, you can deploy it in this step if you want).
Review your configuration and press Submit to start the deployment.
You can follow the progress in real time at https://<FQDN-of-vRSLCM>/vrlcm. After vRSLCM is powered up, log in to it and go to Requests; you will be able to see the Global environment vIDM deployment.
In the next blog post we will deploy the vRealize Identity Manager nodes in a cluster in detail (Part III).
It's been a long time since I last updated this blog; I was busy preparing for some certifications and clearing them. I have been updating a deployment lab with the latest version of the vRealize suite 8.6 (vRA, vRO, vROps and vRLI) to prepare for the certifications and to build some PoCs for future projects.
In this blog post, we will look at the architecture of vRA 8.x, the high-level steps, and a detailed step-by-step clustered deployment of vRealize Automation 8.6. We will also cover issues and problems encountered during deployment.
vRealize Suite Architecture:
The vRealize Automation (vRA) 8.x architecture has changed from the previous 7.x deployments. There is no longer a separate deployment of IaaS Manager, Web Server, SQL Database, Agent VMs, etc.; the product is now based on Kubernetes containers, in one VM or in three VMs for cluster-based deployments.
The following components are now an integral part of the vRealize Automation deployment based on Lifecycle Manager; the first three are directly related to vRA automation:
vRealize Lifecycle Manager
VMware Identity Manager
vRealize Automation
vRealize Operations (vROps)
vRealize Log Insight
vRealize Business for Cloud
vRealize Network Insight
There are two methods for deploying vRealize Automation products:
The Easy Installer method performs an automated deployment of vRSLCM, vIDM and vRA. It provides the functionality to install vRealize Automation 8.x with a minimum of steps.
Manual installation of vRealize Automation through OVA or ISO; however, this method is not supported.
Here is a list of the components as shown in vRealize Lifecycle Manager:
Deployment Types
As in previous versions, there is either a standard deployment or a clustered deployment, to support small or large environments accordingly. Let's take a brief look at these deployments:
Standard deployment
In small deployments, only a single instance of vRealize Lifecycle Manager, Identity Manager and the Automation appliance is deployed. There is no need for load balancer VIPs.
Clustered deployment
In a large deployment, high availability of the environment is achieved through multiple instances of vRealize Identity Manager and vRealize Automation appliances behind load balancer VIPs.
Clustered deployment of vRA
| Product | High Availability Support |
| --- | --- |
| vRealize Lifecycle Manager | Does not support a highly available deployment. |
| VMware Identity Manager | Content is replicated in a VMware Identity Manager cluster. Deploy a cluster behind a load balancer to enable high availability. |
| vRealize Automation | Content is replicated in a vRealize Automation cluster. Deploy a cluster behind a load balancer to enable high availability. |
In this lab we will focus on the clustered deployment, since it is the most used in production environments and has many issues and tips to take into consideration.
The following components are required for a large deployment:
Identity Manager Appliance Load Balanced VIP
vRealize Automation Appliance Load Balanced VIP
vRealize Lifecycle Manager Appliance
vRealize Identity Manager Appliance x 3
vRealize Automation Appliance x 3
As a reference during this lab, we will use the following names and IP addresses, which must be created and checked on a reachable DNS server:
A static IPv4 address
An FQDN that can be resolved both forward and in reverse through the defined DNS server.
| Service/Role | Node IP address | Node FQDN | VIP IP | VIP FQDN |
| --- | --- | --- | --- | --- |
| VMware Lifecycle Manager | 192.168.253.85 | lcm.cloudz.local | - | - |
| VMware Identity Manager | 192.168.253.81 | idm1.cloudz.local | 192.168.253.80 | idm.cloudz.local |
| | 192.168.253.82 | idm2.cloudz.local | | |
| | 192.168.253.83 | idm3.cloudz.local | | |
| vRealize Automation | 192.168.253.86 | vra1.cloudz.local | 192.168.253.89 | vra.cloudz.local |
| | 192.168.253.87 | vra2.cloudz.local | | |
| | 192.168.253.88 | vra3.cloudz.local | | |
Sizing
The following system resources are required to install the vRealize Automation 8.x appliances.
Pre-requisites for a Large Deployment
For clustered deployments, a load balancer is a must. I used the F5 BIG-IP LTM virtual edition because it has an official procedure for vRealize suite products.
If using a TCP-based load balancer, ensure SSL passthrough is selected; if SSL inspection is used instead, ensure that SSL Terminated at Load Balancer is checked during deployment.
DNS resolution should be working fine. All A and PTR records must be created and should resolve (see the check sketch after this list).
NTP should be in sync. It is better to use an NTP server, but if using host-based time sync, ensure there is no time drift between the ESXi hosts.
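As a quick pre-flight check, here is a small sketch (Python standard library only) that verifies the forward (A) and reverse (PTR) records from the addressing table above; the names and IPs are this lab's values:

```python
import socket

# FQDN -> expected IP, taken from the lab addressing table above.
RECORDS = {
    "lcm.cloudz.local": "192.168.253.85",
    "idm.cloudz.local": "192.168.253.80",
    "idm1.cloudz.local": "192.168.253.81",
    "idm2.cloudz.local": "192.168.253.82",
    "idm3.cloudz.local": "192.168.253.83",
    "vra.cloudz.local": "192.168.253.89",
    "vra1.cloudz.local": "192.168.253.86",
    "vra2.cloudz.local": "192.168.253.87",
    "vra3.cloudz.local": "192.168.253.88",
}

for fqdn, expected_ip in RECORDS.items():
    try:
        ip = socket.gethostbyname(fqdn)        # forward lookup (A record)
        name, _, _ = socket.gethostbyaddr(ip)  # reverse lookup (PTR record)
    except OSError as exc:
        print(f"{fqdn}: lookup failed ({exc})")
        continue
    status = "OK" if ip == expected_ip and name == fqdn else "MISMATCH"
    print(f"{fqdn}: forward={ip}, reverse={name} -> {status}")
```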
Deployment Steps
We spread the deployment of the components over four separate parts; in each part we will discuss the deployment and the issues encountered in detail.
In the next post we will start with the first component and follow, step by step, the deployment of the vRealize suite components from the Easy Installer ISO package, which you can download from the official My VMware download page.
Quick graphic detailing the minimum number of nodes and the capacity usage for each FTT (Failures to Tolerate) and FTM (Fault Tolerance Method) within VMware vSAN.