vCloud Availability 3.0 – Working with the vCAV API

This entry is part 6 of 6 in the series vCloud Availability 3.0

One of the most welcome additions in vCloud Availability (vCAV) 3.0 is a public API which exposes much of the platform capability to automation and orchestration. In particular, Service Providers can relatively easily extract statistics on the number of replicated VMs, the type of replication (ongoing protection or one-off migrations), the storage consumed by Point-In-Time instances, and replication status.

It should also be possible (I have yet to test this) to fully configure and re-configure replication via the vCAV API – something definitely on my list to test further.

VMware provides two Python scripts in the vCAV appliances (usage-report.py and storage-report.py) which provide some good base information, but having to log in to the appliances and run these locally under the vCAV appliance root user account isn’t ideal – as a Service Provider I’d like to be able to remotely interrogate the API and retrieve information on configured replications for billing and service monitoring purposes.

VMware has published the vCAV public API specification at https://code.vmware.com/apis/441/vcav but at this stage I’m unsure of its exact status – some conversations with VMware staff have indicated that it is not ‘officially released or supported’. Undeterred, I decided to see what could be done with this API and wrote a small PowerShell module that makes it easier to consume using vCloud Director session credentials rather than relying on ‘root’ user access to the appliances themselves.

Note: I have definitely noticed some inconsistencies between the API usage in the VMware scripts and what is currently documented on code.vmware.com. In some cases this prevented API calls from working, and I had to reverse-engineer the correct syntax from c4cli.py on the appliances. This may also explain why the public vCAV API is not yet officially supported.
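To illustrate what remote consumption of the API with vCloud Director credentials can look like, here is a minimal Python sketch using only the standard library. The endpoint paths (/sessions, /vm-replications) and the X-VCAV-Auth header are taken from the published (not officially supported) spec, so treat them as assumptions to verify against your appliance version; the host name is a placeholder:

```python
import base64
import json
import ssl
import urllib.request

# Placeholder vCAV public API endpoint - substitute your own.
VCAV_API = "https://akl.vca.cloudprovider.com"


def basic_auth(user: str, org: str, password: str) -> str:
    """Build the HTTP Basic 'Authorization' value for a vCD 'user@org' login."""
    token = base64.b64encode(f"{user}@{org}:{password}".encode()).decode()
    return f"Basic {token}"


def vcav_login(base: str, user: str, org: str, password: str) -> str:
    """POST /sessions and return the X-VCAV-Auth token used on later calls.

    Endpoint and header names are per the unofficial published spec.
    """
    ctx = ssl._create_unverified_context()  # lab appliances use self-signed certs
    req = urllib.request.Request(
        f"{base}/sessions", method="POST",
        headers={"Authorization": basic_auth(user, org, password)})
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.headers["X-VCAV-Auth"]


def list_vm_replications(base: str, token: str):
    """GET /vm-replications - replications visible to the authenticated session."""
    ctx = ssl._create_unverified_context()
    req = urllib.request.Request(
        f"{base}/vm-replications", headers={"X-VCAV-Auth": token})
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)
```

Usage would be along the lines of `token = vcav_login(VCAV_API, "administrator", "System", password)` followed by `list_vm_replications(VCAV_API, token)` to pull replication details for billing or monitoring.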

The results of my experimentation and development have been published to GitHub here: https://github.com/jondwaite/PowerVCAV and I’ve also made the PowerVCAV module available in the PowerShell Gallery so that it can be easily installed using the PowerShell Install-Module cmdlet. Note that PowerVCAV relies on connection information from PowerCLI and the Connect-CIServer cmdlet, so these are required.

PowerVCAV consists of 6 cmdlets to assist in managing vCAV session connections and allow easy querying and consumption of the vCAV API.

I’ve included documentation for each of the cmdlets in the GitHub repo README, together with some examples of the connection process and syntax.

Hopefully this module will prove useful to others who need to work with the vCAV API and provide a foundation for being able to build queries against this.

As always, comments and feedback are appreciated, and if you have any suggestions for improvements feel free to log a request against the GitHub repo.

Jon

vCloud Availability 3.0 – Protecting & Migrating VMs

This entry is part 5 of 6 in the series vCloud Availability 3.0

In the 5th part in this series of posts on vCloud Availability 3.0 I wanted to try something new, so I’ve made a short (~10 min) video showing the configuration of VM replication between an on-premise vSphere environment and a Cloud Provider service using the vSphere client plugin and the vCloud Director tenant portal.

This is a bit of an experiment for me as an alternative to long pages of screen grabs so I’d love to know what you think and if I should do more in this format. The video was captured/uploaded at 1080p so probably best viewed in that quality to be able to read details properly.

I’m working on some followup videos showing how VM failover and failback works, and how to protect/migrate VMs between 2 Cloud Provider platforms – let me know what else you’d like to see too.

Jon.

vCloud Availability 3.0 – On-Premise Deployment & Configuration

This entry is part 4 of 6 in the series vCloud Availability 3.0

In the first 3 parts of this series I detailed the configuration and deployment of vCAv 3.0 into a service provider site. In this 4th part I show the deployment and configuration of vCAv into a tenant on-premise infrastructure. This allows appropriately configured tenants to protect on-premise VMs to a cloud provider as well as protect cloud-hosted VMs back to their own on-premise infrastructure.

In this configuration I will use the already-configured Christchurch lab cloud environment as the endpoint and configure vCAv into an on-premise infrastructure at my test ‘Tyrell’ tenant environment, which has its own small vSphere SDDC based on vCenter, ESXi and VSAN storage. Note that NSX networking is not required in the tenant site, and only outbound tcp/443 (https) connectivity is needed between the tenant and cloud provider sites, which makes vCAv almost trivially easy for customers to deploy into their datacenters.

The VMware documentation page for deploying vCAv at a tenant/on-premise site is available here; the process below shows the deployment of the vCAv tenant appliance and its configuration, once deployed, to connect to a cloud provider site.

Download the OnPrem version of the vCAv appliance from my.vmware.com. Note that this is a different image from the one used to deploy a Cloud Provider site and is listed under the ‘Drivers & Tools’ tab in my.vmware.com:

Login to the on-premise vCenter and select the option to deploy an OVF Template, select the downloaded OnPrem appliance:

Specify the VM name to be assigned to the vCAv OnPrem appliance and select the datacenter where it will be deployed:

Next select the vSphere cluster where the vCAv appliance will run:

The next screen allows you to review the template details prior to deployment:

You must accept the VMware EULA before proceeding:

Choose the datastore and storage policy for the vCAv OnPrem appliance deployment (in this example I only have a single VSAN datastore to select from):

Select the network for the vCAv appliance to connect on and the IP assignment type:

Assign the initial ‘root’ user password (as with the Cloud appliance, this will be forced to change on first login to the appliance UI) and configure appropriate networking settings – IP address, subnet mask, default gateway, DNS servers, DNS domain and NTP server:

Review the summary screen and click ‘Finish’ to initiate the appliance deployment:

Once the appliance deployment has completed, you can power it on and access the configuration UI at https://<appliance-deployed-ip> to continue. Once you have logged in and changed the ‘root’ user password you will see the screen below:

Click to run the initial setup wizard and enter a name for the on-premise site:

Enter the Lookup service address and authentication details for the local (on-prem) vCenter/PSC infrastructure (in this example I’m once again using vCenter with an embedded PSC):

Next specify the URI for the public API endpoint of the cloud-provider instance of vCloud Availability (not the vCloud Director public endpoint) and provide login credentials for the tenant organization within that cloud environment. You will need to confirm the presented SSL certificate as the connection is made.

Note: If you select the ‘Allow Access from Cloud’ option, then administrators in the Cloud Provider will gain capability in the local vCenter/vSphere environment – here’s what the VMware documentation says on this:

“By selecting this option you allow the cloud provider and the organization administrators to execute the following operations from the vCloud Availability Portal without authenticating to the on-premises site.

  • Discover on-premises workloads and replicate them to the cloud.
  • Reverse existing replications to the on-premises site.
  • Replicate cloud workloads to the on-premises site.

By leaving this option deselected, only users authenticated to the on-premises vCloud Availability Portal can configure new replications and existing replications cannot be reversed from the vCloud Availability Portal.”

Make sure you understand the implications of selecting this option (or not selecting this option) when configuring the appliance and set it appropriately. In my lab environment I enabled the setting.

Confirm / deny participation in VMware CEIP in the next screen:

In the confirmation screen, you can use the slider to continue to the ‘Configure local placement’ dialogs on completion of the initial OnPrem appliance configuration. If you choose, you can complete this process separately later, but in this example I chose to continue and configure local VM placement immediately:

The placement configuration allows you to select the environment that will be used by VMs replicated from the cloud to the OnPrem environment, in the first screen select the VM folder destination where the VMs will appear:

Next select the compute cluster where the VMs will be registered:

Next select the default network to which the replicated VMs will be attached:

Select the vSphere Datastore where the replicated VM disks will be stored:

Finally review the supplied details and complete the placement configuration:

The ‘Configuration’ tab in the appliance should now show the configured values as shown below:

Clicking the ‘System Monitoring’ tab should show connectivity to all services and to the remote Service Provider cloud:

Signing into the local vCenter environment should now display the banner (shown below) as the vCAv plugin is registered into the local vCenter:

Clicking the ‘Refresh Browser’ button and going to the ‘Home’ link in vCenter will now show a new menu entry for vCloud Availability:

vCloud Availability tab in vCenter HTML5 UI post-installation & Configuration of the OnPrem vCAv Appliance

Selecting the ‘vCloud Availability’ link in vCenter will open a new panel showing the vCloud Availability interface:

That completes the configuration of an on-premise connection to vCloud Availability – at this point we can replicate VMs in both directions between the Cloud Service Provider infrastructure and our own vSphere cluster. Although there are quite a few steps in the process, I hope you can see that the configuration of the OnPrem appliance is actually very straightforward. A particular advantage compared to previous deployments with products such as vCloud Availability for vCloud Director and vCloud Extender is that no inbound firewall or NAT rules are required in the vCAv OnPrem configuration with v3.0.

This concludes the 4th part of this series looking at vCloud Availability 3.0. In the next part, now that we have both Cloud-to-Cloud (Parts 2 & 3) and OnPrem-to-Cloud (this part) configurations completed, I’ll look at configuring VM replication protection and failover/migration of replicated VMs.

As always, corrections, comments and feedback welcome!

Jon

vCloud Availability 3.0 – Site Pairing & vCAv Policies

This entry is part 3 of 6 in the series vCloud Availability 3.0

The first 2 parts of this series covered the overall vCloud Availability (vCAv) architecture and the deployment and configuration of the vCAv appliances into a Cloud Provider site. Before continuing to pair sites and configure VM replication policies, first check that all services are online and showing as healthy.

The easiest place to do this is the ‘System Monitoring’ screen in the vApp Replication Manager portal (in my lab this is https://10.207.0.44/ui/admin for the Auckland site and https://10.200.0.44/ui/admin for the Christchurch site). The resulting panels look like this (the ‘Local replicators’ tab has been expanded in both sites):

System Monitoring screens for both sites prior to site pairing

Note: If you have changed the SSL certificate for the vApp Replication Manager portal as mentioned at the end of my previous post, you may see the ‘Tunnel connectivity’ item showing red with a ‘requires authentication’ error. If so, simply access the configuration tab, click ‘Edit’ next to the ‘Tunnel address’ and provide the appliance password when prompted as shown in the screen below:

Re-authenticate Tunnel Service after SSL certificate change

Note: I had a number of instances in my lab setup where the ‘Network’ entry (arrowed & boxed in green above) for the vApp Replication Manager had changed to be the public URL for vCAv. If this occurs you will not be able to pair sites as the tunnel appliance will redirect the ‘management’ traffic back to itself. To fix this, edit the entry and point this back to the internal name/IP of the vApp Replication Manager appliance and re-enter the appliance password.

Based on my experiences in testing, I strongly suggest at this stage that you do not continue attempting to pair vCAv cloud sites until you have resolved any issues and have all System Monitoring links showing as Green/Ok. I had a number of issues in my early lab attempts to configure vCAv which would likely have been avoided if I’d done this…

At this point you also need to make sure that your public API endpoint is added to your firewall and NAT configuration so that internet access to your vCAv public API address is passed to port 8048 on the Tunnel appliance. Once configured properly, accessing the public vCAv API address from a browser should show the vCAv portal login screen:

Checking that your public vCAv URI is accessible
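Beyond the browser check, a quick way to sanity-check the NAT/firewall path before attempting site pairing is a simple TCP reachability test against the public endpoint (and against port 8048 on the Tunnel appliance internally). A minimal sketch – the host names you pass in would be your own:

```python
import socket


def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `can_reach("akl.vca.cloudprovider.com", 443)` from the internet side and `can_reach("10.207.0.46", 8048)` from the management network (Auckland lab values) should both return True before you attempt pairing.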

Pairing vCloud Availability Cloud Provider Sites

To pair the vCAv sites, first log out of any vCAv portals and go to the user login at https://<IP-address-of-vApp-Replication-Manager>/ui/login (as opposed to /ui/admin). The username field will show ‘user@org’ instead of the ‘root’ appliance login presented by the /ui/admin portal. Log in with your vCloud Director provider credentials (e.g. ‘administrator@system’).

Once logged in, selecting the ‘Sites’ option should show a screen similar to the following:

Site Pairing – Check that the endpoint URL matches your vCAv public URL

Click ‘New Pairing’ and provide the details for the 2nd vCAv site (since I’m configuring the site pair on my Auckland site, I enter the details for the Christchurch site):

Use the vCAv public URL from the partner site

Accept the SSL certificate from the paired site and you should be able to successfully pair the sites; the Sites window should now look like this:

Site Pairing Completed

Note: It is only necessary to perform this pairing on one site; provided you specify the remote appliance credentials, vCAv will automatically associate the sites in both directions.

If we select the 2nd (C00-Christchurch) site, a login button appears allowing us to authenticate to the 2nd site as our vCloud Director provider admin user (administrator@system). After successful authentication the Sites tab shows a management session to both sites:

If you now check the ‘System Monitoring’ tab, you should see that both the local (to the site you are logged in to) and remote vCAv Replicators are shown and connected:

Note: The odd ‘Address’ shown for the Remote replicator (boxed in red above) is correct – this is an internal address used to reach the remote replicator appliance via the vCAv Tunnel.

vCloud Availability Policies

Now that we have 2 paired vCAv sites, we can configure policies and assign these to vCloud Director tenants to allow these tenants to configure protection for their VMs.

Again working in the vApp Replication Manager portal (https://10.207.0.44/ui/login in my lab environment) and signed in with our vCloud Director provider account, we can see the vCAv policies under the ‘Policies’ tab. The default vCAv policy, assigned to all vCloud tenants, forbids any replications:

Creating a new vCAv Policy

Selecting the ‘New’ button allows us to configure a new policy:

Configuring a new vCAv Policy

Replication can be allowed in either direction, and the maximum number of retained instances (snapshots) per VM replication can be set between 1 and 24. The minimum allowed RPO (set to 4 hours in this example, but configurable from 5 minutes to 24 hours) prevents users of the policy from configuring smaller RPOs than the policy defines.
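If you drive policy creation through automation, these limits are worth checking client-side before submitting values. A small pre-flight sketch based on the constraints just described (function names are my own invention):

```python
# Policy limits as described above: 1-24 retained instances per replication,
# and a minimum RPO between 5 minutes and 24 hours.
MIN_RPO_MINUTES = 5
MAX_RPO_MINUTES = 24 * 60


def validate_policy(retained_instances: int, min_rpo_minutes: int) -> list:
    """Return a list of problems with the proposed policy values (empty = valid)."""
    problems = []
    if not 1 <= retained_instances <= 24:
        problems.append("retained instances must be between 1 and 24")
    if not MIN_RPO_MINUTES <= min_rpo_minutes <= MAX_RPO_MINUTES:
        problems.append("minimum RPO must be between 5 minutes and 24 hours")
    return problems
```

For example, `validate_policy(24, 240)` (24 snapshots, 4-hour minimum RPO – the values shown above) returns an empty list, while a 3-minute RPO would be flagged.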

Note: You will need to configure and assign a policy in both sites (Auckland and Christchurch in my lab environment) to permit configuration of VM replication. Policies do not automatically replicate between cloud sites, neither do assignments of policies to organizations/tenants.

With the newly created policy selected, we can use the ‘Assign’ link to add tenant organizations to the policy as shown below:

Assigning vCAv policy to Organization

The popup dialog that appears on clicking ‘Assign’ allows one (or many) vCloud tenant Organizations to be associated with the specified policy.

Note: A vCloud Organization can only ever be assigned to one vCAv policy at a time; if a tenant is assigned to a new policy, any previous assignment for that tenant is removed. In the screen below I assign the new policy to the ‘Tyrell’ tenant organization in vCloud Director:

Assigning vCAv Policy to a vCloud Tenant Organization

In order for the tenant to be able to configure VM replication, a policy must be defined and assigned to the organization in both sites (to have resources at both sites, the tenant must also be defined and have virtual datacenter resources assigned in both sites). In my lab setup the ‘Tyrell’ organization has VDCs assigned in both sites and I have also configured and assigned an identical vCAv policy in the 2nd site.

That is all that is required in order for tenants to be able to configure VM replication in their vCloud Director portal and perform migration and VM failover between sites.

In the next post in this series I will detail the configuration steps to link an on-premise vSphere environment to connect to a vCloud Service Provider site using vCAv.

vCloud Availability 3.0 – Cloud Deployment & Configuration

This entry is part 2 of 6 in the series vCloud Availability 3.0

Deployment Configuration

In my lab environment I have two SP datacenter locations (Auckland and Christchurch, since I’m in New Zealand) and a complete vCloud infrastructure running in each location. I defined the appliance names and IP addresses and registered them in DNS before starting deployment, as this simplifies the configuration later. My lab sites happen to have network connectivity via a VPN, but this is not important for vCAv as all network communication between the sites will be via the Tunnel appliances and the external (public) network.

Note: This was one of the first issues that I encountered when building the environment – I assumed that replication traffic would be capable of using internal networking between the replicator appliances, but this is not the case in the current release of vCAv and all communication must use the Tunnel appliances’ public network.

In order to deploy vCAv into a production-like configuration, 3 appliances are required in each vCloud site. Since my lab configuration spans 2 sites I will need a total of 6 appliances. While the vCloud Availability documentation covers deploying the appliances in the vCenter UI well, I found it much easier (and more reproducible when testing) to use a DOS batch file to deploy the appliances using VMware OVFTool. In my lab environment I defined the following names and IP addresses for the appliances:

Site 1 (Auckland):

  • vdev-a03-vcam (deployment type: cloud) – 10.207.0.44
    vCA Replication Manager: https://10.207.0.44:8441/ui/admin
    vCA vApp Replication Manager: https://10.207.0.44/ui/admin
  • vdev-a03-vcar01 (deployment type: replicator) – 10.207.0.45
    vCA Replicator: https://10.207.0.45/ui/admin
  • vdev-a03-vcat (deployment type: tunnel) – 10.207.0.46
    vCA Tunnel: https://10.207.0.46/ui/admin

Site 2 (Christchurch):

  • vdev-c00-vcam (deployment type: cloud) – 10.200.0.44
    vCA Replication Manager: https://10.200.0.44:8441/ui/admin
    vCA vApp Replication Manager: https://10.200.0.44/ui/admin
  • vdev-c00-vcar01 (deployment type: replicator) – 10.200.0.45
    vCA Replicator: https://10.200.0.45/ui/admin
  • vdev-c00-vcat (deployment type: tunnel) – 10.200.0.46
    vCA Tunnel: https://10.200.0.46/ui/admin

I then used 6 copies of the following file (saved with a .cmd extension on a Windows admin machine) to deploy the appliances, changing the variable assignments as appropriate. If using this, change the relevant parameters to suit your environment, as well as the file locations of ovftool.exe and the vCloud Availability deployment .OVA file.

Note: OVFTOOL is extremely sensitive to syntax, so make sure you carefully check the entries provided. Also note that if any passwords contain certain special characters (single and double quotation marks in particular) this can cause OVFTOOL issues, and you may need to use an alternative administrative account that does not have these characters in its password.

Note: If the appliances deploy but their consoles show that no networking is configured this most likely means that one or more of the parameters supplied are not in the correct format (in particular, don’t use single-quote marks around values as shown in the example deployment for Linux in the VMware documentation).

The script will create a log file ‘<VM name>-deploy.log’ in the folder it is run from, showing the results of the ovftool command for troubleshooting.

@echo off

::Appliance deployment details:
SET DEPLOYTYPE=<One of 'cloud', 'replicator' or 'tunnel' (without ' marks) depending on appliance function>
SET VMNAME=<name for the VM>
SET VMIP=<IP address for the VM>
SET ROOTPASS=<Initial root password on the appliance - will be forced to change on first login>

::File locations for vCAv and OVFTOOL.EXE:
SET VCAIMAGE="%HOMEPATH%\Downloads\VMware-vCloud-Availability-3.0.0.3736-13174385_OVF10.ova"
SET OVFTOOL="C:\Program Files\VMware\VMware OVF Tool\ovftool.exe"

::Target vCenter:
SET VIHOST=<vCenter host name>
SET VIUSER=<vCenter admin user - e.g. administrator@vsphere.local>
SET VIPASS=<vSphere Password>
SET VILOCATOR=<vCenter Locator - e.g. C00/host/DEVCLU-C00>

::Storage & Networking for Appliance:
SET VMDS=<vCenter Datastore for appliance>
SET VMNET=<vCenter Network name for appliance>
SET NTPSERV=<NTP Server IP address for appliance>
SET DNSSERV=<DNS Server(s) for appliance - comma separated>
SET DNSDOMAIN=<DNS Domain Name for appliance>
SET IPGATEWAY=<Default IP Gateway for appliance>
SET IPNETMASK=<Subnet Mask for appliance network>

%OVFTOOL% --name="%VMNAME%" --datastore="%VMDS%" --acceptAllEulas^
 --powerOn --X:enableHiddenProperties --X:injectOvfEnv --X:waitForIp^
 --ipAllocationPolicy=fixedPolicy --deploymentOption=%DEPLOYTYPE% --machineOutput^
 --noSSLVerify --overwrite --powerOffTarget "--net:VM Network=%VMNET%"^
 --diskMode=thin --X:logFile=%VMNAME%-deploy.log --X:logLevel=verbose^
 --prop:guestinfo.cis.appliance.root.password=%ROOTPASS%^
 --prop:guestinfo.cis.appliance.ssh.enabled=True^
 --prop:guestinfo.cis.appliance.net.ntp=%NTPSERV%^
 --prop:vami.DNS.VMware_vCloud_Availability=%DNSSERV%^
 --prop:vami.domain.VMware_vCloud_Availability=%DNSDOMAIN%^
 --prop:vami.gateway.VMware_vCloud_Availability=%IPGATEWAY%^
 --prop:vami.ip0.VMware_vCloud_Availability=%VMIP%^
 --prop:vami.netmask0.VMware_vCloud_Availability=%IPNETMASK%^
 --prop:vami.searchpath.VMware_vCloud_Availability=%DNSDOMAIN%^
 %VCAIMAGE%^
 "vi://%VIUSER%:%VIPASS%@%VIHOST%/%VILOCATOR%"

As the syntax is so fiddly, I’ve included a (working) example of the script used to deploy the ‘cloud’ appliance in the Christchurch site below, unedited apart from password redaction:

@echo off

::Appliance deployment details:
SET DEPLOYTYPE=cloud
SET VMNAME=vdev-c00-vcam
SET VMIP=10.200.0.44
SET ROOTPASS=<Redacted>

::File locations for vCAv and OVFTOOL.EXE:
SET VCAIMAGE="%HOMEPATH%\Downloads\VMware-vCloud-Availability-3.0.0.3736-13174385_OVF10.ova"
SET OVFTOOL="C:\Program Files\VMware\VMware OVF Tool\ovftool.exe"

::Target vCenter:
SET VIHOST=vdev-c00-vc01.vdev.local
SET VIUSER=administrator@vsphere.local
SET VIPASS=<Redacted>
SET VILOCATOR=C00/host/DEVCLU-C00

::Storage & Networking for Appliance:
SET VMDS=CHC-VSAN-Perf
SET VMNET=CHC-Mgmt
SET NTPSERV=10.200.0.20
SET DNSSERV=10.200.0.10,10.207.0.10
SET DNSDOMAIN=vdev.local
SET IPGATEWAY=10.200.0.1
SET IPNETMASK=255.255.255.0

%OVFTOOL% --name="%VMNAME%" --datastore="%VMDS%" --acceptAllEulas^
 --powerOn --X:enableHiddenProperties --X:injectOvfEnv --X:waitForIp^
 --ipAllocationPolicy=fixedPolicy --deploymentOption=%DEPLOYTYPE% --machineOutput^
 --noSSLVerify --overwrite --powerOffTarget "--net:VM Network=%VMNET%"^
 --diskMode=thin --X:logFile=%VMNAME%-deploy.log --X:logLevel=verbose^
 --prop:guestinfo.cis.appliance.root.password=%ROOTPASS%^
 --prop:guestinfo.cis.appliance.ssh.enabled=True^
 --prop:guestinfo.cis.appliance.net.ntp=%NTPSERV%^
 --prop:vami.DNS.VMware_vCloud_Availability=%DNSSERV%^
 --prop:vami.domain.VMware_vCloud_Availability=%DNSDOMAIN%^
 --prop:vami.gateway.VMware_vCloud_Availability=%IPGATEWAY%^
 --prop:vami.ip0.VMware_vCloud_Availability=%VMIP%^
 --prop:vami.netmask0.VMware_vCloud_Availability=%IPNETMASK%^
 --prop:vami.searchpath.VMware_vCloud_Availability=%DNSDOMAIN%^
 %VCAIMAGE%^
 "vi://%VIUSER%:%VIPASS%@%VIHOST%/%VILOCATOR%"

Once the appliances are deployed and started, signing in to the admin URI listed above forces a password change for the ‘root’ appliance user; this must be completed on each appliance.

Note: The ‘root’ account is shared between the two administrative portals that run on the ‘cloud’ (vApp Replication Manager) appliance, so it only needs to be changed once here:

Changing appliance root password

The VMware documentation has very good guides for configuring the appliances once deployed; I’ve included screenshots below showing the relevant steps. For each step I’ve shown the generic (documentation) URI and the specific URI in my lab for the Auckland site, as it can get confusing which administrative console you should actually be using. I’ve also linked each step to the relevant section of the VMware documentation to make it easier to follow.

Step 1 – Configure vCloud Availability Replication Manager
Admin Link: https://<vApp-Replication-Manager-IP-address>:8441/ui/admin
Lab Link: https://10.207.0.44:8441/ui/admin (vdev-a03-vcam)

Since my lab uses vCenter servers with embedded Platform Services Controllers (PSC), the Lookup Service address is actually on the vCenter server. You will need to confirm the Lookup Service certificate to configure this setting.

Step 1 – Configured Lookup service address in vCAv Replication Manager

Step 2 – Configure a vCloud Availability vApp Replication Manager
Admin Link: https://<vApp-Replication-Manager-IP-address>/ui/admin
Lab Link: https://10.207.0.44/ui/admin (vdev-a03-vcam)

Step 2 – Run initial setup wizard
Step 2 – Enter site name and public API endpoint

Note: The Public API endpoint in this dialog should be set to the public DNS name which will eventually be used by your tenants to access vCAv from the internet. This should be different to the URI used to access the vCloud Director portal (e.g. ‘akl.vca.cloudprovider.com:443’).

Step 2 – Configure connection to Lookup service (accept certificate)
Step 2 – Configure vCloud Director API endpoint (accept certificate)
Step 2 – Enter vCAv 3.x License key
Step 2 – Confirm participation in CEIP
Step 2 – Completion / summary screen

After completing the wizard, clicking the ‘System Monitoring’ tab should show a screen similar to the one shown below. At this stage the two warnings for Tunnel connectivity and Configured replicators are normal/expected, as we haven’t completed these steps yet.

Step 2 – Post initial setup wizard

Step 3 – Configure vCloud Availability Replicator Appliance
Admin Link: https://<vApp-Replicator-Appliance-IP-address>/ui/admin
Lab Link: https://10.207.0.45/ui/admin (vdev-a03-vcar01)

Step 3 – Configure Replicator Lookup Service

Once configured (and the certificate accepted), you should see the Replicator appliance System Monitoring screen similar to below:

Step 3 – Replicator appliance with Lookup service configured

Step 4 – Register a vCloud Availability Replicator with a vCloud Availability Replication Manager in the Same Site
Admin Link: https://<vApp-Replication-Manager-IP-address>:8441/ui/admin
Lab Link: https://10.207.0.44:8441/ui/admin (vdev-a03-vcam)

Step 4 – Select ‘Replicators’ option then ‘New’
Step 4 – Completing the New Replicator settings

Note: Configure port 8043 on the replicator appliance – the VMware documentation shows port 8440 for this (presumably from a ‘combined’ appliance deployment). When you click ‘Add’ you will need to accept the certificate from the Replicator appliance.

Step 4 – Replication Manager with Replicator appliance added

Step 5 – Configure vCloud Availability Tunnel
Admin Link: https://<Tunnel-Appliance-IP-address>/ui/admin
Lab Link: https://10.207.0.46/ui/admin (vdev-a03-vcat)

Step 5 – Configure Lookup service on Tunnel Appliance

After configuring the Lookup Service, check that the System Monitoring tab shows connectivity:

Step 5 – Checking Tunnel Appliance Lookup service connectivity

Step 6 – Enable vCloud Availability Tunnel
Admin Link: https://<vApp-Replication-Manager-IP-address>/ui/admin
Lab Link: https://10.207.0.44/ui/admin (vdev-a03-vcam)

Step 6 – vApp Replication Manager console

Selecting the ‘Configuration’ tab brings up the following screen:

Note: If you are placing the Tunnel appliance behind a NAT firewall (recommended) and using DNAT port-translation from tcp/443 (externally) to tcp/8048 (internally on the Tunnel appliance), you should click ‘Edit’ on the ‘Public API endpoint’ and update it to reflect the external port (443) at this stage. This configuration lets tenants/users reach the vCAv portal externally on port 443 and avoids them needing to open any additional outbound firewall ports.

Step 6 – Edit Tunnel settings
Step 6 – Configuring the Tunneling settings

Accept the certificate when prompted to save the tunnel configuration.

Step 7 – Restart Services
Admin Link: https://<vApp-Replication-Manager-IP-address>/ui/admin
Lab Link: https://10.207.0.44/ui/admin (vdev-a03-vcam)
Admin Link: https://<vApp-Replicator-Appliance-IP-address>/ui/admin
Lab Link: https://10.207.0.45/ui/admin (vdev-a03-vcar01)

As mentioned in the VMware documentation and in the warning on the tunnel configuration dialog shown above, you must now restart all vCAv services on the local site vApp Replication Manager and Replicator appliances – simply login to each appliance and under ‘System Monitoring’ click the ‘Restart Service’ button:

Step 7 – Restart Services

When accessing vCloud Availability inside the vCloud Director portal, the SSL certificate used to render the plugin data will originate from the vCloud Availability vApp Replication Manager portal. For this reason, it is a good idea at this stage to replace the self-signed certificate generated when the appliance is deployed with a ‘proper’ SSL certificate which is registered to the public URI that vCAv is using.

e.g. If the Public API for vCloud Availability is ‘akl.vca.cloudprovider.com’ then you should reconfigure the vApp Replication Manager portal to use an SSL certificate which is valid for akl.vca.cloudprovider.com.

The process to reconfigure the SSL certificate in the vApp Replication Manager portal is described in the VMware documentation.

Important Note for Wildcard SSL Certificates: If you are using wildcard SSL certificates (e.g. *.cloudprovider.com), you CANNOT use these when configuring the vApp Replication Manager portals in multiple Service Provider sites. This is because the site-pairing operation checks the SSL certificate thumbprint used in each site and will refuse to pair sites if the same thumbprint is detected at both. Use dedicated SSL certificates at each site when configuring multiple vCAv cloud endpoints.
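Before pairing, you can confirm the two sites present different certificate thumbprints. A minimal sketch using only the Python standard library (function names are my own; SHA-1 is assumed here as the thumbprint algorithm, and it must be run from a machine that can reach both portals):

```python
import hashlib
import ssl


def thumbprint(der_bytes: bytes) -> str:
    """SHA-1 thumbprint of a DER-encoded certificate, as uppercase hex."""
    return hashlib.sha1(der_bytes).hexdigest().upper()


def site_thumbprint(host: str, port: int = 443) -> str:
    """Fetch the certificate a site presents and return its thumbprint."""
    pem = ssl.get_server_certificate((host, port))
    return thumbprint(ssl.PEM_cert_to_DER_cert(pem))


def safe_to_pair(site_a: str, site_b: str) -> bool:
    """False if both sites present the same thumbprint - the condition
    the vCAv site-pairing operation refuses."""
    return site_thumbprint(site_a) != site_thumbprint(site_b)
```

For example, `safe_to_pair("akl.vca.cloudprovider.com", "chc.vca.cloudprovider.com")` returning False would indicate the same (e.g. wildcard) certificate is in use at both sites and pairing will fail.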

The next part of this series will detail pairing the 2 deployed Service Provider instances, and how VM replication policies can be defined and assigned to cloud tenants to allow them to start protecting their VMs.

vCloud Availability 3.0 – Introduction

This entry is part 1 of 6 in the series vCloud Availability 3.0

VMware has recently released version 3.0 of vCloud Availability (vCAv) (Release Notes), which allows vCloud Service Providers to offer a variety of VM protection and migration services to their tenant customers. vCAv 3.0 combines features previously available in 3 separate VMware products (vCloud Availability Cloud-to-Cloud DR, vCloud Availability for vCloud Director and vCloud Extender) and allows you to:

  • Protect/replicate and failover VMs to/from on-premise vSphere environments to a vCloud Service Provider.
  • Protect/replicate and failover VMs between 2 virtual datacenters provided by a vCloud Service Provider (these would generally be in 2 distinct geographic locations).
  • Migrate VMs to/from on-premise vSphere environments and a vCloud Service Provider.
vCloud Availability 3.0 Functions (Image is (c)VMware 2019)

vCloud Availability 3.0 (vCAv) also supports advanced functionality usually reserved for products such as VMware Site Recovery Manager (SRM), for example allowing VM network information to be changed during failover to ensure VMs can connect to the destination network when failed-over or migrated. The tenant administrative portal is tightly integrated into VMware vCloud Director, allowing full control of VM replication tasks in the same interface tenants use to administer their virtual machines.

Service Providers can define policies and apply these on a per-tenant basis to control items such as:

  • How many customer VMs can be replicated (a fixed number of VMs or ‘unlimited’).
  • What the minimum configurable RPO interval is for VM replication (as low as 5 minutes for vSphere 6.5+ environments and up to 24 hours).
  • How many snapshots of each VM can be retained (from 1 to 24).
vCloud Availability Policy Definition

Since the release of vCAv 3.0 I’ve been deploying and testing the solution components. This is the first part in a series of posts designed to emulate a complete ‘real-world’ deployment consisting of 2 distinct cloud provider sites and a ‘customer’ on-premises infrastructure, so I can detail all of the deployment, configuration and end-user usage scenarios across these.

To configure a production-realistic environment, I have deployed separate vCAv appliances for the ‘cloud’, ‘replicator’ and ‘tunnel’ functions; a typical service provider network layout with the ports used by vCAv for communication is shown in the diagram below. Note that in an actual production implementation the ‘tunnel’ appliance would generally be deployed into a DMZ network, with the ‘cloud’ (Replication Manager and vApp Replication Manager) and ‘replicator’ appliances deployed into the Service Provider management network.

vCloud Availability 3.0 Network Architecture & Ports

This concludes the first post in this series. In future posts I aim to cover:

  • Deployment and configuration of vCAv appliances into a Cloud Service Provider
  • Pairing Cloud Provider Sites, Defining VM replication policies and assigning these to tenants
  • On-premise deployment and configuration into a customer vSphere cluster
  • Protecting / replicating VMs from Cloud to Cloud, On-Premise to Cloud and Cloud to On-Premise (migration, failover and failback)
  • Monitoring and Troubleshooting vCloud Availability services
  • Conclusions, References and further reading

As always, corrections, comments and feedback are appreciated.

Jon.