vCloud Director Extender – Part 5 – Stretch Networking (L2VPN)

In this 5th part of my look into vCloud Director Extender (CX), I deal with the extension of a customer vCenter network into a cloud provider network using the L2VPN network extension functionality. Apologies that this post has been a bit delayed; it turned out that I needed a VMware support request and a code update to vCloud Director 9.0.0.1 before I could get this functionality working. (I also had an issue with my lab environment, which runs as a nested platform inside a vCloud Director environment – the networking setup I had wasn’t quite flexible enough to get this working.)

Update: an earlier version of this article didn’t include the steps to configure the L2 appliance settings in the vCloud Director Extender web interface – I’ve now added these to provide a more complete guide.

Links to the other parts of this series:
Part 1 – Overview
Part 2 – Cloud Provider / Service Provider installation and configuration (MyCloud)
Part 3 – Customer / Tenant installation and configuration (Tyrell)
Part 4 – Customer / Tenant connecting to a Cloud Provider and Virtual Machine migration (Tyrell)

I won’t deal here with the use case where the customer already has NSX networking installed and configured, since in most cases you can then simply create L2VPN networks directly between the customer and provider NSX Edge appliances and don’t really need the CX L2VPN functionality.

In order to use the standalone L2VPN connectivity, the following prerequisites must be met:

  • A tenant vSphere environment with the vCloud Director Extender appliance deployed. (It does not appear to be necessary to deploy the replication appliance if you only wish to use the L2VPN functionality, but if you intend to migrate VMs too you will obviously need this deployed and configured as described in Part 3 of this series.) In either case you will still need to register the cloud provider in the CX interface.
  • A configured vCloud Director VDC for the tenant to connect to. This environment must also have an Advanced Edge Gateway deployed with at least one uplink having a publicly accessible (internet) IP address. Note that you do not need to configure the L2VPN service on this gateway – the CX wizard completes this for you.
  • At least one OrgVDC network created as a subinterface on this edge gateway. The steps to create a suitable new OrgVDC network are detailed below.
  • Outbound internet connectivity to allow the standalone edge deployed in the tenant vCenter to communicate with the cloud-hosted edge gateway – only port 443/tcp is required for this (a quick way to pre-check this is sketched after this list).
  • Administrative credentials to connect to both the tenant vCenter and the cloud tenancy/VDC (Organization Administrator role is required).
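If you want to sanity-check the outbound connectivity requirement before deploying anything, a quick test from a Windows machine on the same tenant network segment might look like the following sketch (the provider address is a placeholder for your cloud provider’s public CX endpoint):

# Placeholder - substitute your provider's public CX/edge endpoint
$providerEndpoint = 'cx.provider.example.com'

# Confirm outbound HTTPS (443/tcp) connectivity
Test-NetConnection -ComputerName $providerEndpoint -Port 443 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded

This only proves the path from that machine rather than from the standalone edge itself, but it will catch an obviously blocked outbound firewall before you spend time deploying appliances.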

Opening the tenant vCenter environment and selecting the ‘Home’ page shows the following:

Selecting the vCloud Director Extender icon opens the CX interface:

If you have not yet configured the L2 appliance settings, selecting the ‘DC Extensions’ tab will show the following error:

To fix this, open the vCloud Director Extender web interface in a browser at https://&lt;ip address of deployed cx appliance&gt;/, log in, and select the ‘DC Extensions’ tab:

Select the ‘Add Appliance Configuration’ option and complete the form to provide the deployment parameters where the standalone NSX edge appliance will be deployed:

The ‘Uplink Network Pool IP’ setting is a bit strange – it appears to be asking for a network pool or IP range, but the ‘help text’ in the field asks for a single IP address. The validation on this field is also a bit odd – it will accept virtually any input (even random strings) without complaining, but deployment obviously won’t work with bad values. What you need to do is add individual IPv4 addresses and click the ‘Add’ button for each. You will need one address for each stretched network you will be extending to your cloud platform. In this example I am only extending a single network so have added a single IPv4 address (192.168.0.201).

Once you click the ‘Create’ button you will be returned to the ‘DC Extensions’ tab and shown a summary of the L2 appliance configuration:

Note that there doesn’t appear to be any way to edit an existing L2 Appliance configuration, so if you need to change settings (e.g. to add additional uplink IP pool addresses) you will likely need to delete and recreate the entire entry.


Next we need to add a new ‘subinterface’ network to our hosted Edge gateway appliance. Logging in to our cloud provider portal, we select the ‘Administration’ tab and the ‘Org VDC Networks’ sub-option; clicking the ‘Add’ button shows the dialog to create a new Org VDC network. We need to select ‘Create a routed network by connecting to an existing edge gateway’ and then check the ‘Create as subinterface’ check box:

Next we configure the standard network information (gateway, network mask, DNS etc.). Since this network will be bridged to our on-premises network we can use the same details. Optionally, a new static IP pool can also be created so that new VMs provisioned in the cloud service can draw their IP addresses from this pool. This isn’t an issue for migrated VMs, as they carry across whatever IP addresses are already assigned to them. Note that the gateway address is set to the same address as the existing (on-premises) gateway – this means that reconfiguring the default gateway setting in the guest OS isn’t required either:

Now we supply a name for the new Org VDC network and optionally a description. The check box can also be used if the customer has multiple VDCs and wishes to share the new network across them:

Finally the summary screen allows us to check the information provided and go back and make any changes required if not correct. The most important setting is to make sure the network is attached to the edge gateway as a subinterface:

Once finished creating, the Org VDC network will be shown in the list with a type of ‘Routed’ and an interface type of ‘Subinterface’:

Next we access the vCloud Extender interface from within the customer vCenter plugin; selecting the ‘DC Extensions’ tab takes us to the following dialog:

Selecting ‘New Extension’ shows the dialog to create a new L2 extension; the fields are mostly populated for you. The ‘Enable egress’ option allows you to select which gateway(s) will be allowed to forward traffic out of the extended network. In this example I’ve only configured egress on the Source (on-premises) side through the existing gateway:

When you click ‘Start’, the status will go to ‘Connecting’ and a number of activities will take place in the customer vCenter:

Reading from the bottom (oldest) upwards: a new port group is created, an NSX Edge Standalone appliance is deployed and powered on, and the new port group is reconfigured once this has completed (ignore the VM migration task – that just happened to occur during the same time window in my lab). In this case the new NSX standalone edge was named ‘mcloudext-edge-4’ and the port group ‘mcxt-tpg-l2vpn-vlan-Tyrell-VDC15’.

Once deployment has completed (takes a few minutes) the vCloud Extender client interface shows the new DC extension network with a status of ‘Connected’:

In the tenant vCloud Director portal you can also see the status of the tunnel under ‘Statistics’ and ‘L2 VPN’ within the edge gateway interface:

You will now find that any VMs connected to the stretched network (OrgVDC network) in your cloud environment have L2 connectivity with the on-premises network and will continue to function as if they were still located in the customer’s own datacenter.

As I mentioned at the start of this post, I hit a number of issues when configuring this environment, and getting it working took several attempts and a couple of rebuilds of my lab. The main issue was that the initial release of vCloud Director v9.0.0.0 prevents the details required to deploy the standalone NSX edge from being returned by the API. This blocks the deployment of the customer edge entirely and resulted in my VMware support call. The specific issue is referenced in the vCloud Director 9.0.0.1 release notes as ‘Resolves an issue where the vCloud Director API does not return a tunnelID parameter in response to a GET /vdcnetworks request sent against a routed Organization VCD network that has a subinterface enabled.’ As far as I can work out, it is impossible to successfully use L2VPN in CX without upgrading the provider to vCloud Director 9.0.0.1 to resolve this issue.
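If you want to check whether a provider environment is affected before attempting an extension, you can query the routed OrgVDC network via the vCloud API and look for the tunnelId value the release note refers to. A rough sketch follows (the endpoint path, API version header and network id are assumptions to adjust for your environment; the login flow is the standard vCloud /api/sessions Basic-auth pattern):

$vcd  = 'https://vcd.provider.example.com'
$cred = Get-Credential    # e.g. administrator@system

# vCloud API logins use HTTP Basic auth against /api/sessions
$pair  = '{0}:{1}' -f $cred.UserName, $cred.GetNetworkCredential().Password
$basic = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($pair))
$login = Invoke-WebRequest -Uri "$vcd/api/sessions" -Method Post -Headers @{
    Authorization = $basic; Accept = 'application/*+xml;version=29.0' }
$token = $login.Headers['x-vcloud-authorization']

# Fetch the routed OrgVDC network (substitute the real network id) and
# check whether a tunnelId value is present in the response
$resp = Invoke-WebRequest -Uri "$vcd/api/admin/network/<network-id>" -Headers @{
    'x-vcloud-authorization' = $token; Accept = 'application/*+xml;version=29.0' }
$resp.Content -match 'tunnelId'    # $true suggests the 9.0.0.1 fix is present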

The other issue I hit in my lab was that my hosted ‘Tenant Edge’ was NAT’d behind another NSX Edge gateway which was also performing NAT translation (double-NAT), due to the way my lab is built nested inside vCloud Director. Unfortunately this meant the external interface of my hosted ‘Tenant Edge’ was actually an internal network address, so when the customer/on-premises edge tried to establish contact it was using an internal network address, which obviously wasn’t going to work. I solved this by connecting a ‘real’ external internet network to my hosted Tenant Edge.

As always, comments and feedback are appreciated.

Jon.

vCloud Director Extender – Part 2 – Cloud Provider Setup

In the first part of this series of articles I described the new vCloud Director Extender (CX) software released by VMware. In this article I will show the steps required to install and configure the software from a Cloud Provider perspective. Included in this will be the necessary network and firewall configuration required.

vCloud Director Extender is supplied as a single .ova appliance from the VMware download site (login required). The download is located in the ‘Drivers & Tools’ section of the vCloud Director for Service Providers v9.0 page:

The ova file provides the 3 different server components required to create a functional deployment:

CX Cloud Service – The main vCloud Director Extender appliance, used to provide the UI for setup/configuration. This is the appliance initially deployed from the vCloud Director Extender appliance download package.
Cloud Continuity Manager (CCM) – This component (also known as the ‘Replicator Manager’) is the operational manager of the deployment. CCM only runs in provider deployments and manages the replicator (CCE) appliances. CCM appliances are deployed and managed by the CX appliance (no additional download is required).
Cloud Continuity Engine (CCE) – This component (also known as the ‘Replicator’) is the transfer engine that deals with data transfers between the customer and provider environments. CCE runs in both the provider and client environments. CCE appliances are deployed and managed by the CX appliance (no additional download is required).

The downloaded CX appliance is deployed from vCenter; the first selection allows you to specify the VM name and datacenter/folder location to deploy into. For most service providers this would likely be the management cluster for their environment (as opposed to the resource vCenters used for customer workloads):

Next you select which cluster/resource pool to deploy the CX appliance into:

A Review screen is presented which allows you to confirm the ova details:

And of course we have to read/accept the license agreement:

Next we select the datastore location for deployment:

And the internal network which the appliance will be connected to:

Make sure in the ‘Customize template’ screen (below) you change the ‘Deployment Type’ to ‘cx-cloud-service’ and don’t leave the default (cx-connector) selected, as this will install the customer/tenant environment instead of the service provider configuration! The rest of the configuration options on this page are straightforward:

A summary screen is displayed showing the customization options selected; check these carefully, as if they are wrong you’ll probably have to re-deploy from scratch:

Once the appliance is deployed, you will need to manually power it on from the vSphere client (or I did anyway – not sure if this is by design or not). Once it has booted and configured itself it will show the browser link to access in order to begin the environment configuration:

Note that if you open a page to just the hostname/IP address you’ll get an error; you must append the ‘/ui/mgmt’ suffix to the URL. You can now log in with the ‘initial root login’ password you configured during the ova deployment. As you can see from the screen grab below, I pre-configured DNS entries for the 3 provider components and used these wherever possible to avoid IP address confusion:

The main screen opens to the Setup Wizard, the tabs at the top of the screen allow you to easily navigate between sections, but these won’t show much until you complete the wizard:

Clicking on the ‘Setup Wizard’ opens a series of dialogs to provide the initial system configuration. First we have to specify the management vCenter authentication details. Note that the ‘Lookup Service URL’, as well as being optional, also requires the path to the Platform Services Controller (PSC) if you are using external PSCs. The full path is truncated in this grab but should be https://&lt;psc or vcenter with embedded psc address&gt;/lookupservice/sdk:

The wizard includes very useful feedback at each step to show you if the previous actions have been successful or not, just click ‘Next’ through if everything is ok, or go back and fix the issue if not:

Now we need to provide a ‘system’ (administrator) level login to vCloud Director, you don’t need to specify the @system part of the user name here:

Again we get confirmation that we’ve successfully linked to vCloud Director and can continue with ‘Next’:

Next we can add the resource vCenters (where customer workloads actually run). In my lab environment this is the same vCenter that supports the management environment so the details are the same, but in production environments this will almost certainly be different. The setup wizard is intelligent enough to retrieve the names of any vCenter servers being used in Provider VDCs (pVDCs) in vCloud Director so for these you only need to ‘Update’.

When you click update you’ll be asked to provide administrator credentials to the resource vCenter environment. Be careful here as the default ‘Lookup Service URL’ will be set to the vCenter name, even if the vCenter is using an external Platform Services Controller (PSC) as mine was and will need to be manually edited to point to the PSC. This caught me out initially and I couldn’t work out why authentication to the resource vCenter was failing.

Once the resource vCenter(s) are authenticated they’ll show as ‘Registered’ in the wizard:

Next we need to deploy and configure the 2nd appliance – this will be the ‘Replication Manager’ (also called the Cloud Continuity Manager / CCM in the documentation). We need to specify the parameters shown (the dialog scrolls down and also asks for the default gateway address, DNS server address and netmask).

The wizard will now deploy and start up the replication manager appliance on the vCenter specified. If the networking information is incorrect the process will stall at this point as the wizard relies on establishing network connectivity with the replication manager before continuing. A status update is given at the top of the dialog as the appliance is deployed and started up. Once the replication manager appliance is running and seen on the network you’ll see the success message:

Next the replication manager appliance must be ‘activated’ by setting the password for the root user and the ‘Public Endpoint URL’. Make sure you set this to the correct external (public) IP address that your customers will be using to connect to your CX environment. I haven’t found any way yet to alter this setting after deployment if specified incorrectly without deleting the entire CX environment and starting over (the xx’s in this grab are simply to hide the real internet addressing I was using – I’m also pretty sure I eventually used the default port of 8044 for this public URL):

If everything has gone ok, you’ll get the screen below showing that the replication manager deployment has succeeded and you can move on to the replicator configuration:

The deployment details for the Replicator are specified next – the wizard helpfully copies across some of the settings from the Replication Manager deployment, but you still need to specify the (unique) IP and Netmask details:

The Replicator appliance will now be deployed in vCenter in exactly the same way as the Replication Manager was previously. Once it becomes available on the network the wizard will detect this and show the screen below:

Next we have to ‘Activate’ the Replicator appliance by completing the settings shown below to authenticate to the resource vCenter which this Replicator will be responsible for.

If everything worked ok you’ll get a ‘Successfully Activated’ message:

Clicking ‘Next’ takes you to the ‘Complete’ screen and shows that if you have additional Resource vCenters you’ll need to deploy additional Replicator appliances for these (1 per vCenter):

Clicking through the tabs in the management UI should now show that all the required CX components are deployed and registered. The ‘Cloud Resources’ tab shows linked vCloud Director instances and resource vCenters:

The ‘Replication Manager’ tab shows the deployed Replication Manager appliance:

The ‘Replicators’ tab shows the deployed Replicator appliance(s) – 1 per resource vCenter if you have multiples of these.

That completes the appliance installation and initial configuration, next you will need to configure appropriate NAT/firewall rules so that customers on the internet can connect to your new CX service!

Assuming that you wish to use a single external (public) Internet IP address for the entire CX service, the configuration is a little tricky, since traffic will need to be directed to either the CX, Replication Manager or Replicator appliance depending on which port it is attempting to access. The NAT/firewall rules that I worked out from the documentation and found to work are:

| Source Address | Destination | Destination Port/Protocol | Translated Port/Protocol | Translated Internal Address |
|---|---|---|---|---|
| External (Internet) | CX Service Public IP Address | 443/tcp | 443/tcp | CX (vCD Extender) appliance internal address |
| External (Internet) | CX Service Public IP Address | 8044/tcp | 8044/tcp | Replication Manager appliance internal address |
| External (Internet) | CX Service Public IP Address | 44045/tcp | 44045/tcp | Replicator appliance internal address |
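Once the rules are in place, you can sanity-check that all three published ports are reachable with something like the sketch below, run from any internet-connected Windows machine (the public address is a placeholder):

# Placeholder for the CX service public address
$cxPublic = 'cx.provider.example.com'

# Check each published CX port in turn
foreach ($port in 443, 8044, 44045) {
    $r = Test-NetConnection -ComputerName $cxPublic -Port $port -WarningAction SilentlyContinue
    '{0,5}/tcp : {1}' -f $port, $(if ($r.TcpTestSucceeded) { 'open' } else { 'blocked' })
}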

Also note that if you restrict outbound internet traffic from your CX network you will also need to permit the following traffic in an Outbound direction:

| Source | Destination | Source Port/Protocol | Destination Port/Protocol | Description |
|---|---|---|---|---|
| CX Server Network | External (Internet) | Any | 443/tcp | Required for CX to be able to communicate with the customer Replicator management interface |
| CX Server Network | External (Internet) | Any | 44045/tcp | Required for CX to be able to communicate with the customer Replicator data interface |

In the next part of this series of articles I’ll continue with the installation and configuration of the CX components required on the customer / tenant site.

Link back to Part 1 || Link to Part 3

As always, corrections, comments and feedback are always appreciated.

Jon.

vCloud Director 8.20 Edge Gateway Roles

One of the key changes in vCloud Director 8.20 and 8.20.1 from 8.10 is Advanced Networking for Edge Gateways, which allows customer control of several advanced networking features of the Edge Gateways that previously could not be made available to tenant administrators. vCloud Director 8.20 and later also make Roles per-tenant-organisation (rather than globally shared between all tenants). However, in order for tenant administrators to be able to take advantage of the new features, the new Edge Gateway rights need to be added to their organisation. Currently the only way to achieve this is via the vCloud REST API, and the change must be performed separately for each organisation in the vCloud infrastructure.

Here is what the available rights look like prior to the change being made – note there is no ‘Gateway Advanced Services’ section at all:

Since manually modifying the OrgRights XML is time-consuming and a bit prone to error, I set about writing a PowerCLI script to make the change automatically for a given organisation. Note that this change does not alter the defined roles for an organisation, it simply adds the new Edge Gateway permissions as available entities which can then be selectively added to roles.

Once the script has been run for an organisation, editing the properties of a role allows the new Gateway Advanced Services entities to be selected for that role:

A sketch of the API interaction involved is included below; as always I welcome any thoughts/comments/feedback.
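This is an illustration only, not the full script – it assumes an existing Connect-CIServer session and the vCloud Director 8.20 per-organisation rights endpoint, so treat the endpoint usage and the placeholder org id as assumptions to verify:

# Sketch: extend an org's available rights via the vCD 8.20 API (v27.0)
$vcd  = $global:DefaultCIServers[0]
$hdrs = @{ 'x-vcloud-authorization' = $vcd.SessionId
           Accept = 'application/*+xml;version=27.0' }

# GET the organisation's current rights list (org id is a placeholder)
$uri = 'https://{0}/api/admin/org/{1}/rights' -f $vcd.Name, '<org-id>'
[xml]$rights = (Invoke-WebRequest -Uri $uri -Headers $hdrs).Content

# ...append a RightReference element for each 'Gateway Advanced Services'
# right (taken from the global rights list), then PUT the document back:
Invoke-WebRequest -Uri $uri -Method Put -Headers $hdrs `
    -ContentType 'application/vnd.vmware.admin.org.rights+xml' `
    -Body $rights.OuterXml | Out-Null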

Jon

Using Independent Disks in vCloud

Yesterday I wrote about the PowerShell module I’ve written (CIDisk.psm1) to allow manipulation of independent disks in a vCloud environment. This post shows some usage options and details some of the caveats to be aware of when using disks in this manner.

My test environment has two VMs (named imaginatively ‘vm01’ and ‘vm02’), and the VDC they are in has access to four different storage profiles (‘Platinum’, ‘Gold’, ‘Silver’ and ‘Bronze’ storage). The default storage policy for the VDC is ‘Bronze’, but what if we want to create independent disks on other profiles? The -StorageProfileHref parameter to New-CIDisk lets us do this. Once connected to our cloud (Connect-CIServer) we can find the Hrefs of the available storage profiles we can use:
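A sketch of one way to do this, assuming the OrgVdcStorageProfile query type is available to PowerCLI’s Search-Cloud in your version:

# List the storage profiles visible to the current session, with their Hrefs
Search-Cloud -QueryType OrgVdcStorageProfile | Select-Object Name, Href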

Let’s create 2 independent disks, a 10G disk on ‘Platinum’ storage and a 100G disk on ‘Silver’ storage:
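The calls are along these lines (a sketch – the parameter names follow the CIDisk module’s conventions and should be checked against CIDisk.psm1, and the Hrefs are placeholders for values found with the query above):

# 10GB independent disk on 'Platinum' storage
New-CIDisk -DiskName 'disk01-plat' -DiskSize 10GB `
    -StorageProfileHref 'https://vcd.example.com/api/vdcStorageProfile/xxxx'

# 100GB independent disk on 'Silver' storage
New-CIDisk -DiskName 'disk02-silv' -DiskSize 100GB `
    -StorageProfileHref 'https://vcd.example.com/api/vdcStorageProfile/yyyy'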

We can see in the vCloud interface that these disks now exist in our VDC (Note: you may have to completely refresh your vCloud session using your browser’s refresh before the ‘Independent Disks’ tab appears):

There are no context actions for these disks though and we can’t attach/detach them to VMs in the vCloud interface.

Our VM01 virtual machine currently has a 40GB base disk attached and no other storage:


We can mount both our new independent disks to this VM using the following:
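In sketch form (the Mount-CIDisk cmdlet name and its parameters here are assumptions based on the module’s naming – check CIDisk.psm1 for the exact syntax):

# Attach both independent disks to VM01
$vm = Get-CIVM -Name 'vm01'
Mount-CIDisk -VM $vm -DiskHref (Get-CIDisk -DiskName 'disk01-plat').Href
Mount-CIDisk -VM $vm -DiskHref (Get-CIDisk -DiskName 'disk02-silv').Href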

Looking at the VM01 Hardware tab following this shows both disks mounted:

Note again that no manipulation options are available in the vCloud UI, but at least it’s obvious that independent disks have been attached to VM01.

After rescanning storage in the guest, we can see the new storage devices on VM01:

And once these are brought online, initialized, storage volumes created and drive letters assigned, we can use the disks inside the guest (the volume names don’t get automatically mapped – I’ve just named the volumes the same as the independent disk objects for consistency):

At this point everything appears to be working fine, but there can be a catch here – if you restart the virtual machine you may find that the server attempts to boot from one of the newly mounted independent disks. Luckily vCloud Director 8.10 allows us to get into the VM BIOS and change the boot order settings:

Once restarted into BIOS we can select the correct boot order:

With the server restarted, we can create some test content in ‘disk01-plat’ to prove that the data moves when we reattach this disk to VM02:

And to dismount ‘disk01-plat’ from VM01 and mount it to VM02 we can:
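Again as a sketch, with the same caveat on the cmdlet and parameter names:

# Move disk01-plat from VM01 to VM02
$disk = Get-CIDisk -DiskName 'disk01-plat'
Dismount-CIDisk -VM (Get-CIVM -Name 'vm01') -DiskHref $disk.Href
Mount-CIDisk -VM (Get-CIVM -Name 'vm02') -DiskHref $disk.Href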

Looking at the available storage in VM02 after a disk rescan shows our disk has transferred across:

Finally, checking the contents of the ‘E:\’ drive shows our test folder & file have made it across:

And Get-CIDisk can be used to verify the disk attachments after moving disk01 to VM02:

Hopefully this gives a better idea of how CIDisk can be used to manage independent disks in a vCloud environment. It would be nice if VMware included the management functions in the UI, but for now at least you can use PowerShell to easily achieve the same results without having to write against the API directly.

As always, any comments / feedback greatly appreciated.

Jon

Uploading / running utilities directly on ESXi hosts

As part of planning our upgrade of VMware NSX-V from v6.2.2 to v6.2.4 we became aware of VMware issue KB2146171 (link), which can cause VMs to lose network connectivity when vMotioned to other hosts following the upgrade. Obviously wishing to avoid this for our own (and customer) VMs, we raised a support case to obtain the VMware script to determine how many of our VMs (if any) were going to be affected. Unfortunately the VMware script we were supplied was configured to run *after* the upgrade had already been completed. Fortunately the VMware utility supplied (vsipioctl – a binary to be run directly on ESXi hosts) could tell us which VMs were affected prior to upgrading.

Since we have a reasonably large number of hosts and hosted VMs I set about writing some PowerShell to perform the following actions:

  • Connect to vCenter and enumerate all ESXi hosts.
  • Enable SSH access to each host in turn.
  • Upload the VMware vsipioctl utility to the host /tmp/ folder and make it executable.
  • Run vsipioctl and parse the return information.
  • Build a table / CSV of all VM network interfaces with the results of the vsipioctl utility.
  • Disable SSH on the hosts once done and move on to the next host.

At first I tried using PuTTY plink.exe and pscp.exe from PowerShell to perform the SSH and SCP file copy to the hosts, but had serious problems passing the right password & command line options due to the way PowerShell escapes quoted strings. In the end I found it easier to use the PoshSSH PowerShell library (https://github.com/darkoperator/Posh-SSH) for these functions rather than shelling out to PuTTY executables.

Note that we usually leave SSH access disabled on our ESXi hosts, so the script shown enables this and then re-disables SSH after running – adjust if necessary when using in your own environments.

If you need to run this check for your own environment you will still need to open a VMware support call to obtain the vsipioctl binary – as far as I am aware it isn’t available any other way.

The core of the approach is sketched below – hopefully this will be useful for some of you; just make sure you test properly before running against a production environment. Luckily in our case the script proved that none of our VMs are impacted by this issue and we can safely proceed with our NSX-V upgrade.
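This is a simplified sketch of the per-host loop rather than the full script – it assumes PowerCLI plus the Posh-SSH module, and the vsipioctl invocation shown is representative only (use whatever syntax VMware support provides):

# Assumes an existing Connect-VIServer session and Install-Module Posh-SSH
$cred = Get-Credential root    # ESXi root credentials

foreach ($esx in Get-VMHost) {
    # Enable SSH on this host for the duration of the check
    $sshSvc = Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq 'TSM-SSH' }
    Start-VMHostService -HostService $sshSvc -Confirm:$false | Out-Null

    # Upload vsipioctl to /tmp and make it executable
    Set-SCPFile -ComputerName $esx.Name -Credential $cred -AcceptKey `
        -LocalFile 'C:\temp\vsipioctl' -RemotePath '/tmp'
    $sess = New-SSHSession -ComputerName $esx.Name -Credential $cred -AcceptKey
    Invoke-SSHCommand -SSHSession $sess -Command 'chmod +x /tmp/vsipioctl' | Out-Null

    # Run the utility and keep the output for parsing into the results table
    $out = Invoke-SSHCommand -SSHSession $sess -Command '/tmp/vsipioctl getfilters'
    $out.Output | Set-Content -Path ('C:\temp\{0}-vsipioctl.txt' -f $esx.Name)

    # Tidy up: close the SSH session and re-disable SSH on the host
    Remove-SSHSession -SSHSession $sess | Out-Null
    Stop-VMHostService -HostService $sshSvc -Confirm:$false | Out-Null
}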

Jon.

Installing Microsoft Azure Stack TP2 on VMware ESXi

This week Microsoft released Technical Preview 2 (TP2) of their ‘Cloud in a box’ Azure Stack product. This is scheduled for release in mid-2017 to allow enterprises and service providers to run Azure-consistent services from their own datacenters.

TP2 has a number of additional features over TP1 released earlier this year, but doesn’t support installation as a virtual machine. The hardware requirements are detailed here but basically you’re going to need a reasonably good spec server with enough local hard disks to be able to install it.

As I had good success running the previous TP1 release of Azure Stack in a virtual machine I thought I’d see if the same could be done with TP2. As with TP1, installation is only supported for a single machine node (clustered multi-node deployments are likely to come with TP3).

Of course installing TP2 as a VM is completely unsupported by both Microsoft and VMware so please don’t bug them with any issues – since TP2 is definitely not for production use this shouldn’t be a huge concern.

After several failed attempts I finally worked out a method to allow installation of TP2 as a virtual machine using VMware ESXi 6.0 Update 2 as the hypervisor platform. The key is in building the host virtual machine correctly and in modifying a couple of places in the installation PowerShell scripts to bypass the checks for physical hardware.

To start, create a new virtual hardware v11 Windows VM with appropriate sizing (I used a 200GB system disk, 128GB of RAM and 12 CPU cores configured as a single socket / 12 cores arrangement).

I then made the following changes (a PowerCLI sketch of the key tweaks follows this list):

  • Add a new SCSI host bus adapter and set the ‘bus sharing’ for this adapter to ‘Physical’ – this is required to allow the Storage Spaces Direct (S2D) configuration in the Azure Stack installer to correctly configure clustered storage.
  • Make sure that ‘Expose hardware assisted virtualization to the guest OS’ option is enabled to allow the VM to run the Hyper-V role and nested VMs.
  • Add 4 new virtual hard disks of at least 150GB size each (I used 200GB for each disk again) and configure these as ‘Thick provisioned eager zeroed’ and make sure they are attached to the new (physical bus sharing) SCSI adapter.
  • Use a single VMXNET3 network adapter connected to a network that has a DHCP server available on it.
  • Change portgroup security for the network to which the VM is attached to allow ‘Promiscuous Mode’, ‘MAC address changes’ and ‘Forged Transmits’.
  • Set the VM to boot to BIOS on next power up and when it boots make sure to set the BIOS date/time to match your current timezone date/time.
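For anyone scripting the build, the less obvious tweaks can be made from PowerCLI along these lines (a sketch only, assuming PowerCLI 6.x against ESXi 6.0 Update 2 and run while the VM is powered off; the VM name is a placeholder):

# Placeholder VM name
$vm = Get-VM -Name 'AzureStackTP2'

# Expose hardware-assisted virtualization to the guest (for nested Hyper-V)
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)

# Add 4 x 200GB eager-zeroed disks and move them onto a new SCSI
# controller with physical bus sharing (needed for the S2D storage)
$disks = 1..4 | ForEach-Object {
    New-HardDisk -VM $vm -CapacityGB 200 -StorageFormat EagerZeroedThick
}
New-ScsiController -HardDisk $disks -Type VirtualLsiLogicSAS -BusSharingMode Physical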

Next power on and install a base operating system on the VM (I used Server 2012 R2, but it really doesn’t matter as this environment is only used to bootstrap the installation process).

Once the server is running, download and unpack Azure Stack TP2 and move the extracted ‘CloudBuilder.vhdx’ file to the root of the C:\ drive. Follow the Microsoft instructions to download and run the ‘PrepareBootFromVHD.ps1’ script, which will reconfigure the VM to boot from the CloudBuilder.vhdx file and restart the VM.

Note: Depending on disk speed it can take a considerable time to extract CloudBuilder.vhdx from the TP2 archive – you might want to keep a copy of it elsewhere on your network (or on the VM disk if you have space) in case you need to restart the installation from scratch.

Once the VM is up and running from the TP2 CloudBuilder vhdx image, make the following changes:

  • Install VMware Tools (required to add the VMXNET3 network driver) and restart when prompted – see the note below; the E1000 network adapter may be a better choice.
  • (Optional) Rename the computer and restart when prompted.
  • (Optional) Change the VM’s IP address to a static IPv4 address (rather than just using DHCP) so you can easily locate it on the network later – note that DHCP is still a requirement for the other VMs unless you use the Microsoft-documented installer switches to allow the use of static addresses.
  • Make sure that the date/time and timezone are set correctly and match the VM BIOS setting (Can’t stress this enough, I had at least 3 failed installation attempts due to date/time problems).
  • Make the following changes to the C:\CloudDeployment\Roles\PhysicalMachines\Tests\BareMetal.Tests.ps1 file:

Line 376:
Change:
$physicalMachine.IsVirtualMachine | Should Be $false
To:
$false | Should Be $false

Line 453:
Change:
($physicalMachine.Processors.NumberOfEnabledCores | Measure-Object -Sum).Sum | Should Not BeLessThan $minimumNumberOfCoresPerMachine
To:
12 | Should Not BeLessThan $minimumNumberOfCoresPerMachine

Then save the file. (The second change should not be necessary if you’ve built the VM with at least 12 cores, but the installation script appears to detect the number of physical cores as ‘0’ in a VM so this is required).

You should now be able to run the CloudDeployment\Configuration\InstallAzureStackPOC.ps1 script and everything should work…

The installation process will take a considerable time, but hopefully if you’ve configured everything correctly you’ll have a working Azure Stack TP2 installation at the end with all of the required infrastructure servers running as Hyper-V guests within the VM.

NOTE: I hit an issue with installation failing at step 60.61.93 and thought this was related to installing in a VM, but it appears this is a more general issue with TP2 installation – see this MSDN thread for possible solutions if you encounter this error. If you encounter any issues with the installation I also recommend following the troubleshooting advice here.

Best of luck trying this out for yourselves!

Update 7th Oct 2016: If you’re having issues with guest (nested Hyper-V) VMs crashing, try using the E1000 network adapter for the host instead of VMXNET3, I’ve been doing some testing with this and E1000 may be a better option and prevent this occurring.

Jon

Live import VMs to vCloud Director

Tom Fojta wrote a great blog post about the new capability in vCloud Director 8.10 to import running VMs into vCloud Director. This is a huge asset in migration scenarios where customers can’t afford outages when being migrated into the vCD environment. Unfortunately the API syntax to actually initiate the import is a little convoluted and not the easiest process to manage.

I set about writing a PowerShell script to significantly simplify the process of initiating a live-import operation. The script itself is available from github at the following link: https://github.com/jondwaite/vcdliveimport.

The liveimport.ps1 script contained in this repository does the following:

  • Prompts for a credential to be used to connect to both vCloud Director (System context) and vCenter – if you have different usernames/passwords for each you’ll need to adjust this.
  • Enumerates the available vCenter instances registered as Provider Virtual Datacenters (PVDCs) in vCloud Director and allows one to be selected as the source vCenter for the migration.
  • Lists the available VMs in the selected vCenter instance, filters this list based on selectable criteria (e.g. don’t offer to import ‘Guest Introspection’ VMs) and allows the source VM to be selected.
  • Lists available destination Virtual Datacenters (VDCs) in the vCloud Director environment and allows the destination VDC to be selected.
  • Displays the appropriate POST request information to be submitted to vCloud Director to initiate the live-import of this VM (an example of this request is sketched after this list).
  • Optionally – Submits the REST API request directly to the vCloud Director environment to actually initiate the import process.
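The request the script builds targets the vCenter extension’s importVmAsVApp action and looks roughly like the sketch below ($token is a vCloud session token obtained beforehand; the ids, the vApp name and the exact element layout are assumptions to verify against what liveimport.ps1 actually emits):

# Sketch only - verify the body against what liveimport.ps1 generates
$body = @'
<ImportVmAsVAppParams xmlns="http://www.vmware.com/vcloud/extension/v1.5"
                      name="imported-vapp" sourceMove="true">
  <VmMoRef>vm-1234</VmMoRef>
  <Vdc href="https://vcd.example.com/api/vdc/<vdc-id>" />
</ImportVmAsVAppParams>
'@

Invoke-RestMethod -Method Post `
  -Uri 'https://vcd.example.com/api/admin/extension/vimServer/<vim-server-id>/importVmAsVApp' `
  -Headers @{ 'x-vcloud-authorization' = $token; Accept = 'application/*+xml;version=20.0' } `
  -ContentType 'application/vnd.vmware.admin.importVmAsVAppParams+xml' `
  -Body $body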

An example transcript of this process is shown below. Hopefully this helps someone else out and makes it easier for you to live-import running VMs into vCloud Director.

Jon.

Example Session Transcript:

Create an empty vApp in vCloud Director

Sometimes you just need to create a new vApp with no contents at all – maybe for testing, or maybe you want to populate it with VMs built ‘from scratch’ rather than cloned from templates. This is easy to do in the vCloud Director web UI – you just skip the addition of any VM templates or new VMs and can easily create empty vApps – but how about programmatically?

The VMware documentation is remarkably slim in this regard – all the documented methods I could find for vApp creation require either cloning from existing vApp templates, from existing VMs or from uploaded OVF files.

So how do we create a brand-new empty vApp? It turns out to be pretty simple – once you discover that the ‘composeVApp’ method on an Organization VDC supports the creation of empty vApps.

If using the REST API we can simply create an XML body document of type ‘composeVAppParams’ and submit it against the OrgVDC’s /action/composeVApp link.

An example XML document body could be:

<?xml version="1.0" encoding="UTF-8"?>
<ComposeVAppParams
name="MyEmptyVapp"
xmlns="http://www.vmware.com/vcloud/v1.5"
xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
<Description>My vApp Description</Description>
<AllEULAsAccepted>true</AllEULAsAccepted>
</ComposeVAppParams>

We then ‘POST’ this document body to the link ‘https://&lt;Cloud Server DNS name or IP address&gt;/api/vdc/&lt;ID of our VDC&gt;/action/composeVApp’, not forgetting to add a header of ‘Content-Type: application/vnd.vmware.vcloud.composeVAppParams+xml’ to the POST request.

If we want to accomplish the same thing using PowerShell / PowerCLI it’s easy too (once connected to our cloud using Connect-CIServer):

# Build the compose parameters for an empty vApp
$vapp = New-Object VMware.VimAutomation.Cloud.Views.ComposeVAppParams
$vapp.Name = "MyEmptyVapp"
$vapp.Description = "My vApp Description"
# Submit the composeVApp action against the target OrgVDC
$myorgvdc = Get-OrgVdc -Name 'My OrgVDC Name'
$myorgvdc.ExtensionData.ComposeVApp($vapp)

No idea if this is ‘officially’ supported or not – so use at your own risk and be aware that the implementation could change in a future release and break this (although I’d be surprised as this is almost certainly the action that the vCD web UI is submitting ‘behind the scenes’ when you manually create an empty vApp).

Jon.

Client Integration Plugin madness

One of the frustrations dealing with the vSphere Web Client has always been the requirement for a browser plugin to import/export OVF templates. In vSphere 6 and vCloud Director 8 this has reached a whole new level of frustration. The issue is that both vSphere 6 and vCloud Director 8 offer to download and install a package called ‘VMware-ClientIntegrationPlugin-6.0.0.exe’. In an ideal world this plugin once correctly installed would work for the OVF import/export/upload/download functionality in both products… right?

Meanwhile in this world, although the package names are identical, the functionality is not – if you’ve installed the vSphere version then you can’t upload/download OVFs or ISOs in vCloud Director and if you’ve got the vCD variant installed then vSphere OVF import/export doesn’t work. Uninstalling and reinstalling the ‘correct’ version fixes the problem (until you need the ‘other’ one again), but can be easier said than done – particularly in the case of shared desktop server administration environments where other users having a browser session open will prevent reinstalling browser plugins.

So how do you tell the two packages apart? In the current releases of vSphere 6.0 Update 1 and vCloud Director v8.0 the packages are significantly different sizes:

| | vCloud Director 8 | vSphere 6.0 Update 1 |
|---|---|---|
| Package Filename | VMware-ClientIntegrationPlugin-6.0.0.exe | VMware-ClientIntegrationPlugin-6.0.0.exe |
| File Version | 11.0.0.2826 | 10.0.0.3637 |
| Product Version | 6.0.0.2826 | 6.0.0.3637 |
| File Size | 48.8 MB | 94.9 MB |

So the easiest way to tell them apart is size: the smaller 48.8 MB file is the vCD plugin and the larger 94.9 MB one is the vSphere plugin.

If (as I do) you often need to use both versions then maybe consider setting up separate management desktops (or virtual apps) for each so you can easily reach one that’s going to work for you.

Hopefully VMware will fix this in a future release and provide a single integration plugin that works across both products.

Update – 18th March 2016
VMware have just released vCloud Director for Service Providers v8.0.1 (http://pubs.vmware.com/Release_Notes/en/vcd/801/rel_notes_vcloud_director_801.html), which appears to have reverted the vCD Client Integration plugin to version 5.6.0 – there is also mention in the release notes of the possible clashes between vSphere and vCD client integration toolsets, so it appears that VMware are at least aware of the issue.