Independent Disks in vCloud via PowerCLI

Another day, another customer requirement which I figured ‘this will be easy’ and turned out not to be quite so easy…

The customer in question is a tenant on our cloud platform and has built a VM to be their offline root Certificate Authority (CA). In line with their security practice, this VM has no network connectivity and is usually powered off in their environment unless specifically required to issue or renew certificates.

They asked if there was an easy way to transfer certificate files issued by this VM to other servers in their infrastructure. In their (old) vSphere environment they would simply attach a new temporary virtual disk to the VM, copy the certificate files over and then attach the disk to the destination VM. Surely there had to be some similar functionality in vCloud Director?

Well, there’s a bit of good and bad news on that…

By default, disks in vCloud Director are permanently assigned to a VM and can't be moved to a different VM. (That's the bad news). The good news is that vCD supports 'independent disks' which can be moved between VMs. The bad news is that this is an API-only operation (nothing in the web UI allows creation or manipulation of independent disks, although you can see them if they exist). The worst news is that VMware PowerCLI, even in the latest 6.5 R1 version, doesn't have any cmdlets to manipulate independent disks attached to vCloud VMs either.

So while I could have hacked something together to run directly against the vCloud Director REST API for this customer, I figured it would be better to have some reusable PowerShell cmdlets for this. So I set about writing some and I’m pleased to announce the first release of ‘CIDisk’, a collection of PowerShell cmdlets to manipulate independent disks in vCloud Director environments.

The module code, documentation and examples are now available on my github at https://github.com/jondwaite/cidisk
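To give a flavour of the workflow for the scenario above, something along these lines should do the job – treat this as an illustrative sketch rather than gospel, as the cmdlet names and parameters shown are my shorthand; check the README in the repo for the exact syntax:

# Illustrative sketch only - see the CIDisk README for the actual cmdlet names/parameters
Import-Module CIDisk
Connect-CIServer -Server cloud.example.com -Org MyOrg      # hypothetical cloud endpoint and Org

# Create a small independent disk in the Org VDC and attach it to the offline CA VM
New-CIDisk -DiskName 'certtransfer' -DiskSize 1GB -VDC (Get-OrgVdc -Name 'MyVDC')
$disk = Get-CIDisk -Name 'certtransfer'
Mount-CIDisk -VM (Get-CIVM -Name 'RootCA') -Disk $disk

# ...copy the certificate files inside the guest OS, then move the disk to the destination VM
UnMount-CIDisk -VM (Get-CIVM -Name 'RootCA') -Disk $disk
Mount-CIDisk -VM (Get-CIVM -Name 'DestServer') -Disk $disk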

I’ll do a followup post detailing some more advanced options and scenarios in the next day or two.

Edit – Followup post is now available here.

As always I appreciate any/all feedback and hope someone else finds these useful.

Jon

Detailed VM Storage Information in vCloud Director

I recently had a request from one of our customers who wanted an easy / scriptable method to determine the storage allocations on their hosted VMs in our vCloud platform, preferably from PowerShell. 'That should be easy', I thought, and set about my usual Google-based research. I initially found this post from Alan Renouf which I forwarded back to the client.

Unfortunately, while this achieved part of the answer, this particular customer had a number of VMs which had hard disks attached using multiple/different storage profiles and they wanted to get the details of these too. So I set about writing some code to see if I could get full storage information about the VM and all of its disks. I ended up having to access the vCloud REST API directly for this information but it wasn’t too bad.

First, I created a 'worst-case' test VM with 3 attached hard disks, created one each on our 'Gold', 'Silver' and 'Bronze' storage policies:

[Screenshot: test02 VM hardware properties]

(Just to make sure everything would work, I also created the 3 disks with 3 different storage bus types.) I also set the VM storage policy to something different:

[Screenshot: test02 VM general properties]

My first step was a function to access the vCloud REST API. I found this post from Matt Vogt's blog which had some code for this that I shamelessly borrowed (hey, why reinvent the wheel unless you need to):
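A minimal sketch of such a helper, reusing the session key from an existing Connect-CIServer connection (adjust the Accept header API version to match your vCD release):

# Minimal 'GET' helper for the vCloud REST API, reusing the PowerCLI session key
function Get-vCloudRequest {
    param([Parameter(Mandatory=$true)][string]$Href)
    $headers = @{
        'x-vcloud-authorization' = $global:DefaultCIServers[0].SessionId
        'Accept'                 = 'application/*+xml;version=5.6'
    }
    # Invoke-RestMethod parses the XML response into an XmlDocument for us
    Invoke-RestMethod -Uri $Href -Method Get -Headers $headers
}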

The return from the Get-CIVM cmdlet includes a reference to the VM object within the vCloud API:
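For example (the href shown in the comment is just an illustrative placeholder):

$vm = Get-CIVM -Name 'test02'
$vm.Href
# e.g. https://my.cloud.example/api/vApp/vm-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx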

Using this we can obtain our disk information:

Filtering the returned RasdItemsList for a ResourceType of 17 (Hard Disk), we can get a list of attached hard disks:
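Roughly like this, using the helper above (the disks section lives at /virtualHardwareSection/disks under the VM's href):

# Retrieve the VM's disk RASD items from the API
$rasd = Get-vCloudRequest -Href "$($vm.Href)/virtualHardwareSection/disks"

# ResourceType 17 = Hard Disk, which filters out the SCSI/IDE controller entries
$hardDisks = $rasd.RasdItemsList.Item | Where-Object { $_.ResourceType -eq 17 }
$hardDisks | Select-Object ElementName, InstanceID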

So this gets us to a point where we have all of the hard disk information, but how do we find the storage policy for each disk? It turns out that each disk has an attribute ‘HostResource’ which provides the URI to the storage policy from which the disk has been allocated:

So how can we convert the storageProfileHref values into meaningful (human readable) storage profile names? We can use another API call to establish the name of each vdcStorageProfile:
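Putting those two steps together looks something like this (attribute and element names as I've seen them in the API responses – verify against your own environment):

foreach ($disk in $hardDisks) {
    # The storage profile href is exposed as an attribute on the disk's HostResource element
    $storageProfileHref = $disk.HostResource.storageProfileHref

    # GET the vdcStorageProfile itself to read its human-readable name
    $spXml = Get-vCloudRequest -Href $storageProfileHref
    "{0} uses storage profile '{1}'" -f $disk.ElementName, $spXml.VdcStorageProfile.name
}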

Querying the API for every vdcStorageProfile for every disk is going to generate a lot of calls for any significant number of VMs, so in the code below I’ve added a hash stored in a global variable which caches these results so that any storageProfileHref which has been seen before doesn’t need to generate an additional API call.
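The caching approach is roughly this (the hashtable name is arbitrary):

# Hash of storageProfileHref -> name lookups so each profile is only queried once
if (-not $global:storageProfileCache) { $global:storageProfileCache = @{} }

function Get-StorageProfileName {
    param([Parameter(Mandatory=$true)][string]$Href)
    if (-not $global:storageProfileCache.ContainsKey($Href)) {
        $spXml = Get-vCloudRequest -Href $Href
        $global:storageProfileCache[$Href] = $spXml.VdcStorageProfile.name
    }
    $global:storageProfileCache[$Href]
}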

Putting it all together

So we now have a way of determining all of the information we need. Using PowerShell custom objects allows us to write a function which returns all of our VM and storage details in an easily consumable form for further processing.

The script included at the bottom of this article produces the following output for my test environment containing 2 VMs of which the ‘pxetest01’ VM has no disks attached:

It can also return just the disk information as another custom object:

And we can check the number of disks attached to any VM:

Finally because the output is a PowerShell object, we can easily turn this custom object into JSON for use in further processing:
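By way of illustration – I've used 'Get-CIVMStorageDetail' here purely as a stand-in for whatever the function in the full script is named:

# 'Get-CIVMStorageDetail' is a stand-in name for the function defined in the full script below
$vmStorage = Get-CIVMStorageDetail -VM (Get-CIVM -Name 'test02')

$vmStorage.Disks                            # just the per-disk custom objects
($vmStorage.Disks | Measure-Object).Count   # number of disks attached to the VM
$vmStorage | ConvertTo-Json -Depth 4        # JSON for further processing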

Hopefully you’ve found this post useful, let me know in the comments if you have any issues or would like to see more examples like this.

Jon.

Full script to find storage policy information for vCloud VMs using the vCloud REST API:


Uploading / running utilities directly on ESXi hosts

As part of planning our upgrade of VMware NSX-V from v6.2.2 to v6.2.4 we became aware of the VMware issue KB2146171 (link) which can cause VMs to lose network connectivity when vMotioned to other hosts following the upgrade. Obviously wishing to avoid this for our own (and customer) VMs, we raised a support case to obtain the VMware script to determine how many of our VMs (if any) were going to be affected. Unfortunately the VMware script we were supplied was designed to be run *after* the upgrade had already been completed. Fortunately the VMware utility supplied (vsipioctl – a binary to be run directly on ESXi hosts) could tell us which VMs would be affected prior to upgrading.

Since we have a reasonably large number of hosts and hosted VMs I set about writing some PowerShell to perform the following actions:

  • Connect to vCenter and enumerate all ESXi hosts.
  • Enable SSH access to each host in turn.
  • Upload the VMware vsipioctl utility to the host /tmp/ folder and make it executable.
  • Run vsipioctl and parse the return information.
  • Build a table / CSV of all VM network interfaces with the results of the vsipioctl utility.
  • Disable SSH on the hosts once done and move on to the next host.

At first I tried using PuTTY plink.exe and pscp.exe from PowerShell to perform the SSH and SCP file copy to the hosts, but had serious problems passing the right password & command line options due to the way PowerShell escapes quoted strings. In the end I found it easier to use the PoshSSH PowerShell library (https://github.com/darkoperator/Posh-SSH) for these functions rather than shelling out to PuTTY executables.

Note that we usually leave SSH access disabled on our ESXi hosts, so the script shown enables this and then re-disables SSH after running – adjust if necessary when using in your own environments.
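To give an idea of how the pieces fit together, here's a minimal sketch of the per-host loop using Posh-SSH and the standard PowerCLI host service cmdlets. The vsipioctl arguments shown are placeholders – use whatever VMware support provide for your case – and note that newer Posh-SSH releases rename Set-SCPFile to Set-SCPItem:

Import-Module Posh-SSH
$rootCred = Get-Credential -Message 'ESXi root credential'

foreach ($vmhost in Get-VMHost) {
    # Temporarily enable SSH (the TSM-SSH service) on this host
    $sshSvc = Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq 'TSM-SSH' }
    Start-VMHostService -HostService $sshSvc -Confirm:$false | Out-Null

    # Open an SSH session, upload vsipioctl to /tmp and make it executable
    $session = New-SSHSession -ComputerName $vmhost.Name -Credential $rootCred -AcceptKey
    Set-SCPFile -ComputerName $vmhost.Name -Credential $rootCred -LocalFile '.\vsipioctl' -RemotePath '/tmp'
    Invoke-SSHCommand -SSHSession $session -Command 'chmod +x /tmp/vsipioctl' | Out-Null

    # Run the check - substitute the exact arguments VMware support give you for KB2146171
    $result = Invoke-SSHCommand -SSHSession $session -Command '/tmp/vsipioctl getfilters'
    $result.Output        # parse this and add it to your results table / CSV

    # Tidy up and disable SSH again
    Remove-SSHSession -SSHSession $session | Out-Null
    Stop-VMHostService -HostService $sshSvc -Confirm:$false | Out-Null
}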

If you need to run this check for your own environment you will still need to open a VMware support call to obtain the vsipioctl binary – as far as I am aware it isn't available any other way.

The script is shown below – hopefully this will be useful for some of you; just make sure you test properly before running against a production environment. Luckily in our case the script proved that none of our VMs are impacted by this issue and we can safely proceed with our NSX-V upgrade.

Jon.


Installing Microsoft Azure Stack TP2 on VMware ESXi

This week Microsoft released Technical Preview 2 (TP2) of their 'Cloud in a box' Azure Stack product. This is scheduled for release in mid-2017 to allow enterprises and service providers to run Azure-consistent services from their own datacenters.

TP2 has a number of additional features over TP1 released earlier this year, but doesn’t support installation as a virtual machine. The hardware requirements are detailed here but basically you’re going to need a reasonably good spec server with enough local hard disks to be able to install it.

As I had good success running the previous TP1 release of Azure Stack in a virtual machine I thought I’d see if the same could be done with TP2. As with TP1, installation is only supported for a single machine node (clustered multi-node deployments are likely to come with TP3).

Of course installing TP2 as a VM is completely unsupported by both Microsoft and VMware so please don’t bug them with any issues – since TP2 is definitely not for production use this shouldn’t be a huge concern.

After several failed attempts I finally worked out a method to allow installation of TP2 as a virtual machine using VMware ESXi 6.0 Update 2 as the hypervisor platform. The key is in building the host virtual machine correctly and in modifying a couple of places in the installation PowerShell scripts to bypass the checks for physical hardware.

To start, create a new virtual hardware v11 Windows VM with appropriate sizing (I used a 200GB system disk, 128GB of RAM and 12 CPU cores configured as a single socket / 12 cores arrangement).

I then made the following changes (a rough PowerCLI sketch covering the storage items follows the list):

  • Add a new SCSI host bus adapter and set the ‘bus sharing’ for this adapter to ‘Physical’ – this is required to allow the Storage Spaces Direct (S2D) configuration in the Azure Stack installer to correctly configure clustered storage.
  • Make sure that ‘Expose hardware assisted virtualization to the guest OS’ option is enabled to allow the VM to run the Hyper-V role and nested VMs.
  • Add 4 new virtual hard disks of at least 150GB size each (I used 200GB for each disk again) and configure these as ‘Thick provisioned eager zeroed’ and make sure they are attached to the new (physical bus sharing) SCSI adapter.
  • Use a single VMXNET3 network adapter connected to a network that has a DHCP server available on it.
  • Change portgroup security for the network to which the VM is attached to allow ‘Promiscuous Mode’, ‘MAC address changes’ and ‘Forged Transmits’.
  • Set the VM to boot to BIOS on next power up and when it boots make sure to set the BIOS date/time to match your current timezone date/time.
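Here's that rough PowerCLI sketch covering the storage items and the nested virtualisation flag, assuming the VM has already been created – the VM name and controller type are illustrative, and the portgroup security and BIOS changes above still need to be made separately:

# Sketch only - assumes an existing, powered-off VM called 'AzureStackTP2'
$vm = Get-VM -Name 'AzureStackTP2'

# First data disk, eager-zeroed thick, plus a new SCSI controller with physical bus sharing for it
$disk1 = New-HardDisk -VM $vm -CapacityGB 200 -StorageFormat EagerZeroedThick
New-ScsiController -HardDisk $disk1 -Type VirtualLsiLogicSAS -BusSharingMode Physical | Out-Null

# Attach the remaining three disks to the same controller (assumes the new controller is listed last)
$controller = Get-ScsiController -VM $vm | Select-Object -Last 1
1..3 | ForEach-Object {
    New-HardDisk -VM $vm -CapacityGB 200 -StorageFormat EagerZeroedThick -Controller $controller | Out-Null
}

# Expose hardware-assisted virtualisation to the guest (required for nested Hyper-V)
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)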

Next power on and install a base operating system on the VM (I used Server 2012 R2, but it really doesn’t matter as this environment is only used to bootstrap the installation process).

Once the server is running, download and unpack Azure Stack TP2 and move the extracted 'CloudBuilder.vhdx' file to the root of the C:\ drive. Follow the Microsoft instructions to download and run the 'PrepareBootFromVHD.ps1' script, which will reconfigure the VM to boot from the CloudBuilder.vhdx file and restart the VM.

Note: Depending on disk speed it can take a considerable time to extract CloudBuilder.vhdx from the TP2 archive – you might want to keep a copy of it elsewhere on your network (or on the VM disk if you have space) in case you need to restart the installation from scratch.

Once the VM is up and running from the TP2 CloudBuilder vhdx image, make the following changes:

  • Install VMware Tools (required to add the VMXNET3 network driver) and restart when prompted – see the note below; the E1000 network adapter may be a better choice.
  • (Optional) Rename the computer and restart when prompted.
  • (Optional) Change the VM’s IP address to a static IPv4 address (rather than just using DHCP) so you can easily locate it on the network later – note that DHCP is still a requirement for the other VMs unless you use the Microsoft-documented installer switches to allow use of static addresses.
  • Make sure that the date/time and timezone are set correctly and match the VM BIOS setting (Can’t stress this enough, I had at least 3 failed installation attempts due to date/time problems).
  • Make the following changes to the C:\CloudDeployment\Roles\PhysicalMachines\Tests\BareMetal.Tests.ps1 file:

Line 376:
Change:
$physicalMachine.IsVirtualMachine | Should Be $false
To:
$false | Should Be $false

Line 453:
Change:
($physicalMachine.Processors.NumberOfEnabledCores | Measure-Object -Sum).Sum | Should Not BeLessThan $minimumNumberOfCoresPerMachine
To:
12 | Should Not BeLessThan $minimumNumberOfCoresPerMachine

Then save the file. (The second change should not be necessary if you’ve built the VM with at least 12 cores, but the installation script appears to detect the number of physical cores as ‘0’ in a VM so this is required).

You should now be able to run the CloudDeployment\Configuration\InstallAzureStackPOC.ps1 script and everything should work…..

The installation process will take a considerable time, but hopefully if you’ve configured everything correctly you’ll have a working Azure Stack TP2 installation at the end with all of the required infrastructure servers running as Hyper-V guests within the VM.

NOTE: I hit an issue with installation failing at step 60.61.93 and thought this was related to installing in a VM, but it appears this is a more general issue with TP2 installation – see this MSDN thread for possible solutions if you encounter this error. If you encounter any issues with the installation I also recommend following the troubleshooting advice here.

Best of luck trying this out for yourselves!

Update 7th Oct 2016: If you’re having issues with guest (nested Hyper-V) VMs crashing, try using the E1000 network adapter for the host instead of VMXNET3, I’ve been doing some testing with this and E1000 may be a better option and prevent this occurring.

Jon

Live import VMs to vCloud Director

Tom Fojta wrote a great blog post about the new capability in vCloud Director 8.10 to import running VMs into vCloud Director. This is a huge asset in migration scenarios where customers can’t afford outages when being migrated into the vCD environment. Unfortunately the API syntax to actually initiate the import is a little convoluted and not the easiest process to manage.

I set about writing a PowerShell script to significantly simplify the process of initiating a live-import operation. The script itself is available from github at the following link: https://github.com/jondwaite/vcdliveimport.

The liveimport.ps1 script contained in this repository does the following:

  • Prompts for a credential to be used to connect to both vCloud Director (System context) and vCenter – if you have different usernames/passwords for each you’ll need to adjust this.
  • Enumerates the available vCenter instances registered as Provider Virtual Datacenters (PVDCs) in vCloud Director and allows one to be selected as the source vCenter for the migration.
  • Lists the available VMs in the selected vCenter instance, filters this list based on selectable criteria (e.g. don’t offer to import ‘Guest Introspection’ VMs) and allows the source VM to be selected.
  • Lists available destination Virtual Datacenters (VDCs) in the vCloud Director environment and allows the destination VDC to be selected.
  • Displays the appropriate POST request information to be submitted to vCloud Director to initiate the live-import of this VM.
  • Optionally – Submits the REST API request directly to the vCloud Director environment to actually initiate the import process (a minimal sketch of making this call yourself is shown after this list).
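For reference, submitting the request yourself from PowerShell looks roughly like the sketch below – the URI, XML body and Content-Type are exactly the values the script displays, shown here as placeholder variables:

# $importUri, $importBody and $importContentType hold the POST details displayed by liveimport.ps1
$headers = @{
    'x-vcloud-authorization' = $global:DefaultCIServers[0].SessionId
    'Accept'                 = 'application/*+xml;version=5.6'
}
Invoke-RestMethod -Uri $importUri -Method Post -Headers $headers -ContentType $importContentType -Body $importBody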

An example transcript of this process is shown below. Hopefully this helps someone else out and makes it easier for you to live-import running VMs into vCloud Director.

Jon.

Example Session Transcript:


Working with the vCloud API and PHP SDK – Part 1

Introduction

While I love using the PowerCLI tools for manipulating vCloud Director, sometimes you need to perform actions that require hitting the API directly. Tools such as the RESTClient plugin for Firefox, cURL and HTTPie from the command line are good for interactive manipulation of the API, but what if you want to automate these API interactions?

Fortunately, rather than having to reinvent the wheel, VMware publish a variety of SDKs (currently for PHP, Java and Microsoft .NET) which make this (relatively) straightforward, although sadly the documentation for these is lacking in basic configuration information, which makes actually using them more problematic than it should be. They are still a better alternative than writing code directly against the HTTP API, where you have to deal with decoding and re-encoding the XML objects used by the vCloud API yourself.

In this series I’ll concentrate on the VMware PHP SDK for vCloud Director (PHP SDK). This first post will cover how to install and configure it. Once we have a working environment configured the following articles in this series will detail some (hopefully) useful scripts which use the PHP SDK to perform automation tasks against the vCloud API.

Quick note on VMware PHP SDK versions

The link to download the PHP SDK for vCloud is on VMware’s site here, however if you work for a VMware Service Provider and have access to the vCloud Director for Service Providers code you’ll find a more recent version as follows:

Log in to your ‘My VMware‘ account and select the ‘View & Download Products’ section. From the list select the ‘Download’ link against the ‘vCloud Director for Service Providers’ product.

Next select the ‘Drivers & Tools’ tab and expand the ‘Automation Tools and SDK(s)’ section, then select the Download link against the ‘VMware vCloud SDKs for Service Providers’ item. This will take you to a page where you can download a later (v8.0.0 build 3010704 currently) version of the PHP SDK. (The direct link for this is here, but I believe will only work if you have a valid Service Provider login for My VMware).

Download the .zip or .tar.gz version of the PHP SDK that suits your development environment; as far as I can tell the contents are identical between both versions, so ease of unarchiving is the only difference. For the purposes of this series of posts it shouldn’t matter which version of the SDK (5.5 or 8.0) you use.

None of this makes much sense to me – I know the vCloud Director product past version 5.5 is a service provider-only offering, but since clients of vCloud Powered service providers are just as entitled to use the API as the service providers themselves, surely VMware should make the latest SDK available to everyone?

Configuring your development environment

There are 5 components involved in setting up a functioning development environment with the vCloud PHP SDK:

  • A working PHP installation (No web server required).
  • The PEAR modules ‘HTTP_Request2’ and ‘Log’.
  • The PHP extensions ‘openssl’ and ‘mbstring’ added and enabled.
  • The VMware downloaded PHP SDK files.
  • A Configuration.ini file to control SDK logging.

Unfortunately only the first 2 of these are (partially) covered in the VMware documentation. The following sections detail how to configure a working development environment for all of these.

The documentation included on VMware’s site for the PHP SDK download mentions that the PEAR ‘HTTP_Request2’ package is required to use the PHP SDK; unfortunately it doesn’t mention that 2 additional components are also required (the PEAR Log module and the PHP mbstring extension). Without these you will either not be able to use the SDK at all, or will receive strange error messages.

PHP installation on Linux

If you are using a Linux platform with a package manager you will usually find a packaged PHP distribution available. On most CentOS systems this can be installed with:

$ sudo yum install php

On other Linux distributions the commands will vary so check the documentation for your particular environment, the PHP Documentation has good installation instructions for a variety of platforms. You will also require Pear (PHP Extension and Application Repository framework) in order to be able to install the support packages required by the SDK. Again on CentOS this can be installed with:

$ sudo yum install php-pear

Adding the required PHP extensions on Linux

Your Linux distribution should have an available package to install the mbstring (Multibyte Character support) extension for PHP, e.g. for CentOS Linux:

$ sudo yum install php-mbstring

To install the PEAR modules required for the vCloud PHP SDK:

$ sudo pear install HTTP_Request2 Log

PHP Installation on Windows

On Windows systems I would strongly advise using 64-bit Windows and the PHP version 7 releases from http://windows.php.net/download as this will support native PHP 64-bit integers on 64-bit platforms. This can be useful when dealing with disk capacities (usually expressed in bytes) within the SDK, which could otherwise overflow a 32-bit integer. (Note that even the 64-bit versions of PHP v5.x do not support 64-bit integers). For v7 on Windows you will also need to install the Visual C++ Redistributable for Visual Studio 2015 from Microsoft if you don’t already have it installed – make sure you install the version appropriate to your PHP environment (32-bit/64-bit).

Extract the downloaded PHP .zip file into a new folder (e.g. C:\PHP) and install the Visual C++ Redistributable appropriate to your version if needed.

Copy the ‘php.ini-development’ file included in the download in this folder to ‘php.ini’ – this will serve as the configuration point for your installation and will be used to enable extensions.

To install PEAR on Windows, go to https://pear.php.net/manual/en/installation.getting.php and download ‘go-pear.phar’ into the same folder you extracted PHP into, then install it from a command prompt opened in that folder using:

php go-pear.phar

Note: if you are not using an ‘administrator’ command prompt you will need to change the default path for ‘pear.ini’ to something other than C:\Windows – I just placed it in the PHP directory (C:\PHP\pear.ini).

You will need to double-click the ‘PEAR_ENV.reg’ file to add appropriate environment variables for PEAR to your user account to the Windows registry. You should also add the directory you extracted PHP into to your Windows PATH – this is under Windows ‘Advanced System Settings’. Note that you will need to re-open a command prompt for PATH changes to take effect.

Adding required PHP extensions on Windows

To install the ‘HTTP_Request2’ and ‘Log’ modules from a Windows Command prompt type:

pear install HTTP_Request2 Log

The mbstring (multibyte) extension is usually present (in the ‘ext’ subdirectory of your PHP installation) as ‘php_mbstring.dll’ on Windows, but not enabled by default. The same is true for the openssl extension. To enable these extensions edit your php.ini file and find the lines:

;extension=php_mbstring.dll
:::
;extension=php_openssl.dll

And remove the leading semi-colons (;) then save the file. You can test whether the modules are loaded correctly by typing ‘php -m’ and checking that ‘mbstring’ and ‘openssl’ are listed in the output in the [PHP Modules] section.

VMware PHP SDK Files

The PHP SDK itself can be extracted to any convenient folder – on Linux I’d suggest under your home directory and on Windows pick somewhere easy to locate (e.g. C:\PHPSDK). Ensure that the folder structure remains intact – you should have 3 subdirectories in the extracted SDK named ‘docs’, ‘library’ and ‘samples’. The essential one (and only one required to actually use the SDK) is the ‘library’ folder.

Configuration.ini (pear/Log configuration file)

The VMware documentation makes no mention of it, but the ‘ServiceAbstract.php’ file included in the SDK library (/library/VMware/VCloud/ServiceAbstract.php) relies on both the PEAR Log module (which we installed) and a text file ‘Configuration.ini’ which is read at various points to determine logging options for API interactions.

This is very useful (once you know about it), but the file itself is not supplied with the PHP SDK download and must be manually created. To do this, create a new text file in the folder where you will be working with the SDK (e.g. C:\PHPSDK) named ‘Configuration.ini’ and specify the contents as:

[log_section]
log_handler_name=file
log_file_location=phpsdk.log
log_level=PEAR_LOG_DEBUG

This will log all API interactions to a text file (phpsdk.log) in the current directory and is incredibly useful for troubleshooting – you can control the verbosity of the logging by changing the log_level parameter (as well as the log filename and type using the other options). For full documentation on available options and values see the Pear Log documentation.

config.php (VMware PHP SDK configuration file)

The last piece of configuration required is to copy the file ‘config.php’ from the PHP SDK ‘samples’ folder into the location where you will be developing your PHP code. You should edit the copied file and ensure that the line that reads:

set_include_path(implode(PATH_SEPARATOR, array('.','../library',get_include_path(),)));

is updated to refer to the location where your PHP SDK ‘library’ folder exists. For example, on Windows if the PHP SDK is extracted to C:\PHPSDK then this line should be updated to read:

set_include_path(implode(PATH_SEPARATOR, array('.','C:\PHPSDK\library',get_include_path(),)));

To check whether everything is working, try creating a file ‘test.php’ in the root of your working directory (where you’ve edited config.php and saved Configuration.ini):

<?php
  include './config.php';
?>

If you’ve configured everything correctly then running ‘php test.php’ from a command prompt should return with no output or errors.

Note: If you are using PHP 7 on Windows you may get a warning from the Log.php file included by the SDK which looks similar to:

PHP Deprecated: Methods with the same name as their class will not be constructors in a future version of PHP; Log has a deprecated constructor in C:\PHP\pear\Log.php on line 38

This is harmless and safe to ignore, but if it annoys you and you want to prevent it being displayed you can add a new line to the Log.php file specified in the warning message, as below, just prior to the line beginning ‘public static function factory($handler, …’:

public function __construct(){}

You will probably also need to add this line after the ‘class’ definition in the pear/Log/file.php file.

In the next parts of this series I’ll be using this environment to show you how to achieve some useful interactions with a vCloud platform using the PHP SDK; I’ll update this post with links once these are posted. As always, please leave any feedback in the comments – I try to answer as much as I can.

Jon.

Create an empty vApp in vCloud Director

Sometimes you just need to create a new vApp with no contents at all – maybe for testing, or maybe you want to populate it with VMs built ‘from scratch’ rather than cloned from templates. This is easy to do in the vCloud Director web UI – you just skip the addition of any VM templates or new VMs and can easily create empty vApps – but how about programmatically?

The VMware documentation is remarkably slim in this regard – all the documented methods I could find for vApp creation require either cloning from existing vApp templates, from existing VMs or from uploaded OVF files.

So how do we create a brand-new empty vApp? Turns out it’s pretty simple – once you discover the ‘composeVApp’ method on an Organization VDC supports creation of empty vApps.

If using the REST API we can simply create an XML body document of type ‘composeVAppParams’ and submit it against the OrgVDC’s /action/composeVApp link.

An example XML document body could be:

<?xml version="1.0" encoding="UTF-8"?>
<ComposeVAppParams
  name="MyEmptyVapp"
  xmlns="http://www.vmware.com/vcloud/v1.5"
  xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <Description>My vApp Description</Description>
  <AllEULAsAccepted>true</AllEULAsAccepted>
</ComposeVAppParams>

We then ‘POST’ this document body to the link ‘https://<Cloud Server DNS name or IP address>/api/vdc/<ID of our VDC>/action/composeVApp’, not forgetting to add a header of ‘Content-Type: application/vnd.vmware.vcloud.composeVAppParams+xml’ to the POST request.
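If you already have a PowerCLI session open you can make this POST from PowerShell by borrowing its session key – a minimal sketch (adjust the Accept header API version for your vCD release):

# $orgVdcHref is the href of the target Org VDC; $xmlBody is the ComposeVAppParams document above
$headers = @{
    'x-vcloud-authorization' = $global:DefaultCIServers[0].SessionId
    'Accept'                 = 'application/*+xml;version=5.6'
}
Invoke-RestMethod -Uri "$orgVdcHref/action/composeVApp" -Method Post -Headers $headers -ContentType 'application/vnd.vmware.vcloud.composeVAppParams+xml' -Body $xmlBody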

If we want to accomplish the same thing using PowerShell / PowerCLI it’s easy too (once connected to our cloud using Connect-CIServer):

$vapp = New-Object VMware.VimAutomation.Cloud.Views.ComposeVAppParams
$vapp.Name = "MyEmptyVapp"
$vapp.Description = "My vApp Description"
$myorgvdc = Get-OrgVdc -Name 'My OrgVDC Name'
$myorgvdc.ExtensionData.ComposeVApp($vapp)

No idea if this is ‘officially’ supported or not – so use at your own risk and be aware that the implementation could change in a future release and break this (although I’d be surprised as this is almost certainly the action that the vCD web UI is submitting ‘behind the scenes’ when you manually create an empty vApp).

Jon.

Client Integration Plugin madness

One of the frustrations dealing with the vSphere Web Client has always been the requirement for a browser plugin to import/export OVF templates. In vSphere 6 and vCloud Director 8 this has reached a whole new level of frustration. The issue is that both vSphere 6 and vCloud Director 8 offer to download and install a package called ‘VMware-ClientIntegrationPlugin-6.0.0.exe’. In an ideal world this plugin once correctly installed would work for the OVF import/export/upload/download functionality in both products… right?

Meanwhile in this world, although the package names are identical, the functionality is not – if you’ve installed the vSphere version then you can’t upload/download OVFs or ISOs in vCloud Director and if you’ve got the vCD variant installed then vSphere OVF import/export doesn’t work. Uninstalling and reinstalling the ‘correct’ version fixes the problem (until you need the ‘other’ one again), but can be easier said than done – particularly in the case of shared desktop server administration environments where other users having a browser session open will prevent reinstalling browser plugins.

So how do you tell the two packages apart? In the current releases of vSphere 6.0 Update 1 and vCloud Director v8.0 the packages are significantly different sizes:

  • vCloud Director 8: Package Filename VMware-ClientIntegrationPlugin-6.0.0.exe, File Version 11.0.0.2826, Product Version 6.0.0.2826, File Size 48.8 MB
  • vSphere 6.0 Update 1: Package Filename VMware-ClientIntegrationPlugin-6.0.0.exe, File Version 10.0.0.3637, Product Version 6.0.0.3637, File Size 94.9 MB

So the easiest way to tell is the smaller 48.8 MB file is vCD and the larger 94.9 MB one is vSphere.

If (as I do) you often need to use both versions then maybe consider setting up separate management desktops (or virtual apps) for each so you can easily reach one that’s going to work for you.

Hopefully VMware will fix this in a future release and provide a single integration plugin that works across both products.

Update – 18th March 2016
VMware have just released vCloud Director for Service Providers v8.0.1 (http://pubs.vmware.com/Release_Notes/en/vcd/801/rel_notes_vcloud_director_801.html) which appears to have reverted the vCD Client Integration product to version 5.6.0 – there is also mention in the release notes of the possible clashes between the vSphere and vCD client integration toolsets, so it appears that VMware are at least aware of the issue.

Working with vCloud Metadata in PowerCLI – Part 1

Way back in 2012 Alan Renouf created a PowerCLI module to deal with manipulation of metadata entries for vCloud Director objects – this can be incredibly useful to track related information for these objects. The vCD metadata functionality was enhanced in v5.1 (and then later in 5.5, 5.6 and 8.0) – in particular typed values were added with functionality to use date/time, boolean and numeric values (as well as free-form string text). Also added were security levels so that metadata could be made read-only or hidden (from a tenant perspective) but still accessible/visible to system owners. I’ve taken the PowerShell module that Alan published here and updated it to cope with these enhancements. I’ve also updated the returned fields/views to include the extra attributes (where present) such as security levels of metadata entries.

Note that I am definitely not a professional developer (and most of my PowerShell knowledge comes from Google) so there’s probably significant room for improvement in the code – comment back if you have suggestions for improvement and I’ll update this post.

Use of the module requires a valid connection to a vCloud instance (using Connect-CIServer). This won’t work for versions prior to v5.1 (most of my testing has been with PowerCLI 6 against a v8 vCD deployment) so please use at your own risk and make sure you thoroughly test your own scenarios. I’ll write a follow-up post detailing some example code and usage scenarios which people may find useful in the next few days.

I’d suggest copy/pasting the code (below) into a PowerShell module (.psm1) file and including the module in your scripts as needed.
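As a quick taster ahead of that follow-up post, usage looks something like this – the cmdlet names follow Alan's original module (Get-CIMetaData, New-CIMetaData and Remove-CIMetaData) and the exact parameters for the typed/visibility additions may differ in the final code:

# Illustrative usage only - parameter names for the newer typed/visibility options are assumptions
Import-Module .\CIMetadata.psm1        # or whatever you name the .psm1 file

$vapp = Get-CIVApp -Name 'MyVApp'

Get-CIMetaData -CIObject $vapp                                   # list existing metadata entries
New-CIMetaData -CIObject $vapp -Key 'CostCentre' -Value '12345'  # add an entry
Remove-CIMetaData -CIObject $vapp -Key 'CostCentre'              # remove it again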