Using VMware Container Service Extension (CSE)

Yesterday I wrote a post covering the container hosting options currently available from VMware. As we’ve recently deployed one of these options, CSE, in our environment, I thought it would be useful to walk through a sample workflow showing how the service functions and how customers can use it to deploy and manage CSE clusters, and then run micro-service applications on those clusters.

There are a few requirements on the tenant side which must be completed prior to any of this working:

  • An Organizational Administrator login to the vCloud platform where CSE is deployed.
  • Access to a virtual datacenter (VDC) with sufficient CPU, Memory and Storage resources for the cluster to be deployed into.
  • An Org VDC network which can be used by the cluster and has sufficient free IP addresses in a Static Pool to allocate to the cluster nodes (clusters take 1 IP address for the ‘master’ node and an additional address for each ‘worker’ node deployed).
  • A client machine with Python v3 and the vcd-cli and container-service-extension packages installed.
  • The {$HOMEDIR}\.vcd-cli\profiles.yaml file edited to add the CSE extension to vcd-cli.
  • The kubectl utility installed to administer the Kubernetes cluster once it is deployed and working. kubectl can be obtained most easily from the Kubernetes documentation.

Detailed instructions for the client setup can be found in the CSE documentation at https://vmware.github.io/container-service-extension/#tenant-installation. Note that on a Windows platform the .vcd-cli folder and profiles.yaml file will not be created automatically, but you can create the folder manually from a DOS prompt and then use vcd-cli to log in and out of your cloud provider. This will cause profiles.yaml to be generated in the .vcd-cli folder, and the file can then be edited in your favourite text editor to add the required CSE extension lines.
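As a rough sketch, the sequence on a Windows client looks something like this (the cloud URL, Organization and user names are placeholders for your own environment, and vcd-cli will prompt for the password):

C:\Users\jon>mkdir %USERPROFILE%\.vcd-cli
C:\Users\jon>vcd login vcloud.example.com MyOrg myuser
C:\Users\jon>vcd logout

The lines to add to profiles.yaml to enable the CSE client extension (as given in the CSE tenant installation documentation) are:

extensions:
- container_service_extension.client.cse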

Deploying a Cluster with CSE

When deploying a cluster you will need to know the names of the storage profile and the Org VDC network which the cluster will use. The easiest way of obtaining these is either from the vCloud portal or by using the ‘vcd vdc info’ command when logged in to your environment:

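For example:

C:\Users\jon>vcd vdc info

The output lists the storage profiles and Org VDC networks available in the VDC.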

If you have multiple VDCs available to you, you can use the ‘vcd vdc use <VDC Name>’ command to select which one to work with.

In this example we will be using the ‘Tyrell-Servers’ network and the ‘CHC Performance’ storage profile.

To retrieve a list of available cluster deployment templates that the Service Provider has made available to us we can use the vcd cse template list command:

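For example:

C:\Users\jon>vcd cse template list

The listing shows each available template name and indicates which one is the default.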

In this example only the Photon OS template is available, and it is also the default template. CSE actually comes with two templates (Photon OS v2 and Ubuntu Linux 16.04), but I’ve only installed the Photon OS v2 template in my lab environment. The default template will be used if you do not specify the ‘--template’ switch when creating a cluster.

The cluster create command takes a number of parameters, which are documented on the CSE page.


Be careful with the memory specification as it is in MB and not GB.

I chose to generate a public/private key pair to access the cluster nodes without needing a password, but this is optional. If you want to use key authentication you will need to generate a key pair and specify the public key filename in the cluster creation command using the --ssh-key switch.
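If you need to generate a key pair, something like the following will work from any client with the OpenSSH tools available (the filename is just an example):

ssh-keygen -t rsa -b 2048 -f id_rsa_cse

This writes the private key to ‘id_rsa_cse’ and the public key to ‘id_rsa_cse.pub’; the .pub file is the one referenced by the --ssh-key switch.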

To deploy a cluster with 3 worker nodes into our VDC where each node has 4GB of RAM and 2 CPUs using my public key and the network and storage profile identified above:

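A sketch of the command (the cluster name and key file are examples; it is worth checking ‘vcd cse cluster create --help’ for the exact switch names in your CSE release):

C:\Users\jon>vcd cse cluster create myCluster --nodes 3 --cpu 2 --memory 4096 --network Tyrell-Servers --storage-profile "CHC Performance" --ssh-key id_rsa_cse.pub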

The deployment process will take several minutes to complete as the cluster VMs are deployed and started.

In the vCloud Director portal we can see the new vApp that has been deployed with our master and worker nodes inside it, and we can also see that all 4 VMs are connected to the network we specified.


To see the details of the nodes deployed we can use ‘vcd cse node list <cluster name>’:

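For our example cluster:

C:\Users\jon>vcd cse node list myCluster

The output includes each node’s name and its IP address on the ‘Tyrell-Servers’ network.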

To manage the cluster with kubectl, we need a configuration file for Kubernetes containing our authentication certificates. kubectl by default looks for a file named ‘config’ in a folder called ‘.kube’ under the current user’s home directory. The config file itself can be downloaded using CSE. To create the folder and write the config file:

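A sketch of the commands on a Windows client (‘vcd cse cluster config’ writes the kubeconfig for the named cluster to standard output, so it can simply be redirected into a file):

C:\Users\jon>mkdir %USERPROFILE%\.kube
C:\Users\jon>vcd cse cluster config myCluster > %USERPROFILE%\.kube\config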

If you have multiple deployed clusters you can create separate config files for each one (with different file names) and use the --kubeconfig= switch on kubectl to select which one to use.

To test kubectl we can ask for a list of all containers (‘pods’ in Kubernetes) in the cluster; the ‘--all-namespaces’ switch shows system pods as well as any user-created pods (which we don’t have yet). This must be run from a machine that has network connectivity with the deployed nodes (the ‘Tyrell-Servers’ network in this example):

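For example:

C:\Users\jon>kubectl get pods --all-namespaces

At this point only the kube-system pods should be listed, as we haven’t deployed anything of our own yet.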

 

Cluster Scaling

Adding Nodes to Clusters

If we need to add worker nodes to a cluster this is accomplished with the ‘vcd cse node create’ command. For example, we can add a 4th worker node to our ‘myCluster’ cluster as follows:

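A sketch of the command, using the same sizing, network and storage profile as the original deployment (again, ‘vcd cse node create --help’ will confirm the exact switches for your CSE release):

C:\Users\jon>vcd cse node create myCluster --nodes 1 --cpu 2 --memory 4096 --network Tyrell-Servers --storage-profile "CHC Performance" --ssh-key id_rsa_cse.pub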

The node list (‘vcd cse node list myCluster’) now shows our cluster with 4 worker nodes, including the new one.


Removing Nodes from Clusters

To remove a cluster member is just as easy using the ‘vcd cse node delete’ command:

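For example, to remove the node added above (the node name here is a placeholder; use the name shown by ‘vcd cse node list’):

C:\Users\jon>vcd cse node delete myCluster node-xxxx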

You will be prompted to confirm the node deletion. If you have deployed container applications, you should ensure that the node is properly drained and/or that your replica sets and deployments are configured so that removing the node will not impact your applications.

 

Cluster Host Affinity

One item that CSE does not deal with yet is creating vCloud anti-affinity rules to ensure that your worker nodes are spread across different physical hosts. With appropriately configured applications, this means a host failure will not impact the availability of your deployed services. It is reasonably straightforward to add anti-affinity rules in the vCloud portal though.

Our test cluster is back to 3 worker nodes following the deletion example.


In the vCloud portal we can go to ‘Administration’, select our virtual datacenter in the left pane, and then open the ‘Affinity Rules’ tab.


Clicking the ‘+’ icon under Anti-Affinity Rules allows us to create a new rule to keep our worker nodes on separate hosts.


Provided the VDC has sufficient backing physical hosts, the screen will update to show the new rule and confirm that it has been applied, separating the worker nodes onto different hosts.


Of course, if the host running the master node experiences a failure then the master will be unavailable until the VMware platform restarts its VM on another host.

 

Application Deployment using kubectl

Of course now that our cluster is up and running, it would be nice to actually deploy a workload to it. The ‘sock shop’ example mentioned in the CSE documentation is a good example application to try as it consists of several pods running in a separate namespace.

First we use kubectl to create the namespace:

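C:\Users\jon>kubectl create namespace sock-shop
namespace "sock-shop" created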

Now we can deploy the application into our namespace from the microservices-demo project on GitHub. You can read more about the sock-shop demo app at https://github.com/microservices-demo/microservices-demo.

C:\Users\jon>kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"
deployment "carts-db" created
service "carts-db" created
deployment "carts" created
service "carts" created
deployment "catalogue-db" created
service "catalogue-db" created
deployment "catalogue" created
service "catalogue" created
deployment "front-end" created
service "front-end" created
deployment "orders-db" created
service "orders-db" created
deployment "orders" created
service "orders" created
deployment "payment" created
service "payment" created
deployment "queue-master" created
service "queue-master" created
deployment "rabbitmq" created
service "rabbitmq" created
deployment "shipping" created
service "shipping" created
deployment "user-db" created
service "user-db" created
deployment "user" created
service "user" created

We can see deployment status by getting the pod status in our namespace:

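C:\Users\jon>kubectl get pods -n sock-shop

While the container images are being pulled, some pods will typically show a status of ‘ContainerCreating’.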

After a short while all the pods should have been created and show a status of ‘Running’.


The ‘sock-shop’ demo creates a service which listens on port 30001 on all nodes (including the master node) for HTTP traffic, so we can get our master node IP address from ‘vcd cse node list myCluster’ and open this page in a browser:

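For example, if the master node’s address from the node list was 192.168.0.10 (a placeholder; use the address reported in your environment), we would browse to:

http://192.168.0.10:30001/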

And here’s our deployed application running!


Summary / Further Reading

Of course there’s much more that can be done with Docker and Kubernetes, but hopefully I’ve been able to demonstrate how easily a cluster can be deployed using CSE and how micro-services applications can be run on this platform.

For further reading on kubectl and all the available functionality I can recommend the Kubernetes kubectl documentation at https://kubernetes.io/docs/reference/kubectl/overview/. In fact the entire Kubernetes site is well worth a read for those considering deployment of these architectures.

As always, comments, feedback, suggestions and corrections are welcome.

Jon.

VMware Container Solutions

VMware appears to have gone a little ‘mad’ with regards to containerisation (or containerization for any American readers) lately. Last week saw the release of Pivotal Container Service (PKS), which was launched at VMworld 2017 US back in August. With this there are now a total of three VMware technologies enabling customers to run micro-service applications in their environments. So why three different products to do the same thing? Well, they are targeted at different environments and use-cases, and it actually makes a lot of sense for VMware to have solutions for all three scenarios. Of course there’s always the fourth option of building your own container hosting platform from scratch on a VMware platform, but let’s concentrate for now on those provided by VMware.

So what are the available solutions?

Pivotal Container Service (PKS)

This was announced at VMworld 2017 and recently became available for download. PKS is a full-stack solution that manages both the initial formation of clusters to support containerised applications and their ‘day 2’ operations once deployed. While PKS could be deployed in an Enterprise environment (and may be, for organisations using containerised applications at significant scale), it appears to be targeted more towards cloud service providers wishing to offer a managed/hosted platform for multiple tenants.

vSphere Integrated Containers (VIC)

VIC has been around for a while now (it is based on VMware’s Project Bonneville, which started back in 2015). It has recently been updated to v1.3.1, having gained the capability to use Docker hosts natively at version 1.2 (prior to this, VMware Host Containers had to be used). VIC supports vSphere version 6.0 and upwards and is primarily targeted at Enterprise customers wishing to provide a managed container hosting environment within their own infrastructure.

Container Service Extension (CSE) for vCloud Director

Sitting somewhat in between the other offerings, VMware has also released CSE via an open source GitHub repository. CSE is targeted at Service Providers using VMware’s vCloud Director platform who wish to make delivering container hosting to tenants much easier. It provides an extension to vCloud Director which allows the creation and maintenance of clusters of VMs running Docker and Kubernetes.

Comparing the solutions

The table below shows a summary of the three options:

 

Solution | Pivotal Container Service (PKS) | vSphere Integrated Containers (VIC) | Container Service Extension (CSE)
--- | --- | --- | ---
Current release | v1.0.0 GA | v1.3.1 | v0.4.2
Container Runtime | Docker | Docker & Virtual Container Host (VCH) | Docker [1]
Container Management | Kubernetes | VMware Admiral | Kubernetes [1]
Container OS | BOSH | Virtual Container Host (Photon OS based) | Any (Ubuntu & Photon provided)
Container Registry | VMware Harbor | VMware Harbor | Any (none provided)
Deployed to | Bare metal / VM | Bare metal / vSphere VM | vCloud Director Virtual Datacenter (VDC)
Multi-tenant Supported | Yes | Yes | Yes
Network Support | VMware NSX-T | vSphere & VMware NSX-V | Org VDC Networks (vCloud Director) / VMware NSX-V
Licensing / Support | Open Source, paid support available from Pivotal | Open Source, vSphere S&S support covers VIC | Open Source, Service Provider support
Primarily Targeted At | Service Providers & Enterprises using containers at scale | Enterprise | Service Providers / vCloud Tenants

[1] CSE allows service providers to provide any versions of Docker and Kubernetes in their templates. This can allow much more up-to-date versions than those supported in PKS or VIC.

CSE deployment for a Service Provider

I’ve recently been involved with deploying CSE to our own vCloud Director hosting platform. The VMware github.io page is extremely useful and well documented for getting up and running with CSE, so I won’t repeat that here.

The main advantages it offered us as a service provider were:

No new billing / Integration required
This is a huge deal for most service providers: integrating any new platform offering can be time-consuming (and therefore expensive), not just in the time taken to deploy the components and get them all working correctly (including alerting, monitoring etc.), but also in the often-overlooked effort required to correctly meter platform consumption and ensure that customer bills correctly reflect the resources their environments have consumed. Taking the ‘full stack’ of PKS and offering this as a service would involve considerable work, but with CSE this workload is effectively neutralised, since clusters are deployed directly into tenant virtual datacenters (VDCs) and service providers will already be metering and billing customers for the resources consumed in those VDCs.

No new licensing
As there is no additional licensing for CSE, it is extremely easy to deploy in a service provider platform.

No new security model
Since all tenant interaction with CSE is via the vCloud Director API, there is very little work required (if any) to publish the service to customers, as most Service Providers will already make the vCD API accessible to their tenants. Additionally, since the CSE service itself integrates directly with vCloud Director’s RabbitMQ backend, it is likely that very few security or firewall changes are required either.

Flexible environment
One of the really nice aspects of CSE is that the templates made available to tenants to deploy into clusters are fully customisable. This means that service providers can choose to offer additional templates beyond the 2 examples provided ‘out of the box’ with CSE. For example, if a Service Provider wishes to offer a ‘bleeding edge’ template which has the absolute latest releases of Docker and Kubernetes (and perhaps add additional packages to the deployed images, such as Harbor or a ceph or glusterfs client), this is reasonably straightforward to do. The downside is that maintaining and updating these templates has to be done regularly to ensure that they include all appropriate bug fixes, security patches and updates.

Note that at this time the VMware documentation doesn’t yet include instructions for modifying or adding CSE templates; I’ll write up a separate post on how I did this in our environment, which may prove useful for others deploying CSE into their own environments.

Other CSE Considerations

Of course no platform is ever perfect, and the following should be noted too as potential pitfalls or things to be aware of when considering CSE:

No registry service by default
Both PKS and VIC provide container registry services (to deal with storing, securing, scanning and replicating container images) based on VMware’s Project Harbor, which is a very nice registry system. While Harbor can be added to clusters deployed with CSE, it isn’t there by default in the templates currently provided with CSE.

No persistent or dynamic Kubernetes volumes
Containers by design are meant to be ephemeral and stateless, so they shouldn’t store any persistent data or require backup protection. Of course most business applications (including those delivered as containerised images) generally need some form of permanent/persistent storage behind them. In Kubernetes environments this is generally accomplished through persistent volumes, which are mapped into containers at runtime and allow data to be retained. CSE currently has no provider for persistent volumes, which means that external storage is required. This can however be delivered from a variety of sources: other databases running in the environment, file or object storage services, etc. I’m currently looking into easy ways to add dynamically provisioned persistent volumes to a CSE cluster and will write this up as a separate post when done.

Template maintenance
As mentioned previously, the templates deployed by CSE are completely flexible and can be easily customised by editing their deployment scripts. The process of maintaining the templates is reasonably manual, though: it requires stopping the CSE service, patching and updating the templates in a vCloud Director shared catalog, and then re-enabling the CSE service. It would be nice to have a way to automate the rebuild of templates and to allow the CSE service to remain online while this happens.

Relative immaturity
The CSE service is a very ‘early’ release and a number of bugs are still being fixed. There’s nothing too serious that I’ve encountered yet, but occasionally templates will fail to build correctly (generally due to failures in 3rd party repositories) and it can take time to identify and resolve these issues. Fortunately the VMware developers have been extremely fast and active in responding to issues raised in the CSE github repository and every issue I’ve found has been very quickly fixed.

Summary

Hopefully this post has given you an idea of the capabilities and features of the three current VMware container hosting solutions, and a better idea of what the Container Service Extension for vCloud Director does. I’m aiming to write some follow-up posts on CSE covering how we have deployed it into our environment, how new templates can be created (and existing templates customised), and how to address some of the currently missing features such as integrating Harbor as a registry service. Let me know in the comments if there are any areas you are particularly interested in and I’ll see what I can do. I’ve also written a session abstract proposal to present a Service Provider view of CSE at VMworld US 2018, so I’m hoping that will be accepted too.

References / Links

Some of the components mentioned may not be familiar so I’ve provided links to each one below:

BOSH: https://bosh.io/
Docker: https://www.docker.com/
Harbor: https://vmware.github.io/harbor/
Kubernetes: https://kubernetes.io/
Ubuntu Linux: https://www.ubuntu.com/
VMware Photon OS: https://vmware.github.io/photon/

As always, comments & corrections are welcome. I’m reasonably new to the whole ‘containerised applications’ scene so there may well be inaccuracies in this post(!)

Jon.