Orchestrating Kubernetes On OpenStack

Posted By: DeWayne Filppi on November 1, 2015

OpenStack | OpenStack Summit | TOSCA Cloud Orchestration | Microservices | Kubernetes Orchestration | OpenStack NFV | Kubernetes Cluster

In my previous post, I discussed enhancing the original, basic Kubernetes plugin into a version that was reasonably functional. That version was designed to use Fabric and operate on preexisting machines (virtual or “bare metal”). This post discusses the changes needed to create the same hybrid deployment as before, but hosted on OpenStack.


Executive Summary

For anyone interested in the end result without the technical details, the advancement since the previous post consists of porting the plugin for use in a Cloudify-managed environment, and creating an OpenStack blueprint that:

  1. Provisions virtual machines to hold a Kubernetes cluster and a MongoDB instance.
  2. Installs Kubernetes and MongoDB separately.
  3. Deploys and configures a distributed Node.js app on the hybrid platform: Node.js and the Nodecellar webapp as a Kubernetes pod, and MongoDB (mongod) and the related Nodecellar database.

The code is on github. A video describing the orchestration project at a high level is available on the Cloudify website.

Some Technical Details

Replacing Fabric

Using the Fabric plugin is great both for quickly developing blueprints and plugins, and for non-cloud deployment targets without a manager. For a cloud such as OpenStack, the standard managed approach is more desirable. All of the actual Fabric task code resides in the start-master-ubuntu14-tasks.py and start-node-ubuntu14-tasks.py files. The porting effort boils down to replacing Fabric calls (e.g. run, sudo, put) with calls to the Python standard "subprocess" library. A representative example of such a translation:
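As a rough sketch of what this translation looks like (the helper names below are illustrative, not the plugin's actual functions), Fabric's remote `run`/`sudo` calls can be replaced by thin local wrappers around `subprocess`, since the host agent already executes on the target machine:

```python
import subprocess

def run(cmd):
    """Local stand-in for Fabric's run(): execute a shell command on the
    machine the agent is running on and return its stripped stdout."""
    return subprocess.check_output(cmd, shell=True).decode().strip()

def sudo(cmd):
    """Local stand-in for Fabric's sudo(): same as run(), with privilege
    escalation prepended."""
    return run("sudo " + cmd)

# A Fabric task body like
#     sudo('systemctl start kubelet')
# becomes, unchanged except for where it executes:
#     sudo('systemctl start kubelet')
```

Because the command now runs locally via the host agent rather than over SSH from the manager, no Fabric connection configuration is needed at all.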


Another minor side effect of replacing the Fabric plugin is removing the Fabric configuration from the blueprint itself. The plugin executor also gets changed from central_deployment_agent to host_agent.

Getting Late Bound Runtime Properties

One side effect of moving to OpenStack is that IP addresses are no longer statically defined. Since the blueprint must configure the Kubernetes microservice with the IP address of the MongoDB instance, some means of getting that information at runtime, after the MongoDB host has been started, must be used. The intrinsic get_attribute function won’t work in this case, so the plugin supplies a custom syntax for injecting runtime properties: @{node,attribute}. Example:

        - "['spec']['containers'][0]['env'][0]['value'] = '@{mongod_host,ip}'"

The custom syntax is implemented with some simple regex manipulation, and has the added virtue of being more readable than the intrinsics anyway.
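A minimal sketch of how such a substitution could be implemented (the actual plugin's lookup mechanics differ; here runtime properties are modeled as a plain dict keyed by node name):

```python
import re

# Matches @{node,attribute}, tolerating whitespace around the parts.
PATTERN = re.compile(r"@\{\s*([^,}]+?)\s*,\s*([^}]+?)\s*\}")

def resolve(text, runtime_properties):
    """Replace every @{node,attribute} token in text with the value of
    runtime_properties[node][attribute]."""
    def replace(match):
        node, attr = match.group(1), match.group(2)
        return str(runtime_properties[node][attr])
    return PATTERN.sub(replace, text)

props = {"mongod_host": {"ip": "10.0.0.5"}}
line = "['spec']['containers'][0]['env'][0]['value'] = '@{mongod_host,ip}'"
print(resolve(line, props))
# -> ['spec']['containers'][0]['env'][0]['value'] = '10.0.0.5'
```

A single regex pass over the override strings is all that is needed, which is why the approach stays so readable.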


The port of the “local mode” blueprint and plugin to OpenStack is straightforward, as shown above. The remainder of the blueprint construction is just ensuring the proper inter-node dependencies and defining security groups and public IPs so everything can talk. Next on the list: container monitoring and autoscaling for Kubernetes provided by Cloudify. As always, comments welcome.

For a deeper dive on Cloudify and Kubernetes in a hybrid environment on OpenStack, watch the video below:
