Cloudify Meets Kubernetes - Enhancing The Integration

Posted By: DeWayne Filppi on October 26, 2015



Integrating Cloudify with Kubernetes is compelling for those with hybrid environments (containers, virtualization, cloud, bare metal, even hardware) who would like to leverage Kubernetes' container management instead of Cloudify's, and/or leverage Cloudify's event-driven workflows to perform autoscaling or other automatic processes. In my last post, I described an initial integration with Kubernetes, which focused on installing Kubernetes itself via a Cloudify blueprint and associated plugin. This post reviews more recent work, which focuses on integrating Kubernetes microservices into the Cloudify ecosystem.

Microservices

Given that the Kubernetes raison d'être is microservices, it made sense to enhance the integration in that direction as quickly as possible. By design, microservices are highly dynamic: small, separately deployable building blocks of applications. In Cloudify, most dynamism is provided by workflows, so an easy first step was to add a few workflows that operate a Kubernetes cluster. Such workflows also foreshadow the longer-term goal of using Cloudify autoscaling on Kubernetes clusters.

Workflows

These workflows simply delegate to kubectl on the master. They all share a parameter called master, which is set to the node name of the Kubernetes master that the workflow is to be run against. Another shared pattern is to expose many (but not all) of the parameters that kubectl accepts, with an overrides property serving as a catch-all. These workflows are provided as samples; an actual production blueprint would implement only the workflows relevant to its purpose, which may or may not include the following, and would probably contain others.

Workflow name | Description
------ | -------
kuberun | kubectl run equivalent
kubeexpose | kubectl expose equivalent
kubestop | kubectl stop equivalent
kubedelete | kubectl delete equivalent

The workflows are implemented in the workflows.py file. A sample implementation:
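The original sample isn't reproduced here, but the sketch below shows roughly what a kuberun workflow could look like, assuming it delegates to a kubectl operation exposed by the master node. The operation name and parameter handling are illustrative assumptions, not the plugin's actual interface.

```python
# Illustrative sketch only -- the operation name ('kube.kubectl_run') and the
# exact parameter handling are assumptions, not the plugin's actual code.
from cloudify.decorators import workflow
from cloudify.workflows import ctx


@workflow
def kuberun(master, name, image, overrides=None, **kwargs):
    """Rough equivalent of 'kubectl run', delegated to the Kubernetes master."""
    master_node = ctx.get_node(master)   # node named by the 'master' parameter
    for instance in master_node.instances:
        # Ask the master node instance to run kubectl with the supplied
        # arguments; 'overrides' passes raw kubectl options straight through.
        instance.execute_operation(
            'kube.kubectl_run',
            kwargs={'name': name, 'image': image, 'overrides': overrides})
```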



Microservice node

The workflows provide a convenient front end for commanding a Kubernetes cluster, making Kubernetes accessible via both the Cloudify CLI and REST API. This is good, but it stops short of providing any kind of TOSCA integration. A TOSCA orchestration is defined by its nodes and relationships, so a microservice needs a node abstraction of its own.

The development of the cloudify.kubernetes.Microservice node occurred in three phases that address three use cases:

  1. Cloudify oriented definition.
  2. Embedded Kubernetes YAML.
  3. Reference to external Kubernetes manifest, with overrides.

It should be noted that this node requires the cloudify.kubernetes.relationships.connected_to_master relationship in order to grab connection information. I also added install (default: true) and install_docker (default: false) properties to the cloudify.kubernetes.Master node, to enable Docker installation and to allow simply connecting to a running Kubernetes cluster rather than installing one.
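For example, a blueprint that merely attaches to an existing cluster might set these properties as in the hypothetical snippet below; only the node type and the two property names come from the text, the rest is illustrative.

```yaml
  # Hypothetical snippet: connect to an already-running cluster instead of
  # installing one ('install' and 'install_docker' as described above).
  kubernetes_master:
    type: cloudify.kubernetes.Master
    properties:
      install: false         # do not install Kubernetes, just connect
      install_docker: false  # do not install docker either
```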

Cloudify oriented definition

This mode of operation abstracts away the Kubernetes API and defines properties that automate service creation using the kubectl run and expose commands in a very specific way. The relevant properties:

Property | Description
--------------- | ---------------------
name | service name
image | image name
port | service listening port
targetport | container port (default: port)
protocol | TCP/UDP (default: TCP)
replicas | number of replicas (default: 1)
runoverrides | JSON overrides for kubectl "run"
expose_overrides | JSON overrides for kubectl "expose"

The plugin function that implements this use case is in the tasks.py file.
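That function isn't reproduced here, but conceptually it assembles kubectl run and kubectl expose commands from the properties above, roughly along these lines. This is a sketch: the function name and the execution mechanism are assumptions, only the property names follow the table.

```python
# Sketch of the Cloudify-oriented service creation task. Property names follow
# the table above; the function name and execution mechanism are assumptions.
from cloudify import ctx
from cloudify.decorators import operation


@operation
def start(**kwargs):
    props = ctx.node.properties
    run_cmd = 'kubectl run {0} --image={1} --port={2} --replicas={3}'.format(
        props['name'], props['image'],
        props.get('targetport', props['port']),
        props.get('replicas', 1))
    expose_cmd = 'kubectl expose rc {0} --port={1} --protocol={2}'.format(
        props['name'], props['port'], props.get('protocol', 'TCP'))
    # runoverrides/expose_overrides would be translated into extra kubectl
    # flags and appended to the commands above.
    for cmd in (run_cmd, expose_cmd):
        ctx.logger.info('running on master: {0}'.format(cmd))
        # actual execution on the master (e.g. over SSH) is omitted in this sketch
```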

Kubernetes Embedded YAML

Rather than simply running commands using kubectl options, services can (and should) be configured using native Kubernetes YAML manifests. The second mode of operation allows embedding native Kubernetes manifest YAML directly in the Cloudify blueprint. An example:
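The original example isn't shown here; a hypothetical Microservice node with an embedded manifest might look like the following. The node type, the config property, and the relationship come from the description above; the manifest contents and the web_port input are purely illustrative.

```yaml
  # Hypothetical example of embedded Kubernetes YAML under the 'config' property.
  nginx_service:
    type: cloudify.kubernetes.Microservice
    properties:
      name: nginx
      config:
        apiVersion: v1
        kind: ReplicationController
        metadata:
          name: nginx
        spec:
          replicas: 2
          selector:
            app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx
                  ports:
                    - containerPort: { get_input: web_port }  # intrinsic functions work here
    relationships:
      - type: cloudify.kubernetes.relationships.connected_to_master
        target: kubernetes_master
```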

By embedding the YAML directly in the blueprint, it is a simple matter to use intrinsic functions to populate it with other blueprint values. To use embedded YAML, only the config property is used. All subordinate YAML statements are exported (after substitution) to a standard YAML manifest that is fed to the kubectl create statement on the master node.

Kubernetes File With Overrides

This mode came from imagining that users might want to keep maintaining their existing Kubernetes manifests, yet reference them from a Cloudify blueprint. In a hybrid cloud environment, this only makes sense in a dynamic orchestration if existing values in the manifest can be overridden to reflect values in the blueprint. For example, if we imagine a Kubernetes-hosted web tier combined with a non-Kubernetes-hosted back end, we would want to feed in the host and port of the back-end service without modifying the Kubernetes manifest file. Overrides let you do this. An example configuration:
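The original configuration isn't reproduced here; the sketch below shows the general shape, assuming the node takes a manifest file path plus a list of override expressions. The property names and the override syntax are illustrative guesses, not the plugin's exact schema.

```yaml
  # Illustrative only: property names and override syntax are assumptions.
  web_service:
    type: cloudify.kubernetes.Microservice
    properties:
      name: web
      config_path: resources/service.yaml   # existing Kubernetes manifest
      overrides:
        # Inject the non-Kubernetes back end's address into the manifest,
        # using concat to wrap the substituted value in quotes.
        - { concat: ["['spec']['template']['spec']['containers'][0]['env'][0]['value'] = '",
                     { get_attribute: [ backend_host, ip ] }, "'"] }
    relationships:
      - type: cloudify.kubernetes.relationships.connected_to_master
        target: kubernetes_master
```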

Note that the overrides are a bit verbose to accommodate quoting challenges. The main need for the intrinsic concat function is to add required quotes to string values in the Kubernetes manifest. Note also that the actual substitution is Python code that references the YAML document as the nested dicts produced by parsing the target file (service.yaml in this case) with the standard Python yaml module.
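In other words, applying an override boils down to something like the following simplified sketch of the mechanism described above (not the plugin's actual code; the manifest path and override shown are examples):

```python
# Simplified sketch of how an override maps onto the parsed manifest.
import yaml

# Parse the existing manifest into nested dicts/lists.
with open('service.yaml') as f:
    manifest = yaml.safe_load(f)

# An override like "['spec']['ports'][0]['port'] = 8080" corresponds to
# plain dict/list indexing on the parsed document:
manifest['spec']['ports'][0]['port'] = 8080

# The modified document is then dumped back to YAML and fed to 'kubectl create'.
print(yaml.safe_dump(manifest, default_flow_style=False))
```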

Conclusion

The code is in the same place, now at version 1.5. This iteration makes the first forays toward a solid Kubernetes integration, with much left to do. Comments welcome.

Bonus

Come see our very own Sivan Barzily and Ran Ziv, who will be presenting a demo at OpenStack Tokyo in the Expo Hall (Marketplace Theater) on "How to Build Cloud Native Microservices with Hybrid Workloads on OpenStack" at 4pm on Tuesday, October 27th.

For a deeper dive on Cloudify and Kubernetes in a hybrid environment on OpenStack, watch the video below:

