Cloud On-Boarding - The True Story

Posted By: Dotan Horovits on June 3, 2012


Everyone wants to be in the cloud. Organizations have internalized the notion and have plans in place to migrate their applications to the cloud in the immediate future. According to Cisco’s recent global cloud survey:

Presently, only 5 percent of IT decision makers have been able to migrate at least half of their total applications to the cloud. By the end of 2012, that number is expected to significantly rise, as one in five (20 percent) will have deployed over half of their total applications to the cloud.

But that survey also reveals that on-boarding your application to the cloud "is harder, and it takes longer than many thought", as David Linthicum put it in his excellent blog post summarizing the Cisco survey. Taking standard enterprise applications that were designed to run in the data center and on-boarding them to the cloud is essentially a reincarnation of the well-known challenge of platform migration, which is never easy. But why does on-boarding to the cloud feel extra difficult? The first reason David identifies is the misconception that the cloud is a "silver bullet". That misconception can lead to poor system design, which may result in application outages, as I outlined in my previous blogs. Another reason David cites is the lack of a well-defined process and best practices for on-boarding applications to the cloud:

What makes the migration to the cloud even more difficult is the lack of information about the process. Many new cloud users are lost in a sea of hype-driven desire to move to cloud computing, without many proven best practices and metrics.

It is about time for a field-proven process for on-boarding applications to the cloud. In this post I'd like to start examining the accumulated experience in on-boarding various types of applications to the cloud, and see if we can extract a simple process for the migration. This is based on my experience and that of my colleagues rather than on any academic research, so I would very much like it to serve as a starting point for an open discussion in the community: sharing experience from different types of migration projects and applications, and iteratively refining the suggested process based on our joint experience.

Examining the n-tier enterprise application use case

As a first use case, it makes sense to examine a classic n-tier enterprise application. For the sake of discussion, I'd like to use common open-source modules, assuming they are well known and so we can play with them freely. As the test-case application, let's take Spring's PetClinic Sample Application and adapt it. We'll use the Apache Tomcat web container and the Grails platform for the web and business-logic tiers, and the MongoDB NoSQL database for the data tier, to simulate a Big Data use case. We can later add the Apache HTTP Server as a front-end load balancer. In case you're wondering, I'm not affiliated with the Apache Foundation; I'm just an open-source enthusiast.

The first step of on-boarding the application to the cloud is to identify the individual services that comprise the application and the dependencies between them. In this use case, since the application is well divided into tiers, it is quite easy to map the services onto the tiers, and the dependencies between the tiers are quite clear; for example, the Tomcat instances depend on the back-end database. Mapping the application's services and their dependencies will help us determine which VMs we should spin up, from which images, how many of each, and in which order. In later posts I'll address additional benefits of this modeling.
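To make this concrete, here is a minimal sketch (in Python, purely for illustration; the service names, instance counts, and structure are assumptions for the PetClinic example, not the actual model used) of such a service map, with a valid start-up order derived from the declared dependencies:

    # Hypothetical service map: each service lists what it must wait for,
    # and how many VM instances we plan to spin up.
    services = {
        "mongod":    {"depends_on": [],         "instances": 2},
        "tomcat":    {"depends_on": ["mongod"], "instances": 2},
        "apache-lb": {"depends_on": ["tomcat"], "instances": 1},
    }

    def start_order(services):
        """Return service names ordered so that dependencies come first."""
        ordered, visiting = [], set()

        def visit(name):
            if name in ordered:
                return
            if name in visiting:
                raise ValueError(f"cyclic dependency involving {name}")
            visiting.add(name)
            for dep in services[name]["depends_on"]:
                visit(dep)
            visiting.discard(name)
            ordered.append(name)

        for name in services:
            visit(name)
        return ordered

    for name in start_order(services):
        spec = services[name]
        print(f"provision {spec['instances']} VM(s) and start '{name}'")

Running this prints mongod first, then tomcat, then apache-lb, which is exactly the order in which we would want the cloud orchestration to bring the tiers up.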

Next let's dive into the specific services and see what it takes to prepare them for on-boarding to the cloud. The first step is to identify the DevOps phases that comprise the service's lifecycle. Typically, a service will undergo a lifecycle of install-init-start-stop-shutdown. We should capture the operational process for each such phase and formalize it into an automated DevOps process, for example in the form of a script. Capturing and formalizing these steps also helps expose many important issues that need to be addressed for the application to run in the cloud, and may even call for further lifecycle phases or intermediate steps. For example, in the case of Tomcat we may want to support deploying a new WAR file without restarting the container. Another example: we noticed that MongoDB may fail during start-up without any failure indication in the OS process status, so simple generic monitoring of the process status wasn't enough, and we needed a more accurate, customized way to know when the service had successfully completed start-up and was ready to serve. Similar considerations arise with almost every application; I will discuss these considerations further in a follow-up post.
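The sketch below (Python, illustration only; the script names and the timeout are assumptions, and the real recipes in the repository use a different format) shows the two ideas from this paragraph: each lifecycle phase becomes an explicit automated step, and "started" is verified against the service itself rather than only against the OS process table:

    import socket
    import subprocess
    import time

    LIFECYCLE = ("install", "init", "start", "stop", "shutdown")

    def run_phase(service, phase):
        """Run the script that formalizes one lifecycle phase, e.g. mongod_start.sh."""
        assert phase in LIFECYCLE
        subprocess.check_call([f"./{service}_{phase}.sh"])  # hypothetical script names

    def wait_until_listening(host, port, timeout=120):
        """Treat MongoDB as 'started' only once it actually accepts connections,
        since the mongod process can exist while start-up has silently failed."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with socket.create_connection((host, port), timeout=5):
                    return True
            except OSError:
                time.sleep(2)
        return False

    run_phase("mongod", "install")
    run_phase("mongod", "start")
    if not wait_until_listening("localhost", 27017):
        raise RuntimeError("mongod did not become ready in time")

The same pattern applies to Tomcat or any other service: the generic lifecycle stays the same, and only the per-service readiness probe (a port check, an HTTP ping, a log-line match) is customized.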

With the break-down of the application into services, and of the services into their individual lifecycle stages, we have a good skeleton for automating the work on the cloud. You are welcome to review the result of this experimentation, available as open source on the CloudifySource GitHub. In my next post I will further examine the n-tier use case and discuss additional concerns that need to be addressed to bring it to a full solution.
