This page serves as a reference to the Openstack manager blueprint, which is used for bootstrapping Cloudify on Openstack.
This reference only explains the structure and the various values in the blueprint. To understand it better, familiarize yourself with the Cloudify blueprints DSL, the Cloudify Openstack plugin, and the Manager Blueprints Authoring guide.
keystone_username - Username on Openstack.
keystone_password - Password on Openstack.
keystone_tenant_name - Tenant name on Openstack.
keystone_url - Authorization endpoint (AKA Keystone) URL.
region - Region on Openstack (e.g. "region-b.geo-1").
manager_public_key_name - The name on Openstack of the keypair that will be used with the Cloudify manager.
agent_public_key_name - The name on Openstack of the keypair that will be used with Cloudify agents.
image_id - The id of the image to be used for the Cloudify manager host (ensure compatibility with the Prerequisites section).
flavor_id - The id of the flavor to be used for the Cloudify manager host (ensure compatibility with the Prerequisites section).
external_network_name - The name of the external network on Openstack.
use_existing_manager_keypair - A flag for using an existing manager keypair (the keypair should exist both on Openstack, under the name set in manager_public_key_name, and locally, at the path set in manager_private_key_path).
use_existing_agent_keypair - A flag for using an existing agent keypair (the keypair should exist both on Openstack, under the name set in agent_public_key_name, and locally, at the path set in agent_private_key_path).
manager_server_name - The name for the manager server on Openstack.
manager_server_user - The name of the user that will be used to access the Cloudify manager (this input has a default value).
manager_server_user_home - The path to the home directory, on the Cloudify manager, of the user that will be used to access it (this input has a default value).
manager_private_key_path - The path on the local machine to the private key file that will be used with the Cloudify manager. This key should match the public key whose name is set in manager_public_key_name.
agent_private_key_path - The path on the local machine to the private key file that will be used with Cloudify agents. This key should match the public key whose name is set in agent_public_key_name.
agents_user - The default username to be used when connecting to applications' agent VMs (for agent installation).
nova_url - Explicit URL of the Openstack Nova (compute) service endpoint.
neutron_url - Explicit URL of the Openstack Neutron (networking) service endpoint.
resources_prefix - A prefix to be attached to the names of cloud resources.
use_external_resource - If true, an existing external network, to which the router connects for access to the outside world, will be used.
management_network_name - The name of the management network.
management_subnet_name - The name of the management subnet.
management_router - The name of the management router.
manager_security_group_name - The name for the management security group.
agents_security_group_name - The name for the agents security group.
manager_port_name - The name of the port to be associated with the manager server.
manager_volume_name - The name of the volume to be attached to the manager server.
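To make the inputs above concrete, a minimal inputs file for bootstrapping might look roughly like the following. All values here are placeholders for illustration only; the authoritative set of required inputs, and their defaults, is defined in the blueprint itself.

```yaml
# Illustrative inputs file for the Openstack manager blueprint.
# Every value below is a placeholder - replace with your environment's details.
keystone_username: demo-user
keystone_password: demo-password
keystone_tenant_name: demo-tenant
keystone_url: https://keystone.example.com:5000/v2.0/
region: region-b.geo-1
manager_public_key_name: cloudify-manager-kp
agent_public_key_name: cloudify-agent-kp
image_id: 8c096c29-a666-4b82-99c4-c77dc70cfb40   # must satisfy the Prerequisites section
flavor_id: "102"                                  # must satisfy the Prerequisites section
external_network_name: Ext-Net
```

Any input not listed here falls back to the default declared in the blueprint.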
Some of the required inputs may be left empty if the appropriate standard Openstack environment variables (the same settings that are available in the Openstack Horizon dashboard, under API credentials) are set in place before calling the cfy bootstrap command.
Note that, in order to enable this, these inputs technically have an empty ("") default value. This, however, does not mean they aren't mandatory.
Additionally, some of the optional inputs may also be set using standard Openstack environment variables.
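For example, the Keystone credentials could be supplied through the standard Openstack environment variables rather than through the inputs file. The variable names below follow the usual Openstack client convention; the exact mapping between these variables and the blueprint inputs is an assumption here, so verify it against your blueprint version.

```shell
# Standard Openstack credentials, as also shown under
# "API credentials" in the Horizon dashboard.
# All values are placeholders.
export OS_USERNAME=demo-user
export OS_PASSWORD=demo-password
export OS_TENANT_NAME=demo-tenant
export OS_AUTH_URL=https://keystone.example.com:5000/v2.0/
export OS_REGION_NAME=region-b.geo-1
```

With these exported in the shell, the corresponding inputs can be left at their empty ("") defaults when running cfy bootstrap.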
The blueprint builds the following topology on Openstack:
The “Openstack manager” blueprint contains the following nodes:
The management subnet's cidr is arbitrarily set in the blueprint.
cloudify.openstack.nodes.Port, which will serve as the manager server's entry point to the network configuration. Its purpose is to acquire a fixed private IP inside the management subnet; this enables assigning the same private IP to a different host in case the manager server fails. It defines 3 relationships to various network-related nodes.
cloudify.openstack.nodes.Volume, which will serve as the manager server's persistent storage device. Its purpose is to store all Docker-related files, making it possible to recover from a machine failure.
cloudify.nodes.FileSystem, which creates a mount point on the manager server that is mounted on the volume node. Its purpose is to mount the /var/lib/docker directory on the manager server onto a Cinder volume. This way, all the information Docker writes to this directory is persisted even if the server is terminated or becomes inaccessible. To achieve this, it defines 2 relationships:
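The persistent-storage part of this topology could be sketched in blueprint DSL roughly as follows. This is an illustration only: the property names, relationship type names, and the manager_server node name are assumptions, not the actual blueprint's contents.

```yaml
# Illustrative sketch - not the actual manager blueprint.
node_templates:
  manager_volume:
    type: cloudify.openstack.nodes.Volume
    properties:
      resource_id: { get_input: manager_volume_name }

  manager_filesystem:
    type: cloudify.nodes.FileSystem
    properties:
      # Mount Docker's state directory on the Cinder volume,
      # so Docker's data survives the loss of the manager VM.
      fs_mount_path: /var/lib/docker        # property name is an assumption
    relationships:
      # The 2 relationships mentioned above: one to the volume
      # backing the mount, one to the server the mount point
      # is created on. Type names here are hypothetical.
      - type: cloudify.relationships.depends_on_volume
        target: manager_volume
      - type: cloudify.relationships.contained_in_server
        target: manager_server
```

Because the volume is a separate node, a replacement manager server can re-attach it and find the Docker state intact.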
The manager node's configure lifecycle operation is mapped to a method in the configure.py module, which takes the following actions:
This manager blueprint provides support for recovering from manager failures.
Well, think of a scenario where you have already uploaded some blueprints and created deployments using this manager. If at a certain point, for some reason, the VM hosting the manager crashes, or perhaps the Docker container inside the VM is no longer available, it would be nice to have the ability to spin up another VM and use it as our management server.
This is where some of the cloud awesomeness comes into play. This manager blueprint defines 3 crucial types for making this happen:
cloudify.openstack.nodes.FloatingIP - Provides a way to have a fixed, detachable public IP for VMs.
cloudify.openstack.nodes.Port - Provides a way to have a fixed, detachable private IP for VMs.
cloudify.openstack.nodes.Volume - Provides a way to have a persistent, detachable block storage device for VMs.
Having all of these types available makes recovery a rather straightforward process:
If you think about it, this flow exactly describes a heal workflow, where the failing node instance is the management server. In fact, what we do under the hood is simply call the heal workflow in this manner.
The heal workflow is a generic workflow that allows recovering from any node instance failure. To learn more, see the Heal Workflow documentation.
To use this ability we have added a new command in our CLI called cfy recover.
You can use this command from any machine, not necessarily the machine you used to bootstrap your manager. To run it from a different machine, like all other cloudify commands, you must first execute the cfy use command. For example, if we have a manager on ip 192.168.11.66:
cfy use -t 192.168.11.66
From this point onwards, you can execute the recover command if the manager is malfunctioning.
This command is somewhat destructive, since it will stop and delete resources; for this reason, using it requires passing the force flag.
As already mentioned, running recover eventually triggers the heal workflow, so the output will look something like this:
cfy recover -f
Recovering manager deployment
2015-02-17 16:21:21 CFY <manager> Starting 'heal' workflow execution
2015-02-17 16:21:21 LOG <manager> INFO: Starting 'heal' workflow on manager_15314, Diagnosis: Not provided
2015-02-17 16:21:22 CFY <manager> [manager_15314] Stopping node
...
...
2015-02-17 16:22:02 CFY <manager> [manager_server_1eed2->manager_server_ip_3978e|unlink] Task started 'nova_plugin.server.disconnect_floatingip'
2015-02-17 16:22:12 CFY <manager> [manager_server_1eed2->manager_port_ceb8e|unlink] Sending task 'neutron_plugin.port.detach' [attempt 2/6]
2015-02-17 16:22:30 CFY <manager> [manager_server_1eed2->manager_server_ip_3978e|unlink] Task succeeded 'nova_plugin.server.disconnect_floatingip'
2015-02-17 16:22:30 CFY <manager> [manager_server_1eed2->manager_port_ceb8e|unlink] Task started 'neutron_plugin.port.detach' [attempt 2/6]
2015-02-17 16:22:36 LOG <manager> [manager_server_1eed2->manager_port_ceb8e|unlink] INFO: Detaching port 226704ce-fae5-4c2b-aa82-234515ef9e13...
2015-02-17 16:22:37 LOG <manager> [manager_server_1eed2->manager_port_ceb8e|unlink] INFO: Successfully detached port 226704ce-fae5-4c2b-aa82-234515ef9e13
2015-02-17 16:22:37 CFY <manager> [manager_server_1eed2->manager_port_ceb8e|unlink] Task succeeded 'neutron_plugin.port.detach' [attempt 2/6]
...
...
2015-02-17 16:26:42 LOG <manager> [manager_15314.start] INFO: waiting for cloudify management services to restart
2015-02-17 16:27:35 LOG <manager> [manager_15314.start] INFO: Recovering deployments...
...
...
2015-02-17 16:27:42 CFY <manager> 'heal' workflow execution succeeded
Successfully recovered manager deployment
There are a few scenarios in which the recovery workflow is not supported and will not function properly: