Heat Orchestration to automate Swift Installation on a Block Storage Device – Part 1

The OpenStack Orchestration program is designed to provide a human- and machine-accessible service for managing the entire lifecycle of infrastructure and applications within OpenStack clouds, and Heat is the main project within that program. The manual Swift deployment process is very onerous: too many things have to be handled at the same time, and there are too many configuration parameters to keep track of. This is where Heat becomes the conductor of the orchestra.

Over the two parts of this note we will walk through how to install Swift using OpenStack Heat orchestration, with a flash storage array (back-end store) exposed over iSCSI.

In this first part, before getting into the architecture and the implementation, let us define some of the basic terms and the context in which they will be used. We will then set out the features the script should provide and walk through the installer.

(Those familiar with the basic architecture and terms can skip this section and move directly to the Walkthrough of the Installer.)

Basic architecture and terms

  •  Node – a host machine running one or more Swift services
  •  Ring – a set of mappings of Swift data to physical devices
  • Proxy node – Runs the swift-proxy-server processes which proxy requests to the appropriate Storage nodes. The proxy server will also contain the TempAuth service as WSGI middleware.
  • Storage node – Runs the swift-account-server, swift-container-server, and swift-object-server processes which control storage of the account databases, the container databases, as well as the actual stored objects.
  • Zone – A zone is a group of nodes that is as isolated as possible from other nodes (separate servers, network, power, even geography).
  • Controller Node – A node that runs network, volume, API, scheduler and image services. Each service may be broken out into separate nodes for scalability or availability.
  • Heat stack – The collection of resources (instances, volumes, networks, and so on) that Heat launches and manages as a single unit from a template. Heat itself is the integrated project that orchestrates composite cloud applications for OpenStack.
  • Glance – An OpenStack core project that provides discovery, registration, and delivery services for disk and server images. The project name of the Image Service is glance.
  • Snapshot – A point-in-time copy of an OpenStack storage volume or image. Use storage volume snapshots to back up volumes. Use image snapshots to back up data, or as “gold” images for additional servers.
  • Flavor – Alternative term for a VM instance type.
  • Ring builder – Builds and manages rings within Object Storage, assigns partitions to devices, and pushes the configuration to other storage nodes.
  • Nova – OpenStack project that provides compute services.
  • Keypair – An SSH key pair used to authenticate when logging in to Compute instances; in this note it is used to access the proxy and storage nodes.
  • Dashboard – The web-based management interface for OpenStack. An alternative name for horizon.

In this note each Storage node is described as a separate zone in the ring. It is recommended to have a minimum of 5 zones. The ring guarantees that every replica is stored in a separate zone.
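
To make the relationship between zones and replicas concrete, here is a minimal sketch of the swift-ring-builder commands that such a layout translates into. The partition power, IP addresses, ports, device names, and weights below are illustrative values, not the ones the installer generates.

```bash
# Create an object ring with 2^18 partitions, 3 replicas, and a minimum of
# 1 hour between moves of any given partition.
swift-ring-builder object.builder create 18 3 1

# Add one device per zone (one storage node per zone in this note).
# Format: z<zone>-<ip>:<port>/<device> <weight>
swift-ring-builder object.builder add z1-10.0.0.11:6000/sdb1 100
swift-ring-builder object.builder add z2-10.0.0.12:6000/sdb1 100
swift-ring-builder object.builder add z3-10.0.0.13:6000/sdb1 100
swift-ring-builder object.builder add z4-10.0.0.14:6000/sdb1 100
swift-ring-builder object.builder add z5-10.0.0.15:6000/sdb1 100

# Assign partitions to devices; with five zones and three replicas, no two
# replicas of a partition end up in the same zone.
swift-ring-builder object.builder rebalance

# The account and container rings are built the same way, conventionally on
# ports 6002 and 6001 respectively.
```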

The script should implement the following features –

  •  Swift installation
  • Proxy node configuration
  • Storage node configuration
  • Running multiple proxy and storage nodes in tandem
  • Ring balancing
  • File upload/download sanity check

Walkthrough of the Installer

We have broken the installation down into 14 steps for ease of understanding:

  1.  First, the script does a quick check on the contents of the package (for corrupt media and missing files).
  2. Then it waits for the stack name to be entered; this is the name of the main Heat stack used for spawning the proxy and storage nodes.
  3. An ‘auto.yaml’ file is generated, which is the main template for launching the nodes. The nodes are launched as <stack name>_P<n> and <stack name>_S<n>; so, if you specify the stack name as ‘stack1’, your nodes will be stack1_P1, stack1_P2, stack1_S1, stack1_S2, and so on.
  4. After this, the script should run on its own – just sit back and relax.
  5. It checks for snapshots of the staging nodes; if they are present, it picks up the snapshot image_id of the proxy and storage staging nodes from Glance and updates the auto-generated final YAML template.
  6. If snapshots are not present (for example, on a first installation, or if a user has deleted them), it launches the staging nodes STORAGE_NODE and PROXY_NODE, which are used for installing the Swift packages (staging templates). The image id for the staging nodes is the raw Ubuntu image id that the user has provided in config-swift.yaml.
  7. The keypair ‘key.pem’ (used later on for identification of the nodes) is copied to these staging nodes.
  8. Snapshots are taken and uploaded to Glance as STORAGE_SNAPSHOT and PROXY_SNAPSHOT. This saves the package-installation time when spawning subsequent nodes, since their snapshot ids are picked up directly.
  9. The Swift configuration scripts (‘node-configure-scripts’) are pre-copied to these staging nodes before the snapshots are taken.
  10. Staging nodes left over from previous installations are then deleted silently to free resources (RAM, disk), since their snapshots have already been taken.
  11. The ‘auto.yaml’ file is now generated and a stack is created which includes the proxy and storage nodes (a minimal sketch of this step appears after this list). At this point, since the launched nodes already have the corresponding Swift packages (for storage and proxy) pre-installed, only the configuration scripts are run after the nodes boot up.
  12. Once all proxy and storage nodes are spawned, the ring builder script is auto-generated. It contains swift-ring-builder configurations for all the storage nodes that are up and running. iSCSI devices are exposed from the storage nodes and the ring-building algorithm starts.
  13. The rings are created on the proxy node and copied over to all storage nodes, and also to the other proxy nodes if more than one proxy is specified.
  14. Services are started on all nodes and a sample object upload/download is performed as a sanity check (a sketch of this check appears at the end of this walkthrough).
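
As a rough illustration of steps 5, 6, 8 and 11, the sketch below shows the kind of calls the installer wraps: checking Glance for the staging snapshots, generating a minimal ‘auto.yaml’, and creating the stack. The flavor, key name, and configuration-script paths are assumptions for illustration only; the real auto-generated template is considerably richer.

```bash
#!/bin/bash
# Minimal sketch only; names such as PROXY_SNAPSHOT, STORAGE_SNAPSHOT and
# auto.yaml follow this note, while the flavor, key name and script paths
# are hypothetical.
STACK_NAME=$1    # e.g. "stack1" (step 2)

# Steps 5/6: reuse the staging snapshot image ids from Glance if present.
# The full installer instead launches the PROXY_NODE/STORAGE_NODE staging
# nodes and snapshots them (steps 6-10) when these lookups come back empty.
PROXY_IMAGE=$(glance image-list | awk '/PROXY_SNAPSHOT/ {print $2}')
STORAGE_IMAGE=$(glance image-list | awk '/STORAGE_SNAPSHOT/ {print $2}')

# Steps 3/11: generate a minimal HOT template and create the stack.
cat > auto.yaml <<EOF
heat_template_version: 2013-05-23
description: Minimal sketch of the auto-generated proxy/storage stack
resources:
  ${STACK_NAME}_P1:
    type: OS::Nova::Server
    properties:
      image: ${PROXY_IMAGE}
      flavor: m1.medium
      key_name: swift_key
      user_data: |
        #!/bin/bash
        # Run the pre-copied configuration script (step 11); path is hypothetical.
        bash /root/node-configure-scripts/proxy.sh
  ${STACK_NAME}_S1:
    type: OS::Nova::Server
    properties:
      image: ${STORAGE_IMAGE}
      flavor: m1.medium
      key_name: swift_key
      user_data: |
        #!/bin/bash
        bash /root/node-configure-scripts/storage.sh
EOF

heat stack-create -f auto.yaml "${STACK_NAME}"
```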

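And here is a minimal version of the step-14 sanity check, assuming TempAuth on the proxy with its commonly used test credentials; the proxy IP, container name, and credentials are all assumptions.

```bash
# Hypothetical proxy address and TempAuth-style test credentials.
PROXY_IP=192.0.2.10
AUTH="-A http://${PROXY_IP}:8080/auth/v1.0 -U admin:admin -K admin"

echo "hello swift" > sanity.txt
swift $AUTH upload sanity_container sanity.txt        # create container + upload
swift $AUTH list sanity_container                     # the object should be listed
swift $AUTH download sanity_container sanity.txt -o sanity_copy.txt
diff sanity.txt sanity_copy.txt && echo "Swift sanity check passed"
```
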
In the next part of this note, Part 2, we will look at the actual implementation in detail. Until then, do share your own experiences of Swift installation, manual or otherwise.

About the Author: Prathamesh Deshpande is a technology buff with a sound understanding of the SAN/NAS storage stack (operating systems, file systems, protocols) and development experience with IaaS, PaaS, and cloud storage technologies such as OpenStack Swift/Cinder and Ubuntu MaaS. He enjoys deep-dive explorations of all emerging tech trends and actively participates in architectural discussions.
