Heat Orchestration to automate Swift Installation on a Block Storage Device – Part 2

The manual Swift deployment process is onerous: too many things have to be handled at the same time, and there are too many configuration parameters. Over the course of two notes we are talking through how to install Swift using OpenStack Heat orchestration for a Flash Storage Array (backend store) supporting iSCSI. In part 1 of this note we defined some of the basic terms and the context in which they are used, listed some features the script should possess, and took a look at the installer. In this part we will look at the details of the implementation.

Implementation details

In this section, we walk through the step-by-step code flow of the installer.

set_up_logging()

  • Sets up the logging facility that the installer uses to record all of its output.
  • Logs contain DEBUG/INFO/WARNING/ERROR/CRITICAL messages for troubleshooting.
  • Each log entry is displayed in the following format:
  • Timestamp LevelName Handler [-] Message ProcessId FuncName File:LineNo
  • Example: 2014-01-21 18:17:17,456 INFO swift [-] A Flash Storage Array Swift Installation version 1.0 from (pid=28851) <module> install-swift.py:635
  • Installer logs are written to logs/install-<timestamp>.log (for example, logs/install-2014-01-27-15:30:37.log), named with the timestamp at which the script was invoked.
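A minimal sketch of what set_up_logging() might look like is shown below. The format string and log directory follow the conventions above, but the exact handler set-up inside install-swift.py is an assumption.

    import logging
    import os
    import time

    def set_up_logging(log_dir='logs'):
        # Sketch only: one log file per run, named with the invocation timestamp.
        if not os.path.exists(log_dir):
            os.makedirs(log_dir)
        log_file = os.path.join(
            log_dir, 'install-%s.log' % time.strftime('%Y-%m-%d-%H:%M:%S'))
        fmt = ('%(asctime)s %(levelname)s %(name)s [-] %(message)s '
               '(pid=%(process)d) %(funcName)s %(filename)s:%(lineno)d')
        logging.basicConfig(filename=log_file, level=logging.DEBUG, format=fmt)
        return logging.getLogger('swift')

    LOG = set_up_logging()
    LOG.info('A Flash Storage Array Swift Installation version 1.0')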

dump_nova_console_logs(node)

  • Logs are generated in nova-console-logs/ directory.
  •  Dumps the console logs of all the VMs (the staging nodes plus the specified number of proxy and storage nodes); a sketch of this step follows the examples below.
  •  Log names of the staging VMs follow <instance name>-<instance ID>; log names of the Swift proxy machines follow <stack name>_P<proxy number>-<instance ID>, and similarly for storage nodes. For example:
  •  PROXY_NODE-db5637a8-a498-4eba-a2a5-2bfa1ee94f36
  •  STORAGE_NODE-f50f8b5f-7302-49e6-80f7-e494c74c468e
  •  swift_P1-ff855078-2ad1-47ba-bd33-7d7be2de7372
  •  swift_S1-d9d3e79b-8cc8-468b-bfa9-f04dd594287e
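A rough sketch of how the console logs could be collected with python-novaclient; the client construction from OS_* environment variables and the node dictionary fields are assumptions about the installer, not its exact code.

    import os
    from novaclient import client as nova_client

    def dump_nova_console_logs(node, log_dir='nova-console-logs'):
        # 'node' is assumed to be one of the dictionaries built by
        # auto_generate_template(), containing at least 'id' and 'name'.
        nova = nova_client.Client('2', os.environ['OS_USERNAME'],
                                  os.environ['OS_PASSWORD'],
                                  os.environ['OS_TENANT_NAME'],
                                  os.environ['OS_AUTH_URL'])
        if not os.path.exists(log_dir):
            os.makedirs(log_dir)
        server = nova.servers.get(node['id'])
        # The file name follows the <name>-<instance ID> convention above.
        log_path = os.path.join(log_dir, '%s-%s' % (node['name'], node['id']))
        with open(log_path, 'w') as f:
            f.write(server.get_console_output())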

validate_user_config()

  • Checks the sanity of the config-swift.yaml file. This is a pre-check of the user-provided values: correct indentation, no bogus values, no missing fields, and so on.
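A minimal sketch of such a pre-check; the required field names (proxy_count, storage_count) are assumptions about the config-swift.yaml schema, which the post does not spell out.

    import yaml

    REQUIRED_FIELDS = ['proxy_count', 'storage_count']   # assumed field names

    def validate_user_config(path='config-swift.yaml'):
        # yaml.safe_load raises a YAMLError on bad indentation or syntax.
        with open(path) as f:
            config = yaml.safe_load(f)
        missing = [k for k in REQUIRED_FIELDS if k not in config]
        if missing:
            raise ValueError('config-swift.yaml is missing fields: %s' % missing)
        for key in REQUIRED_FIELDS:
            if not isinstance(config[key], int) or config[key] < 1:
                raise ValueError('%s must be a positive integer' % key)
        return config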

delete_staging_nodes()

  • Cleans up staging nodes left over from a previous corrupt installation (if any).

input_stack_name()

  • Prompts the user for the stack name under which to provision the Swift nodes. If the entered stack name already exists, the user is asked either to overwrite it (the previous stack is deleted) or to specify another stack name (a new stack is created).
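The prompt logic might look roughly like this; stack_exists() is a hypothetical helper (for example, a wrapper around 'heat stack-list') and is not part of anything shown in the post.

    def input_stack_name():
        # stack_exists() is a hypothetical helper that asks Heat whether a
        # stack with the given name is already present.
        while True:
            name = input('Enter a stack name for the Swift nodes: ').strip()
            if not name:
                continue
            if not stack_exists(name):
                return name, False        # create a new stack
            choice = input("Stack '%s' exists. Overwrite it? [y/N]: " % name)
            if choice.lower() == 'y':
                return name, True         # caller deletes the old stack first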

check_for_setup_error()

  • Checks for corrupt or missing installation contents. If any are found, the user is prompted to verify the missing files.

(storage_array, proxy_array) = auto_generate_template()

  • Generate a valid heat template based on the fields provided in config-swift.yaml
  •  This will be used subsequently to provision the swift nodes.
  •  Creates storage and proxy arrays whose elements are dictionaries of key: value pairs describing each node; these are used later on for further operations.
  •  The dictionaries contain the id, name, state, IP, iscsi_portal and iscsi_targets of the nodes, roughly as illustrated below.
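The per-node dictionaries might look roughly like this. The field names follow the bullet above; the IP addresses, portal and IQN values are purely illustrative, while the instance IDs are reused from the console-log examples earlier.

    # Illustrative only: one dictionary per provisioned node.
    proxy_array = [{
        'id': 'ff855078-2ad1-47ba-bd33-7d7be2de7372',
        'name': 'swift_P1',
        'state': 'ACTIVE',
        'IP': '192.168.10.21',
        'iscsi_portal': '192.168.20.5:3260',
        'iscsi_targets': [],
    }]
    storage_array = [{
        'id': 'd9d3e79b-8cc8-468b-bfa9-f04dd594287e',
        'name': 'swift_S1',
        'state': 'ACTIVE',
        'IP': '192.168.10.31',
        'iscsi_portal': '192.168.20.5:3260',
        'iscsi_targets': ['iqn.2010-01.com.example:storage-target-01'],
    }]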

does_snapshot_exist()

  • If snapshots of the staging nodes are already present, their creation is skipped to save the user time.
  •  Otherwise, the storage and proxy staging nodes are created (the Swift packages are installed on them) and snapshots of them are created and used for further provisioning.
  •  The configuration scripts required for storage and proxy are copied across.
  •  Creating snapshots takes time, so the script has to wait until this is done. This is handled in wait_for_node_image(image_name), which queries the OpenStack Glance and Nova services and continuously checks the image status; once the image is in the ‘active’ state, it gives the go-ahead (see the sketch below).
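A possible shape for wait_for_node_image(), polling the image status through the same novaclient handle as in the earlier sketch; the timeout and polling interval are assumptions.

    import time

    def wait_for_node_image(nova, image_name, timeout=1800, interval=15):
        # Sketch: poll until Glance reports the snapshot image as active.
        deadline = time.time() + timeout
        while time.time() < deadline:
            image = nova.images.find(name=image_name)
            status = image.status.lower()
            if status == 'active':
                return image
            if status == 'error':
                raise RuntimeError('snapshot %s went into error state' % image_name)
            time.sleep(interval)
        raise RuntimeError('timed out waiting for snapshot %s' % image_name)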

provision_instances()

  • Start the provisioning of the actual swift nodes.
  •  Provisioning an instance takes time, because the machine first boots up and then the Heat scripts execute, so the script has to wait for node provisioning to complete.
  •  This is handled in the wait_for_node_provision(node) function (sketched below); if there is an error while provisioning an instance (IP resolution error, a package not installed properly, etc.), the logs are dumped, the corrupt nodes are cleaned up and the script aborts.
  •  The instances are spawned from the snapshot images. Heat provisioning fires the pre-copied configuration script, and on completion of provisioning the nodes contain all the conf files required for Swift to run.
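A rough sketch of that waiting logic, reusing the novaclient handle and node dictionaries from the earlier sketches; the real installer additionally verifies that the configuration script finished on the node, which is omitted here.

    import time

    def wait_for_node_provision(nova, node, timeout=1800, interval=20):
        # Sketch: wait until the instance leaves the BUILD state.
        deadline = time.time() + timeout
        while time.time() < deadline:
            server = nova.servers.get(node['id'])
            if server.status == 'ACTIVE':
                node['state'] = server.status
                return server
            if server.status == 'ERROR':
                dump_nova_console_logs(node)      # keep the evidence
                raise RuntimeError('provisioning of %s failed' % node['name'])
            time.sleep(interval)
        raise RuntimeError('timed out provisioning %s' % node['name'])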

get_device_name_from_iqn(node)

  • Retrieves the iSCSI targets mounted on the storage nodes and passes the corresponding device names to the ring builder script function.
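One way to resolve an IQN to its local block device is via the /dev/disk/by-path symlinks that udev creates for logged-in iSCSI sessions; the run_on_node() SSH helper used here is a hypothetical stand-in for however the installer runs commands on the nodes.

    def get_device_name_from_iqn(node):
        # Sketch: devices for logged-in iSCSI sessions appear under
        # /dev/disk/by-path as ip-<portal>-iscsi-<iqn>-lun-<n> symlinks.
        # run_on_node() is a hypothetical SSH helper returning command output.
        devices = []
        for iqn in node['iscsi_targets']:
            out = run_on_node(node['IP'],
                              'readlink -f /dev/disk/by-path/ip-%s-iscsi-%s-lun-0'
                              % (node['iscsi_portal'], iqn))
            devices.append(out.strip())       # e.g. /dev/sdb
        return devices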

master_proxy_node = proxy_instances[0]

  • If more than one proxy is specified, the memcache server list in each of the proxy-server.conf files on the proxy nodes has to be updated.
  •  The update_memcache_servers(proxy_instances) function changes the memcache_servers entry in the proxy-server.conf file on all the proxy nodes (see the sketch below).
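A minimal sketch of that update, assuming memcached listens on its default port 11211 on every proxy and that the memcache_servers option lives in proxy-server.conf; run_on_node() is the same hypothetical SSH helper as above.

    def update_memcache_servers(proxy_instances):
        # Point every proxy at the full list of memcached endpoints.
        servers = ','.join('%s:11211' % p['IP'] for p in proxy_instances)
        for proxy in proxy_instances:
            # Rewrite the memcache_servers line (in the [filter:cache] section).
            run_on_node(proxy['IP'],
                        "sed -i 's/^memcache_servers *=.*/memcache_servers = %s/' "
                        '/etc/swift/proxy-server.conf' % servers)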

generate_script_ring_builder(storage_instances, proxy_instances, master_proxy_node)

  • Auto-generates the ring builder script. The script picks up the storage nodes that are up and running and includes them in the ring building phase; if a node is down due to some issue (network connectivity, node failure, disk failure, etc.), that particular node is skipped.
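A sketch of what the generator could emit. The swift-ring-builder commands themselves are standard, but the partition power, replica count, min-part-hours, ports and the devices_by_node mapping (node name to device names such as 'sdb') are assumptions rather than the installer's actual values.

    def generate_script_ring_builder(storage_instances, devices_by_node,
                                     part_power=10, replicas=3, min_part_hours=1):
        # Sketch: emit a shell script of swift-ring-builder commands, including
        # only storage nodes whose state is ACTIVE (down nodes are skipped).
        ports = {'account': 6002, 'container': 6001, 'object': 6000}
        lines = ['#!/bin/bash', 'cd /etc/swift']
        for ring, port in ports.items():
            lines.append('swift-ring-builder %s.builder create %d %d %d'
                         % (ring, part_power, replicas, min_part_hours))
            for node in storage_instances:
                if node['state'] != 'ACTIVE':
                    continue                  # skip unreachable nodes
                for dev in devices_by_node[node['name']]:
                    lines.append('swift-ring-builder %s.builder add r1z1-%s:%d/%s 100'
                                 % (ring, node['IP'], port, dev))
            lines.append('swift-ring-builder %s.builder rebalance' % ring)
        return '\n'.join(lines) + '\n'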

 execute_ring_builder(master_proxy_node)

  • Executes the ring builder script from the master proxy node. If two or more proxies are specified, the first is taken as the master proxy node by default.

start_proxy_service(proxy_instances)

  • Start the proxy services on all the proxy nodes that are up and running (a combined sketch of this and the next step appears below).

start_storage_service(storage_instances)

  • Start the storage services on all the storage nodes that are up and running.
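Both steps could look roughly like this using swift-init, again with the hypothetical run_on_node() SSH helper; which servers the installer actually starts on the storage nodes is an assumption (account, container and object are the usual trio).

    def start_proxy_service(proxy_instances):
        # Start the proxy server on every proxy node that is ACTIVE.
        for proxy in proxy_instances:
            if proxy['state'] == 'ACTIVE':
                run_on_node(proxy['IP'], 'swift-init proxy start')

    def start_storage_service(storage_instances):
        # Start the account, container and object servers on every ACTIVE
        # storage node.
        for node in storage_instances:
            if node['state'] == 'ACTIVE':
                run_on_node(node['IP'], 'swift-init account container object start')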

check_swift_sanity(node)

  • Checks upload/download of the /etc/swift/proxy-server.conf file by creating a container named ‘builders’ on the master proxy node, uploading the file to it and downloading it again.
  • Checks the MD5 hashes of the uploaded and downloaded files. If they are equal, Swift sanity is verified.
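A sketch of that check using the swift command-line client and an MD5 comparison, simplified to run directly on the master proxy node; it assumes the swift CLI there is already configured with the right credentials (e.g. via ST_AUTH/ST_USER/ST_KEY), and the temporary download path is arbitrary.

    import hashlib
    import subprocess

    def md5sum(path):
        with open(path, 'rb') as f:
            return hashlib.md5(f.read()).hexdigest()

    def check_swift_sanity():
        # Upload proxy-server.conf into a 'builders' container, download it
        # again and compare the MD5 hashes of the two files.
        src = '/etc/swift/proxy-server.conf'
        subprocess.check_call(['swift', 'upload', 'builders', src])
        subprocess.check_call(['swift', 'download', 'builders', src.lstrip('/'),
                               '-o', '/tmp/proxy-server.conf'])
        if md5sum(src) != md5sum('/tmp/proxy-server.conf'):
            raise RuntimeError('Swift sanity check failed: MD5 mismatch')
        return True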

So there you have it – our thoughts on how the installation of Swift using OpenStack Heat orchestration for a Flash Storage Array can be made easier than the manual installation. Feel free to try this, on the condition that you come back here and let us know how it went and anything new that you learnt.

Future enhancement: The current Heat-automated Swift installer does not handle events such as CPU and memory overload. Events such as node failures in the main Heat stack may cause the Swift nodes (both storage and proxy) to become unreachable. Data loss is also a remote possibility if Swift is unable to re-balance the rings as and when nodes time out.

Solution: The most apt answer is to use OpenStack Ceilometer and Heat in combination. Ceilometer is a tool that collects usage and performance metrics from the spawned VMs. Overload can be handled gracefully with Heat auto-scaling: scaling up when the workload increases and scaling down when nodes are idle. Swift has its own ring building algorithm, which will ensure the ring is always balanced. The Ceilometer alarming API accesses the performance and data metrics (usage, memory, network consumption) of the Nova VMs and triggers an appropriate up/down scaling of the Heat stack via REST calls. So all we need to do is provide a Heat YAML template to ‘heat stack-update’ and add an auto-scaling group policy in the user metadata. The OS::Ceilometer::Alarm resource type can be used to trigger a scale-up if the average CPU load is greater than 50% for 1 minute; similarly, we can scale down if the average load is less than 15%. Thus, using Heat orchestration and the Ceilometer alarming API, we can be sure that our Swift nodes and data are highly available (triplicated storage) at all times.

About the Author: Prathamesh Deshpande is a technology buff with a sound understanding of the SAN/NAS storage stack (operating system, file system, protocols) and development experience on IaaS, PaaS and cloud storage technologies such as OpenStack Swift/Cinder and Ubuntu MaaS. He likes deep-dive explorations of all emerging tech trends and actively participates in architectural discussions.
