Distributed Config Management

When you are managing large clusters, keeping the configuration of every system up to date can be quite a challenge: new nodes join the cluster, old nodes need to be replaced, vPools are created and removed, and so on. In Eugene and earlier versions we relied on simple config files which were located on each node. It should not come as a surprise that in large clusters it proved to be a challenge to keep these config files in sync. Sometimes a cluster-wide config parameter was updated while one of the nodes was being rebooted. As a consequence, the update didn't make it to that node, and after the reboot it kept running with an old config.
For Fargo we decided to tackle this problem. The answer: Distributed Config Management.

Distributed Config Management

All config files are now stored in a distributed config management system. When a component starts, it retrieves the latest configuration settings from the management system. Let's look at how this works in practice. Say a node is down and we remove the vPool from that node. As the vPool was shrunk, the config for that VolumeDriver is removed from the config management system. When the node restarts, it will try to get the latest configuration settings for the vPool from the config management system. As there is no config for the removed vPool, the VolumeDriver will no longer serve the vPool. In a first phase we have added support for Arakoon, our beloved and in-house developed distributed key/value store, as the distributed config management system. As an alternative to Arakoon, etcd has been incorporated, but do know that in our own deployments we always use Arakoon (hint).
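
To make that startup behaviour concrete, below is a minimal Python sketch of the pattern, assuming a generic key/value client. The ConfigClient class, the load_vpool_config helper and the key layout are hypothetical illustrations, not the actual Open vStorage or VolumeDriver API.

    # A minimal sketch of the startup pattern described above, assuming a generic
    # key/value client. ConfigClient, load_vpool_config and the key layout
    # '/ovs/vpools/<vpool>/hosts/<host>/config' are hypothetical illustrations,
    # not the actual Open vStorage or VolumeDriver API.
    import json


    class ConfigClient(object):
        """Toy stand-in for an Arakoon/etcd client, backed by an in-memory dict."""

        def __init__(self, store):
            self._store = store  # maps keys to JSON strings

        def get(self, key):
            return self._store.get(key)  # None when the key does not exist


    def load_vpool_config(client, vpool, host):
        """Return the VolumeDriver config for a vPool, or None if it was removed."""
        key = '/ovs/vpools/{0}/hosts/{1}/config'.format(vpool, host)  # hypothetical layout
        raw = client.get(key)
        if raw is None:
            # The vPool was shrunk while this node was down: no config is left,
            # so the VolumeDriver simply stops serving this vPool.
            return None
        return json.loads(raw)


    if __name__ == '__main__':
        client = ConfigClient({'/ovs/vpools/vpool1/hosts/node1/config': '{"write_buffer": 128}'})
        print(load_vpool_config(client, 'vpool1', 'node1'))  # config found -> keep serving the vPool
        print(load_vpool_config(client, 'vpool1', 'node2'))  # None -> vPool was removed from this node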

How to change a config parameter

Changing parameters in the config management system is very easy through the Open vStorage CLI:

  • ovs config list some: List all keys with the given prefix.
  • ovs config edit some-key: Edit that key in your configured editor. If the key doesn’t exist, it will get created.
  • ovs config get some-key: Print the content of the given key.

The distributed config management also contains a key for all scheduled tasks and jobs. To update the default schedule, edit the key /ovs/framework/scheduling/celery and plan the tasks by adding a crontab-style schedule.
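
As a purely illustrative sketch (the task name, the JSON layout and the crontab fields below are assumptions, not the documented schema of that key), planning a task to run every six hours could look roughly like this; the printed JSON would then go into ovs config edit /ovs/framework/scheduling/celery:

    # Hypothetical sketch only: the task name and field layout are assumptions,
    # not the documented schema of /ovs/framework/scheduling/celery.
    import json

    schedule = {
        'ovs.generic.execute_scrub_work': {  # hypothetical task name
            'minute': '0',
            'hour': '*/6',  # crontab-style: run at minute 0 of every sixth hour
        },
    }
    print(json.dumps(schedule, indent=4))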

I like to move it, move it

The vibe at the Open vStorage office these days is best explained by a song from the early nineties:

I like to move it, move it ~ Reel 2 Real

While summer is a quieter period at most companies, the Open vStorage office is buzzing like a beehive. Allow me to give you a short overview of what is happening:

  • We are moving into our new, larger and stylish offices. The address remains the same but we are moving into a completely remodeled floor of the Idola business center.
  • Next to physically moving desks at the Open vStorage HQ, we are also moving our code from BitBucket to GitHub. We have centralized all our code under https://github.com/openvstorage. To list a few of the projects: Arakoon (our consistent distributed key-value store), ALBA (the Open vStorage default ALternate BAckend) and of course Open vStorage itself. Go check it out!
  • Finishing up our Open vStorage 2.2 GA release.
  • Adding support for Red Hat and CentOS by merging in the CentOS branch. There is still some work to do around packaging, testing and upgrades, so feel free to give a hand. As this was really a community effort, we owe everyone a big thank you.
  • Working on some very cool features (RDMA anyone?) but let’s keep those for a separate post.
  • Preparation for VMworld (San Francisco) and the OpenStack Summit in Tokyo.

As you can see, there is a lot going on at once, so prepare for a hot Open vStorage fall!

Open vStorage 1.0.3 Beta

We couldn't wait until the end of April before pushing out our next release. That is why the Open vStorage Team decided to release today (April 17, 2014) a small version with the following set of features:

  • Remove a vPool through the GUI: in version 1.0.2 it was already possible to create a vPool and extend it across Hosts through the GUI. To make the whole vPool management process complete and more user-friendly, we have added the option to remove a vPool from a Host. This is of course only possible when no vDisks are being served from the vPool to a vMachine on that Host.
  • New version of Arakoon: Arakoon 1.7, the distributed key-value store integrated in Open vStorage that guarantees consistency above anything else, has had some major rework done. This release also fixes bugs we encountered.
  • Some small improvements include adding an NTP server by default to the Host and displaying the different ports in use on the VSA detail page.

This version also includes bug fixes:

  • Issue with connecting to SAIO via S3.
  • Footer displaying incorrect values.
  • Updated the sshd configuration so clients no longer send their own LOCALE settings.
  • Shutting down the first node in a 4-node setup caused ovs_consumer_volumerouter to halt on node 2.
  • KVM clones are now no longer automatically started.
  • The Open vStorage GUI sometimes didn't list a second VM.
  • Cloning from a vTemplate on KVM did not work in the absence of a “default” network on Archipel.
  • Restrict vTemplate deletion from the Open vStorage GUI when a child vMachine is present.
  • Adding a vPool without copying the ceph.* files now gives an error.

To give this version a try, download the software and install it with Quality Level Test.