Fargo RC3

We released Fargo RC3. This release focuses on bug fixes (13 bugs fixed) and stability.

Some items were also added to improve the supportability of an Open vStorage cluster:

  • Improved the speed of non-cached API and GUI queries by a factor of 10 to 30.
  • It is now possible to add more NSM clusters to store the data for a backend through an API instead of doing it manually.
  • Blocked setting a clone as a template.
  • Hardened the remove node procedure.
  • Removed ETCD support for the config management as it was no longer maintained.
  • Added an indicator in the GUI which displays when a domain is set as a recovery domain but not as a primary domain anywhere in the cluster.
  • Support for the removal of the ASD manager.
  • Added a call to list the manually started jobs (e.g. verify namespace) on ALBA.
  • Added a timestamp to list-asds so it can be tracked how long an ASD has been part of the backend.
  • Removed the Volume Driver test which created a new volume in the Health Check, as it produced too many false positives to be reliable.

Distributed Config Management

When you are managing large clusters, keeping the configuration of every system up to date can be quite a challenge: new nodes join the cluster, old nodes need to be replaced, vPools are created and removed, … In Eugene and earlier versions we relied on simple config files located on each node. It should not come as a surprise that in large clusters it proved to be a challenge to keep these config files in sync. Sometimes a cluster-wide config parameter was updated while one of the nodes was being rebooted. As a consequence, the update didn't make it to that node, and after the reboot it kept running with an old config.
For Fargo we decided to tackle this problem. The answer: Distributed Config Management.

All config files are now stored in a distributed config management system. When a component starts, it retrieves the latest configuration settings from that system. Let’s have a look at how this works in practice. Assume a node is down and we remove the vPool from that node. As the vPool was shrunk, the config for its VolumeDriver is removed from the config management system. When the node restarts, it tries to fetch the latest configuration settings for the vPool from the config management system. As there is no longer a config for the removed vPool, the VolumeDriver will no longer serve it. In a first phase we have added support for Arakoon, our beloved and in-house developed distributed key/value store, as the distributed config management system. As an alternative to Arakoon, ETCD has also been incorporated, but do know that in our own deployments we always use Arakoon (hint).
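
As a minimal sketch of that flow, the keys involved can be inspected with the ovs config CLI described below. The key paths here are illustrative assumptions, not the guaranteed layout of your cluster:

    # List the config keys for the vPools known to the cluster
    # (the paths below are assumed examples, not a fixed layout)
    ovs config list /ovs/vpools
    # While the vPool exists, the VolumeDriver finds its config here:
    ovs config get /ovs/vpools/<vpool_guid>/hosts/<storagedriver_id>/config
    # After the vPool is removed, the key is gone, so a restarting
    # VolumeDriver no longer serves that vPool.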

How to change a config parameter:

Changing parameters in the config management system is easy with the Open vStorage CLI:

  • ovs config list some: List all keys with the given prefix.
  • ovs config edit some-key: Edit that key in your configured editor. If the key doesn’t exist, it will be created.
  • ovs config get some-key: Print the content of the given key.
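
For example, a quick session could look like this (the key names are illustrative; only /ovs/framework/scheduling/celery is documented below):

    # List all keys under the framework prefix
    ovs config list /ovs/framework
    # Print the content of a single key
    ovs config get /ovs/framework/cluster_id
    # Open a key in your configured editor; it is created if it doesn't exist
    ovs config edit /ovs/framework/scheduling/celery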

The distributed config management system also contains a key for all scheduled tasks and jobs. To update the default schedule, edit the key /ovs/framework/scheduling/celery and plan the tasks by adding a crontab-style schedule.
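
As an illustration, and assuming the key holds a JSON mapping of task names to crontab-style fields (the task name and exact layout below are assumptions, so check the existing content of the key on your own cluster first), a nightly schedule could look like:

    # Edit the schedule key; example content shown below the command
    ovs config edit /ovs/framework/scheduling/celery

    {
        "ovs.generic.execute_scrub": {"minute": "0", "hour": "3"}
    }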