Eugene Release

To start the new year with a bang, the Open vStorage Team is proud to release Eugene.

The highlights of this release are:

Policy Update
Open vStorage now enables you to add, remove and update policies for specific ALBA backend presets. Note that updating active policies might cause Open vStorage to automatically rewrite data fragments.
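As a rough illustration, the sketch below updates the policies of an existing preset through the REST API. The endpoint path, the token handling and the policy quadruples are assumptions for the sake of the example, not the documented interface.

```python
import requests

# Illustrative only: the route and payload layout are assumptions,
# not the documented Open vStorage API.
API = 'https://ovs-node.example.com/api'
HEADERS = {'Authorization': 'Bearer <oauth2-token>',   # obtained via the OAuth2 flow
           'Accept': 'application/json; version=*'}

backend_guid = '<alba-backend-guid>'    # GUID of the ALBA backend (placeholder)
payload = {
    'name': 'default',                  # preset whose policies are updated
    'policies': [[5, 4, 8, 3],          # policy quadruples; the exact semantics of the
                 [2, 2, 3, 4]]          # four numbers are an assumption in this sketch
}

# Updating active policies may trigger a background rewrite of data fragments.
resp = requests.post('{0}/alba/backends/{1}/update_preset/'.format(API, backend_guid),
                     json=payload, headers=HEADERS, verify=False)
resp.raise_for_status()
```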

ALBA Backend Encryption
When configuring a backend preset, an AES-256 encryption algorithm can be selected.
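For example, the encryption setting is part of the preset definition. The field names in the sketch below are assumptions, shown only to illustrate where AES-256 would be selected.

```python
# Hypothetical preset definition; the exact field names and values are
# assumptions used to illustrate where the encryption algorithm is chosen.
preset = {
    'name': 'encrypted-preset',
    'compression': 'snappy',
    'policies': [[5, 4, 8, 3]],
    'encryption': 'aes-cbc-256',   # AES-256 fragment encryption; 'none' would disable it
    'fragment_size': 4 * 1024 ** 2
}
```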

Failure Domain
A Failure Domain is a logical grouping of Storage Routers. The Distributed Transaction Log (DTL) and MetaDataServer (MDS) for a Storage Router group can be defined in the same Failure Domain or in a Backup Failure Domain. When the DTL and MDS are defined in a Backup Failure Domain, data loss is prevented in case the primary Failure Domain becomes non-functional. Note that defining the DTL and MDS in a Backup Failure Domain requires low-latency network connections.

Distributed Scrubber
Snapshots that fall outside the retention period are marked as garbage and removed by the Scrubber. With the Distributed Scrubber functionality you can now run the actual scrubbing process away from the host that holds the volume. This way, hosts running Virtual Machines do not experience a performance hit when the snapshots of those Virtual Machines are scrubbed.

Scrubbing Parent vDisks
Open vStorage allows you to create clones of vDisks. The maximum depth of the clone tree is limited to 255. When a clone is created, scrubbing is still applied to the actual parent of the clone.

New API calls
The following API calls have been added:

  • vDisk templates (set and create from template)
  • Create a vDisk (name, size)
  • Clone a vMachine from a vMachine Snapshot
  • Delete a vDisk
  • Delete a vMachine snapshot

These API calls are not exposed in the GUI.
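As an illustration of how these calls could be used over the REST API, the sketch below creates and deletes a vDisk. The routes, parameter names and token handling are assumptions; consult the API documentation for the exact contract.

```python
import requests

API = 'https://ovs-node.example.com/api'
HEADERS = {'Authorization': 'Bearer <oauth2-token>',
           'Accept': 'application/json; version=*'}

# Create a vDisk (name, size); endpoint and parameter names are assumptions.
create = requests.post('{0}/vdisks/'.format(API), headers=HEADERS, verify=False,
                       json={'name': 'disk-01',
                             'size': 50 * 1024 ** 3,            # 50 GiB
                             'vpool_guid': '<vpool-guid>',
                             'storagerouter_guid': '<storagerouter-guid>'})
create.raise_for_status()
task_id = create.json()   # assuming the call returns a task id that can be polled

# Delete a vDisk by GUID (the route layout is also an assumption).
requests.delete('{0}/vdisks/{1}/'.format(API, '<vdisk-guid>'),
                headers=HEADERS, verify=False).raise_for_status()
```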

Removal of the community restrictions
The ALBA backend is no longer restricted and you no longer need to apply for a community license to use ALBA. The cluster does need to be registered within 30 days; otherwise the GUI will stop working until the cluster is registered.

Remove Node
Open vStorage now allows nodes to be removed from the Open vStorage cluster. With this functionality you can remove any node and scale your storage cluster along with your changing storage requirements. Both active and broken nodes can be consistently removed from the cluster.

Some smaller feature requests were also added:

  • Removal of the GCC dependency.
  • Option to label a manual snapshot as ‘sticky’ so it doesn’t get removed by the automated snapshot cleanup (see the sketch after this list).
  • Allow stealing of a volume when no Hypervisor Management Center is configured and the node owning the volume is down.
  • Set_config_params for vDisk no longer requires the old config.
  • Automatically reconfigure the DTL when DTL is degraded.
  • Automatic triggering of a repair job when an ASD is down for 15 minutes.
  • ALBA is independent of broadcasting.
  • Encryption of the ALBA socket communication.
  • New Arakoon client (pyarakoon).
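As an example of the ‘sticky’ snapshot option mentioned above, the sketch below takes a manual snapshot that the automated cleanup should then skip; the route and parameter names are assumptions.

```python
import requests

API = 'https://ovs-node.example.com/api'
HEADERS = {'Authorization': 'Bearer <oauth2-token>',
           'Accept': 'application/json; version=*'}

# Take a manual snapshot and mark it sticky so the automated snapshot
# cleanup leaves it in place (route and parameter names are assumptions).
requests.post('{0}/vdisks/{1}/create_snapshot/'.format(API, '<vdisk-guid>'),
              headers=HEADERS, verify=False,
              json={'name': 'before-upgrade',
                    'sticky': True}).raise_for_status()
```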

The following are the most important bug fixes in the Eugene release:

  • Fix for various issues when first node is down.
  • “An error occurred while configuring the partition” while trying to assign DB role to a partition.
  • Cached list not updated correctly.
  • Celery workers are unable to start.
  • NVMe drives are not correctly detected.
  • Volume restart fails due to failure while clearing the DTL.
  • Arakoon configs not correct on 4th node.
  • Bad MDS Slave placement.
  • DB role is required on every node running a vPool but isn’t mandatory.
  • Exception in tick crashes the ovs-scheduled-task service.
  • OVS-extensions is very chatty.
  • Voldrv python client hangs if node1 is down.
  • Bad logic to decide when to create extra alba namespace hosts.
  • Timeout of CHAINED ensure single decorator is not high enough.
  • Possibly wrong master selected during “ovs setup demote”.
  • Possible race condition when adding nodes to cluster.

About the Author
Wim Provoost
Product Manager, Open vStorage.
