Open vStorage 2.2 alpha 6

Today we released Open vStorage 2.2 alpha 6. Like alpha 4, this is a bugfix release.

Bugs that were fixed as part of alpha 6:

  • If required, Open vStorage updates the Cinder driver to be compatible with the installed Open vStorage version.
  • Various issues where the ALBA proxy might get stuck.
  • Issue where the ALBA proxy fails to add fragments to the fragment cache.
  • Arakoon cluster remains masterless in some cases.
  • Various issues with the updater.
  • Removing a vPool leaves MDS files behind.
  • An XML file visible through FUSE isn’t accessible even though it is available on the backend.
  • While the extend vPool task is running, the vPool can’t be added to or removed from other Storage Routers in the GUI.
  • A timeout for backend syncs and migrations was added to ensure live migration failures are handled more gracefully.

Open vStorage 2.2 alpha 4

We released Open vStorage 2.2 alpha 4, which contains the following bugfixes:

  • Update of the About section under Administration.
  • Open vStorage Backend detail page hangs in some cases.
  • Various bugfixes for the case where a vPool is added with a vPool name that was previously used.
  • Hardening the vPool removal.
  • Fix daily scrubbing not running.
  • No log output from the scrubber.
  • A failed attempt to create a vDisk from a snapshot tries to delete the snapshot.
  • ALBA discovery starts spinning if the network is not available.
  • An ASD is no longer used by the proxy even after it has been requalified.
  • Type checking through Descriptor doesn’t work consistently.

Open vStorage 2.2 alpha 3

Today we released Open vStorage 2.2 alpha 3. The only new features are on the Open vStorage Backend (ALBA) front:

  • Metadata is now stored with a higher protection level.
  • The protocol of the ASD is now more flexible in the light of future changes.

Bugfixes:

  • Made it mandatory to configure both the read and write cache during the ovs setup partitioning step.
  • During add_vpool on devstack, cinder.conf is updated with a notification_driver which is incorrectly set to “nova.openstack.common.notifier.rpc_notifier” for Juno (see the sketch after this list).
  • Added support for more physical disk configuration layouts.
  • ClusterNotReachableException during vPool changes.
  • Cannot extend vPool with volumes running.
  • Update button clickable when an update is ongoing.
  • Already configured storage nodes are now removed from the discovered ones.
  • Fix for ASDs which don’t start.
  • Issue where a slow long-running task could fail because of a timeout.
  • Message delivery from albamgr to nsm_host can get stuck.
  • Fix for ALBA reporting that a namespace doesn’t exist while it actually exists.
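
For reference, a corrected entry in cinder.conf on Juno could look like the sketch below. The “messaging” value is the generic oslo.messaging notifier alias; treat it as an assumption and verify it against your own OpenStack deployment:

    [DEFAULT]
    # Sketch only: "messaging" is the oslo.messaging notifier alias,
    # replacing the old nova.openstack.common.notifier.rpc_notifier path.
    notification_driver = messaging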

Open vStorage 2.2 alpha 2

In our latest release update we promised more frequent releases. Et voilà, today we release a new alpha version of the upcoming GA release. If possible, we will provide new versions from now on as an update so that you don’t have to reinstall your Open vStorage environment. As this new version is the first one with the update/upgrade functionality, there is no update possible between alpha 1 and alpha 2.

What is new in Open vStorage 2.2 alpha 2:

  • Upgrade functionality: under the Administration section you can check for updates of the Framework, the Volumedriver and the Open vStorage Backend, and apply them. For the moment an update might require all VMs to be shut down.
  • Support for non-identical hardware layouts: you can now mix hardware which doesn’t have the same number of SSDs or PCI-e flash cards. When extending a vPool to a new Storage Router you can select which devices to use as cache.

Small Features:

  • The Backend policy, which defines how SCOs are stored on the Backend, can now be changed. The wizard is straightforward in case you want to set, for example, 3-way replication (see the illustration after this list).
  • Rebalancing of the Open vStorage Backend, which moves data from disks that are almost full to emptier disks so that all disks are used evenly, is now a service which can be enabled and disabled.
  • Audit trails are no longer stored in the model but in a log.
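
As an illustration of the policy change: ALBA expresses a policy as four numbers (k, m, c, x), where k is the number of data fragments, m the number of parity fragments, c the minimum number of fragments that must be written before a write is considered successful, and x the maximum number of fragments stored on a single storage node. The values below sketch what 3-way replication could look like; c and x in particular are assumptions you should tune to your own setup:

    # One data fragment plus two copies gives three replicas in total.
    # c and x are assumptions: require two fragments written before
    # acknowledging, and spread at most one fragment per storage node.
    (k, m, c, x) = (1, 2, 2, 1)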

Bug fixes:

  • An ASD raises a timeout or stops under heavy load.
  • Extending a vPool to a new Storage Router doesn’t require the existing vMachines on the vPool to be stopped.
  • FailOverCache does not exit but hangs in accept in some cases.
  • Removing a vPool raises a cluster not reachable exception.
  • Added a logrotate entry for /var/log/arakoon/*/*.log (see the sketch after this list).
  • Error in vMachine detail (refreshVDisks is not found).
  • Arakoon 1.8.4 rpm doesn’t work on CentOS7.
  • Arakoon catchup is quite slow.
  • The combination of OpenStack with multiple vPools and live migration does not work properly in some cases.
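
The logrotate entry mentioned in the list above could look roughly like this sketch; the daily rotation, the retention of seven files and the use of copytruncate are assumptions rather than the exact shipped configuration:

    # Sketch of a logrotate entry for the Arakoon logs.
    /var/log/arakoon/*/*.log {
        daily
        rotate 7
        compress
        delaycompress
        missingok
        notifempty
        # copytruncate rotates in place, so Arakoon doesn't need a restart
        copytruncate
    }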