Today the Open vStorage team presents the Denver release of Open vStorage. We are working on some big StoryCards (replication, QEMU/iSCSI/block interface) which can’t be developed in a single sprint. This means the Denver release doesn’t contain many new fancy features, but we will make up for that in the next releases.
The highlights of this release are:
- New repo structure: the new repo locations are apt.openvstorage.org (deb) and yum.openvstorage.org (rpm). Note that we now only support name-based releases and no longer have different quality levels (alpha, beta, …). The latest changes can be found on unstable.
- Failure Domains: the Failure Domains functionality allows you to group Storage Routers based upon their location and to configure the MetaDataServer (MDS) based upon this grouping (a conceptual sketch follows after this list). In the next release the Failure Domain configuration will also take the Distributed Transaction Log (DTL) into account, to make sure that in case of a site disaster the data in the DTL is safeguarded and data loss is prevented.
- The Open vStorage updater now also supports rpms.
- An option was added to specify generic configuration parameters for ASDs, e.g. the multicast period.
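To illustrate the Failure Domain concept mentioned above, here is a minimal, conceptual Python sketch. It is not Open vStorage code: the StorageRouter class and the place_mds_replicas helper are hypothetical, and the real MDS placement logic takes far more into account. The sketch only shows the core idea of spreading MDS replicas across Failure Domains so that a single site failure does not take out all copies of the metadata.

```python
from collections import defaultdict


class StorageRouter:
    """Hypothetical representation of a Storage Router and its Failure Domain."""
    def __init__(self, name, failure_domain):
        self.name = name
        self.failure_domain = failure_domain


def place_mds_replicas(storage_routers, replica_count):
    """Spread MDS replicas over as many Failure Domains as possible.

    Conceptual sketch only: the real placement logic also weighs load,
    safety and already existing MDS instances.
    """
    by_domain = defaultdict(list)
    for router in storage_routers:
        by_domain[router.failure_domain].append(router)

    placement = []
    # Take at most one router per domain per round, so replicas land in
    # different Failure Domains before any domain is used twice.
    rounds = max(len(members) for members in by_domain.values())
    for round_nr in range(rounds):
        for domain in sorted(by_domain):
            if len(placement) == replica_count:
                return placement
            if round_nr < len(by_domain[domain]):
                placement.append(by_domain[domain][round_nr])
    return placement


routers = [StorageRouter('sr-01', 'datacenter-a'),
           StorageRouter('sr-02', 'datacenter-a'),
           StorageRouter('sr-03', 'datacenter-b'),
           StorageRouter('sr-04', 'datacenter-b')]

for replica in place_mds_replicas(routers, replica_count=3):
    print(replica.name, '->', replica.failure_domain)
```

With the example inventory above, the first two replicas end up in different datacenters before a domain is reused, which is exactly the property a site disaster scenario relies on.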
The Denver release also contains the following bugfixes:
- Failure to start the distributed backend due to config files not being distributed correctly.
- Bug which prevented removing ASDs from a node when the node was unavailable.
- Bugfix for incorrect locking of vDisk events.
- The VMware deploy script now adds all disks to the Storage Router VM.
- Sync_with_mgmtcenter logs an ERROR when the vDisk is not attached to a VM.
- Update of ovs-snmp to make sure Open vStorage Backends can be monitored.
During the summer the Open vStorage team also worked very hard on the 1.5.0 release. With that release we could proudly present:
- Feature-complete OpenStack Cinder plugin: our Cinder plugin has been improved and now meets the minimum feature set required to be certified by the OpenStack community.
- Flexible cache layout: in 1.5.0 you can easily configure multiple SSD devices for the different caching purposes. During setup you choose which SSDs to partition, and later, when creating a vPool, you select which caching device should be used for the read cache, the write cache and write-cache protection. These can now be spread over different SSD devices or consolidated onto the same one, depending on the available hardware and your needs.
- User management: an admin can now create additional users who have access to the GUI.
- Framework performance: a lot of work has been put into improving performance when large numbers of vDisks and vMachines are created. Improvements of up to 50% have been reached in some cases.
- Improved API security by implementing OAuth2 authentication (a usage sketch follows after this list). A rate limit has also been imposed on API calls to prevent brute-force attacks.
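As an illustration of talking to the OAuth2-protected API, the snippet below is a minimal sketch using Python and the requests library. The host name, client id and secret are placeholders, and the token and vpools endpoint paths are assumptions about a typical installation rather than confirmed paths; check your own setup for the actual values.

```python
import requests

OVS_HOST = 'https://ovs.example.com'   # placeholder management node
CLIENT_ID = 'my_client_id'             # placeholder OAuth2 client credentials
CLIENT_SECRET = 'my_client_secret'

# Request an access token with the OAuth2 client-credentials grant.
# The token endpoint path is an assumption; adjust it to your installation.
token_response = requests.post(
    OVS_HOST + '/api/oauth2/token/',
    data={'grant_type': 'client_credentials'},
    auth=(CLIENT_ID, CLIENT_SECRET),
    verify=False)  # self-signed certificates are common on management nodes
token_response.raise_for_status()
access_token = token_response.json()['access_token']

# Use the bearer token on subsequent API calls. Keep the request rate modest:
# the API is now rate-limited, so bursts of requests may be rejected.
vpools = requests.get(
    OVS_HOST + '/api/vpools/',
    headers={'Authorization': 'Bearer ' + access_token,
             'Accept': 'application/json'},
    verify=False)
print(vpools.status_code, vpools.json())
```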
Fixed bugs and small items:
- GUI now prevents creation of vPools with a capital letter.
- Implemented the recommended mitigation for a security exploit in Elasticsearch 1.1.1.
- Fix for vPool validation being stuck on validating.
- Protection against reusing vPool names on the same backend.
- Fix for the Storage Router online/offline detection, which failed when OpenStack was also installed.
Next, we also took the first step towards supporting operating systems other than Ubuntu (RedHat/CentOS). We have created rpm versions of our volumedriver and arakoon packages. These are tested on “Linux CentOS 7 3.10.0-123.el7.x86_64” and can be downloaded from our package server. This is an important first step towards making Open vStorage RedHat/CentOS compatible.