The Edge, a lightweight block device

When I present the new Open vStorage architecture for Fargo, I almost always receive the following question about the Edge:

What is the Edge and why did you develop it?

What is the Edge about?

The Edge is a lightweight software component which can be installed on a Linux host. It exposes a block device API and connects to the Storage Router across the network (TCP/IP or RDMA). Basically, the application believes it is talking to a local block device (the Edge) while the volume actually runs on another host (the Storage Router).
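To make this transparency concrete, below is a minimal Python sketch (not Open vStorage code) of what the application side looks like: it simply opens a device node and reads and writes blocks, unaware that the Edge forwards those requests to a Volume Driver on a remote Storage Router. The device path is a hypothetical example.

    import os

    # Hypothetical device node exposed on the compute host via the Edge.
    DEVICE = "/dev/edge_volume_0"

    fd = os.open(DEVICE, os.O_RDWR)
    try:
        block = b"\xab" * 4096           # one 4 KiB block
        os.pwrite(fd, block, 0)          # write at offset 0 of the volume
        data = os.pread(fd, 4096, 0)     # read the same block back
        assert data == block             # the application never sees the network hop
    finally:
        os.close(fd)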

Why did we develop the Edge?

The reason we developed the Edge is quite simple: componentization. With Open vStorage we are mainly dealing with large, multi-petabyte deployments, and having this Edge component gives additional benefits in large environments:

Scalability

In large environments you want to be able to scale the compute and storage parts independently. If you run Open vStorage hyper-converged, as advised with earlier versions, this isn’t possible. As a consequence, if you need more RAM or CPU to run VMs, you also have to invest in more SSDs. With the Edge you can scale compute and storage independently.

Guaranteed performance

With Eugene, the Volume Driver, the high-performance distributed block layer, ran on the compute host together with the VMs. This resulted in the VMs and the Volume Driver fighting for the same CPU and RAM resources, a typical issue with hyper-converged solutions. The Edge component avoids this problem: it runs on the compute hosts and requires only a small amount of resources, while the Volume Drivers run on dedicated nodes and hence provide a predictable and consistent amount of IOPS to the VMs.

Limit the Impact of Updates

Storage software updates are a (storage) administrator’s worst nightmare. In previous Open vStorage versions an update of the Volume Driver required all VMs on that node to be migrated or brought down. With the Edge, the Volume Driver can be updated in the background, as each Edge/compute host has HA features and can automatically connect to another Volume Driver on request, without the need for a VM migration.
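As a rough illustration of that HA behaviour, the Python sketch below shows the general client-side failover pattern: keep a list of Volume Driver endpoints and connect to the next one when the current connection fails. This is a conceptual sketch only, not the actual Edge implementation; the hostnames and port number are assumptions.

    import socket

    # Hypothetical Volume Driver endpoints on two Storage Router nodes.
    ENDPOINTS = [("storagerouter-01", 21321), ("storagerouter-02", 21321)]

    def connect_with_failover(endpoints, timeout=2.0):
        """Return a socket to the first Volume Driver endpoint that answers."""
        last_error = None
        for host, port in endpoints:
            try:
                return socket.create_connection((host, port), timeout=timeout)
            except OSError as exc:
                last_error = exc  # endpoint unreachable, try the next one
        raise ConnectionError("no Volume Driver endpoint reachable") from last_error

    # During a Volume Driver update on storagerouter-01, the Edge would simply
    # end up connected to storagerouter-02 and the VM keeps running.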

Open vStorage 2.2 alpha 2

As promised in our latest release update, we are moving to more frequent releases. Et voilà, today we release a new alpha version of the upcoming GA release. Where possible, we will provide new versions from now on as an update so that you don’t have to reinstall your Open vStorage environment. As this new version is the first one with the update/upgrade functionality, there is no update path from alpha 1 to alpha 2.

What is new in Open vStorage 2.2 alpha 2:

  • Upgrade functionality: Under the Administration section you can check for updates of the Framework, the Volumedriver and the Open vStorage Backend and apply them. For the moment an update might require all VMs to be shut down.
  • Support for non-identical hardware layouts: You can now mix hardware which doesn’t have the same number of SSDs or PCIe flash cards. When extending a vPool to a new Storage Router you can select which devices to use as cache.

Small Features:

  • The Backend policy, which defines how SCOs are stored on the Backend, can now be changed. The wizard is straightforward in case you want to set, for example, 3-way replication.
  • Rebalancing of the Open vStorage Backend, moving data from disks which are almost full to new disks to make sure all disks are evenly used, is now a service which can be enabled and disabled (a conceptual sketch of the idea follows after this list).
  • Audit trails are no longer stored in the model but in a log.
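The rebalancing idea mentioned above can be illustrated with a small conceptual Python sketch (not the actual Open vStorage service): repeatedly plan moves from the fullest disk to the emptiest one until all fill levels sit close to the average. The disk names and fill percentages are made up.

    # Conceptual rebalancing sketch: plan data moves so disk fill levels
    # converge towards the average fill level.
    def plan_moves(fill_levels, tolerance=0.05):
        levels = dict(fill_levels)
        average = sum(levels.values()) / len(levels)
        while True:
            fullest = max(levels, key=levels.get)
            emptiest = min(levels, key=levels.get)
            if levels[fullest] - average <= tolerance:
                break
            amount = min(levels[fullest] - average, average - levels[emptiest])
            levels[fullest] -= amount
            levels[emptiest] += amount
            yield fullest, emptiest, amount

    for src, dst, amount in plan_moves({"disk-a": 0.92, "disk-b": 0.40, "disk-c": 0.35}):
        print(f"move data worth {amount:.0%} of a disk from {src} to {dst}")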

Bug fixes:

  • ASD raises timeouts or stops under heavy load.
  • Extending a vPool to a new Storage Router no longer requires the existing vMachines on the vPool to be stopped.
  • FailOverCache does not exit but hangs in accept in some cases.
  • Removing a vPool raises a ‘cluster not reachable’ exception.
  • Added a logrotate entry for /var/log/arakoon/*/*.log.
  • Error in vMachine detail (refreshVDisks is not found).
  • The Arakoon 1.8.4 rpm doesn’t work on CentOS 7.
  • Arakoon catchup is quite slow.
  • The combination of OpenStack with multiple vPools and live migration does not work properly in some cases.