vDisks, vMachines, vPools and Backends: how does it all fit together

With the latest version of Open vStorage, we released the option to use physical SATA disks as a storage backend for Open vStorage. These disks can be inside the hypervisor host (hyper-converged) or in a storage server: an x86 server with SATA disks*. Together with this functionality we introduced some new terminology, so we thought it would be a good idea to give an overview of how it all fits together.

vPool - Backend

Let’s start from the bottom, the physical layer, and work up to the virtual layer. For the sake of simplicity we assume a host has one or more SATA drives. With the hyper-converged version (openvstorage-hc) you unlock the functionality to manage these drives and assign them to a Backend. A Backend is simply a collection of physical disks grouped together. These disks can even be spread across multiple hosts, and you have the freedom to assign some physical disks in a host to one Backend and the remaining disks to a second Backend. The only limitation is that a disk can be assigned to only a single Backend at the same time. The Backend concept lets you separate customers even at the physical disk level by assigning each of them its own set of hard drives.
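
To make this concrete, here is a purely illustrative disk layout; the disk names, sizes, models and Backend names are made up, and the actual claiming of disks into a Backend is done from the GUI as described in the how-to at the end of this post.

    # List the disks of a host; ROTA=1 marks rotational (SATA) drives.
    lsblk -d -o NAME,SIZE,ROTA,MODEL
    # NAME   SIZE ROTA MODEL
    # sda    120G    0 SSD-OS         -> OS disk, not claimed
    # sdb      4T    1 ST4000NM0033   -> claimed by Backend "customer-a"
    # sdc      4T    1 ST4000NM0033   -> claimed by Backend "customer-a"
    # sdd      4T    1 ST4000NM0033   -> claimed by Backend "customer-b"
    # sde      4T    1 ST4000NM0033   -> claimed by Backend "customer-b"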

On top of a Backend you create one or more vPools. You could compare this with creating a LUN on a traditional SAN. The split between a Backend and a vPool gives you additional flexibility:

  • You can assign a vPool per customer or per department. This means you can, for example, bill per used GB.
  • You can set a different encryption passphrase per vPool.
  • You can enable or disable compression per vPool.
  • You can set the replication factor per vPool: for example, a “test” vPool where you store only a single copy of the Storage Container Objects (SCOs), while for production servers you configure 3-way replication.

On top of the vPool, which is presented as a Datastore in VMware or as a mountpoint in KVM, you can create vMachines. Each vMachine can have multiple vDisks (virtual disks). For now vmdk and raw files are supported.
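
As a minimal sketch of what this looks like on KVM: since the vPool is just a mountpoint and raw files are supported, creating a raw file inside it is enough to create a vDisk. The mountpoint /mnt/myvpool and the names below are hypothetical.

    # Create a directory for the vMachine and a 50 GB raw vDisk on the vPool mountpoint
    mkdir -p /mnt/myvpool/vm01
    qemu-img create -f raw /mnt/myvpool/vm01/vm01_disk01.raw 50G
    # The raw file can then be attached to a KVM guest like any other raw disk image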

*A quick how-to for converting a server with only SATA drives into a storage server you can use as a Backend:
On the hypervisor host:

Install Open vStorage (apt-get install openvstorage-hc) as explained in the documentation and run the ovs setup command.
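
In command form, this boils down to the following (a sketch; it assumes the Open vStorage apt repository is already configured as described in the documentation):

    apt-get update
    apt-get install openvstorage-hc   # the hyper-converged flavor
    ovs setup                         # runs the Open vStorage setup on this host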

On the storage server:

  • The storage server must have an OS disk and at least 3 empty disks for the storage backend.
  • Set up the storage server with Ubuntu 14.04.2 LTS. Make sure the storage server is in the same network as the compute host.
  • Execute the following to set up the software that will manage the disks of the storage server:
    # Add the Open vStorage apt repository and install the ASD manager
    echo "deb http://apt-ovs.cloudfounders.com beta/" > /etc/apt/sources.list.d/ovsaptrepo.list
    apt-get update
    apt-get install openvstorage-sdm
  • Retrieve the automatically generated password from the config:
    cat /opt/alba-asdmanager/config/config.json
    ...
    "password": "u9Q4pQ76e0VJVgxm6hasdfasdfdd",
    ...
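
If you prefer not to scan the JSON by eye, a one-liner such as the following pulls out just the password line (a sketch; adjust the path if your installation differs):

    grep '"password"' /opt/alba-asdmanager/config/config.json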

On the hypervisor host:

  • Login and go to Backends.
  • Click the Add Backend button, specify a name and select Open vStorage Backend as the type. Click Finish and wait until the status becomes green.
  • On the Backend details page of the freshly created Backend, click the Discover button. Wait until the storage server pops up and click the + icon to add it. When asked for credentials, use root as the login and the password retrieved above.
  • Next, follow the standard procedure to claim the disks of the storage server and add them to the Backend.

Open vStorage 2.1

It is with great pleasure that I introduce Open vStorage 2.1. Yes, we went straight from version 1.6 to 2.1. We simply had so many interesting features to add that we couldn’t call it 2.0.

It is important to know that Open vStorage now comes in two flavors: a free, unrestricted version and a free, restricted community version which includes our own new Open vStorage Backend and allows you to run Open vStorage as a hyperconverged solution. At the moment both versions only offer community support. The unrestricted version is open source and allows you to add almost any S3-compatible backend (Ceph, Swift, Cloudian, …). The community version is a restricted version of our future paid product, which will include support. A paid Open vStorage version will be released in June. In case you want to run Open vStorage hyperconverged out of the box, you will need the Open vStorage Backend, which is highly optimized for use with Open vStorage.

So what is new in 2.1 compared to 1.6:

  • Run Open vStorage as a hyperconverged solution: you can now use local SATA disks inside the host as a (cold) storage backend for data coming out of the write cache. Open vStorage is now hyperconverged and supports hot-swap disks. For our free community edition you can go up to 4 hosts, 16 disks and 49 vDisks. Currently only a limited set of RAID controllers is supported (LSI). In case you want to use Open vStorage in combination with Seagate Kinetic drives, the Open vStorage Backend will also be required (future version).
  • Flexible cache layout: the Open vStorage setup is now more flexible and allows you to assign multiple SSDs as read cache devices. During the setup you can also indicate which device you want to use as the write cache. When you create a vPool, these choices are taken into account when presenting default values.
  • Improved supportability: you now have the option to send heartbeats to our datacenter and, if necessary, open a VPN connection so we can offer remote help. There is also an option to download all logs straight from the GUI with a single mouse click.
  • New metadata server: when a volume was moved from one host to another, you typically had a few seconds up to a minute of downtime while the metadata was rebuilt on the new host. We now have a metadata server topology that supports a master/slave concept: if the volume is moved and the master metadata server is no longer accessible, the slave metadata server is contacted instead. This reduces the downtime to a few milliseconds. (More info: https://www.youtube.com/watch?v=Yy2EhJkFr04)
  • Performance improvements: we now allow more outstanding data in the write cache before the data ingest coming from the VM is throttled.

In case you have questions, feel free to create a post in the Support Forum.