Playing with Open vStorage and Docker

I was looking for a way to play with Open vStorage on my laptop, with the ultimate goal of letting people easily experience Open vStorage without having to rack a whole cluster. The idea of running Open vStorage inside Docker, the open container platform, sounded pretty cool, so I accepted the challenge of creating a hyperconverged setup with Docker images. In this blog post I will show you how to build a 2-node Open vStorage cluster on a single server or laptop. You could also use this dockerised approach if you want to deploy Open vStorage without having to reinstall the server or laptop after playing around with it.

As I’m running Windows on my laptop, I started by creating a virtual machine in VirtualBox. The VM has 4 dynamically allocated disks (OS – 8GB, cache/DB/scrubber – 100GB, and 2 backend disks of 100GB each) and a single bridged network card. The exact details of the VM can be found below.
[Screenshot: VirtualBox configuration of the VM]
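
If you prefer the command line over the VirtualBox GUI, a VM with this disk layout can also be created with VBoxManage. The sketch below is only an illustration of the layout described above; the VM name (ovshc-vm1), memory/CPU count, controller name and bridge adapter (eth0) are assumptions you should adapt to your own environment.

# Hypothetical VM name, memory/CPU and adapter; disk sizes match the layout above (in MB, dynamically allocated by default)
VBoxManage createvm --name ovshc-vm1 --ostype Ubuntu_64 --register
VBoxManage modifyvm ovshc-vm1 --memory 4096 --cpus 2 --nic1 bridged --bridgeadapter1 eth0
VBoxManage storagectl ovshc-vm1 --name SATA --add sata --controller IntelAhci
VBoxManage createhd --filename os.vdi --size 8192
VBoxManage createhd --filename cache.vdi --size 102400
VBoxManage createhd --filename backend1.vdi --size 102400
VBoxManage createhd --filename backend2.vdi --size 102400
VBoxManage storageattach ovshc-vm1 --storagectl SATA --port 0 --device 0 --type hdd --medium os.vdi
VBoxManage storageattach ovshc-vm1 --storagectl SATA --port 1 --device 0 --type hdd --medium cache.vdi
VBoxManage storageattach ovshc-vm1 --storagectl SATA --port 2 --device 0 --type hdd --medium backend1.vdi
VBoxManage storageattach ovshc-vm1 --storagectl SATA --port 3 --device 0 --type hdd --medium backend2.vdi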

The steps to run Open vStorage in a Docker container:
Install Ubuntu 14.04 (http://www.ubuntu.com/download/server) in the VM.
Next install Docker:

sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo 'deb https://apt.dockerproject.org/repo ubuntu-trusty main' | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update -qq
sudo apt-get purge lxc-docker
sudo apt-get install -y docker-engine
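
If you want to make sure the Docker engine is installed and the daemon is running before moving on, a quick generic check (nothing Open vStorage specific) looks like this:

sudo docker --version
sudo docker run --rm hello-world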

As I wanted to build a multi-container setup, the Open vStorage containers must be able to communicate with each other over the network. To achieve this I decided to use Weave, the open-source multi-host Docker networking project.

sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave

As a next step, download the Open vStorage Docker template and turn it into a Docker image. The image has the latest Open vStorage packages preloaded for your convenience. Currently the Docker image is only hosted on GitHub, but it will be pushed to the Docker Hub later on.

wget https://github.com/openvstorage/docker-tools/releases/download/20160325/ovshc_unstable_img.tar.gz
gzip -dc ovshc_unstable_img.tar.gz | sudo docker load
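
Optionally, verify that the image was loaded by listing the local Docker images; the exact repository name and tag may differ from what you expect, so simply check that a new image appeared.

sudo docker images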

To make things easier we have an ovscluster setup script. This script uses the Docker CLI to create an Open vStorage container based upon the Docker image, joins the container to the Open vStorage cluster and configures the container's default settings. You can download the cluster setup script from GitHub:

wget https://github.com/openvstorage/docker-tools/raw/master/hyperconverged/ovscluster.sh
chmod +x ovscluster.sh

Get the Weave network up and running. Naturally a free IP range for the Weave network is required; in this case I’m using 10.250.0.0/16.

sudo weave launch --ipalloc-range 10.250.0.0/16
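
If you want to double-check that the Weave router is up and using the custom allocation range, Weave ships a status command:

sudo weave status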

Now use the cluster setup script to create the cluster and add the first Open vStorage container. I’m using ovshc1 as hostname for the container and 10.250.0.1 as IP.

sudo ./ovscluster.sh create ovshc1 10.250.0.1/16

Once the first host of the Open vStorage cluster is created, you will get a bash shell inside the container. Do not exit the shell as otherwise the container will be removed.
[Screenshot: bash shell inside the ovshc1 container]
That is all it takes to get an Open vStorage container up and running!
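
If you need an additional shell inside the container (for instance to poke around while keeping the original session open), you can attach a second one with docker exec. I'm assuming here that the container created by the script is named ovshc1, like the hostname we passed in.

sudo docker exec -it ovshc1 bash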

Time to create a Backend, vPool and vDisk
Since the Open vStorage container is now fully functional, it is time to do something with it: create a vDisk on it. But before the container can handle vDisks, it needs a backend to store the actual data. As backend I’ll create an ALBA backend, the native backend in Open vStorage, on top of the 2 additional virtual machine disks.

Surf with your browser to the Open vStorage GUI at https:// followed by the IP of the container (10.250.0.1 in this setup) and log in with the default login and password (admin/admin).

As a first step it is required to assign a read, write, DB and scrubber role to at least one of the disks of the container. In production always use an SSD for the DB and cache roles. The scrubber can be on a SATA drive. Select Storage Routers from the top menu. Select ovshc1.weave.local to see the details of the Storage Router and select the Physical disk tab. Select the gear icon of the first 100GB disk.
[Screenshot: Physical disk tab of the Storage Router]
Assign the read, write, DB and scrubbing role and click Finish.
[Screenshot: role assignment dialog]
Wait until the partitions are configured (this could take up to 30 seconds).
[Screenshot: partitions being configured]

Once these basic roles are assigned, it is time to create a new backend and assign the 2 additional virtual machine disks as ASDs (ALBA Storage Daemon) to that backend.

Select Backends from the top menu and click Add Backend. Give the new backend a name and click Finish.
[Screenshot: Add Backend dialog]
Wait until the newly created backend becomes available (status will be green) and you see the disks of ovshc1.weave.local.
[Screenshot: backend details showing the disks of ovshc1.weave.local]
Initialize the first 2 disks by clicking the arrow and select Initialize.
[Screenshot: initializing the disks]
Wait until the status of both disks turns dark blue and claim them for the backend.
[Screenshot: claiming the disks for the backend]
Select the presets tab and select Add preset.
[Screenshot: Presets tab]
Give the new preset a name, select advanced settings and indicate that you understand the risks of specifying a custom policy. Add a policy with k=1, m=0, c=1 and x=1 (store data on a single disk without any parity fragments on other disks – NEVER use this policy in production!).
[Screenshot: Add preset wizard with the custom policy]
Click Next and Finish.

On top of that backend a vPool (comparable to a datastore in VMware) is required before a vDisk can be created. Select vPools from the top menu and click Add new vPool. Give the vPool a name and click the Reload button to load the backend. Select the newly created preset and click Next.
[Screenshot: Add vPool wizard]

Leave the settings on the second screen of the wizard at their defaults and click Next. Set the read cache to 10GB and the write cache to 10GB. Click Next, Next and Finish to create the vPool.

If all goes well, on ovshc1, you will have one disk assigned for the DB/Cache/Scrubber roles for internal use, two disks assigned to ALBA, and one vPool exported for consumption by the cluster.

root@ovshc1:~# df -h
/dev/sdb1 63G 1.1G 59G 2% /mnt/hdd1
/dev/sdc1 64G 42M 64G 1% /mnt/alba-asd/IoeSU0gwIFk591fX9MqZ15CuIx8uMWV2
/dev/sdd1 64G 42M 64G 1% /mnt/alba-asd/Acxydl4BszLlppGKwvWCL2c2Jw7dTWW0
601a3b34-9426-4dc5-9c35-84fac81b42b6 64T 0 64T 0% /exports/vpool

The same vpool can be seen on the VM as follows:

$> df -h
601a3b34-9426-4dc5-9c35-84fac81b42b6 64T 0 64T 0% /mnt/ovs/vpool

Time to create that vDisk. Open a new session to the VM and go to /mnt/ovs/vpool (replace vpool with the name of your vPool). To see if the vPool is fully functional, create a .raw disk, the raw disk format used by KVM, and put some load on it with fio. Check the Open vStorage GUI to see its performance!

sudo truncate -s 123G /mnt/ovs/vpool/diskname.raw
sudo apt-get install fio
fio --name=temp-fio --bs=4k --ioengine=libaio --iodepth=64 --size=1G --direct=1 --rw=randread --numjobs=12 --time_based --runtime=60 --group_reporting --filename=/mnt/ovs/vpool/diskname.raw
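
As an optional sanity check of the new vDisk you can inspect the raw file with qemu-img (this assumes the qemu-utils package is installed on the VM); the reported virtual size should match the 123G we truncated it to.

sudo apt-get install qemu-utils
qemu-img info /mnt/ovs/vpool/diskname.raw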

Adding a second host to the cluster
To add a second Open vStorage container to the cluster, create another VM with the same specs. Install Ubuntu, Docker and Weave, and just like for the first container, download the Docker image and the cluster script.

sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo 'deb https://apt.dockerproject.org/repo ubuntu-trusty main' | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update -qq
sudo apt-get purge lxc-docker
sudo apt-get install -y docker-engine
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
wget https://github.com/openvstorage/docker-tools/releases/download/20160325/ovshc_unstable_img.tar.gz
gzip -dc ovshc_unstable_img.tar.gz | sudo docker load
wget https://github.com/openvstorage/docker-tools/raw/master/hyperconverged/ovscluster.sh
chmod +x ovscluster.sh

To launch and join a second Open vStorage container execute:

sudo weave launch --ipalloc-range 10.250.0.0/16
sudo ./ovscluster.sh join ovshc2 10.250.0.2/16
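
After the join completes, both Storage Routers should show up in the GUI. From inside one of the containers you can also do a quick connectivity check over the Weave network; I'm assuming here that the Weave DNS names follow the same ovshcX.weave.local pattern used above.

ping -c 3 ovshc1.weave.local
ping -c 3 ovshc2.weave.local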