Docker and persistent Open vStorage volumes

Docker, the open-source container platform, is currently one of the hottest projects in the IT infrastructure business. With backing from some of the world’s leading companies such as PayPal, eBay, General Electric and many more, it is quickly becoming a cornerstone of large deployments. It also introduces a paradigm shift in how administrators see servers and applications.

Pets vs. cattle

In the past servers were treated like a family pet: you give it a cute name, keep it in optimal condition and take care of it when it is sick. With VMs a shift already occurred: names became more generic, like WebServer012, but keeping the VM healthy was still a priority for administrators. With Docker, VMs are decomposed into a sprawl of individual, clearly defined applications, sometimes with multiple instances of the same application running at the same time. With thousands of containerized applications running on a single platform, it becomes impossible to treat them as pets. Instead they are treated as cattle: they get an ID, and when they have issues they are taken offline, terminated and replaced.

Docker Storage

The original idea behind Docker was that containers would be stateless and hence would not need persistent storage. Over the years, however, the insight has grown that some applications, and hence some containers, do require persistent storage. Since the Docker platforms at large companies house thousands of containers, the required storage is also significant. Typically these platforms also span multiple locations or even clouds. Storage across locations and clouds is the sweet spot of the Open vStorage feature set. In order to offer distributed, persistent storage to containers, the Open vStorage team created a Docker plugin on top of the Open vStorage Edge, our lightweight block device. Note that the Docker plugin is part of the Open vStorage Enterprise Edition.

Open vStorage and Docker

Using Open vStorage to provision volumes for Docker is easy and straightforward thanks to Docker’s volume plugin system. To show how easy it is to create a volume for a container, I will walk you through the steps to run Minio, a minimal, open-source object store, on top of a vDisk.

First install the Open vStorage Docker plugin and the necessary packages on the compute host running Docker:
apt-get install libovsvolumedriver-ee blktap-openvstorage-ee-utils blktap-dkms volumedriver-ee-docker-plugin
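
If you want to make sure everything is in place before continuing, dpkg can list the status of the packages installed above:
dpkg -l libovsvolumedriver-ee blktap-openvstorage-ee-utils blktap-dkms volumedriver-ee-docker-plugin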

Configure the plugin by updating /etc/volumedriver-ee-docker-plugin/config.toml:

[volumedriver]
hostname="IP"
port=26203
protocol="tcp"
username="root"
password="rooter"

Change the IP and port to those on which the vPool is exposed on the Storage Router you want to connect to (see the Storage Router detail page).

Start the plugin service:
systemctl start volumedriver-ee-docker-plugin.service
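
To check that the plugin is up and running before you create any volumes, you can query the service status with a standard systemd command:
systemctl status volumedriver-ee-docker-plugin.service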

Create the Minio container and attach a disk for the data (minio_export) and one for the config (minio_config):

docker run --volume-driver=ovs-volumedriver-ee -p 9000:9000 --name minio \
-v minio_export:/export \
-v minio_config:/root/.minio \
minio/minio server /export
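
At this point you can verify that the container is running and that the plugin created the two volumes. These are standard Docker commands, so the exact output depends on your Docker version:
docker ps --filter name=minio
docker volume ls

If you prefer to manage volumes separately from containers, they can also be pre-created with docker volume create -d ovs-volumedriver-ee <name> and then referenced in the run command.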

That is it. You now have a Minio object store running that stores its data on Open vStorage.
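
As a final sanity check, assuming you run it on the Docker host itself, the Minio endpoint should answer on the published port 9000 (the exact HTTP response depends on the Minio version and whether you pass credentials):
curl -I http://localhost:9000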

PS. Want to see more? Check out the “Docker fun across Amazon, Google and Packet.net” video.

About the Author
Wim Provoost
Product Manager, Open vStorage.