QEMU, Shared Memory and Open vStorage: it sounds like the beginning of a bad joke, but it is actually a very cool story. Open vStorage quietly released, in its latest version, a Shared Memory Client/Server integration with the VolumeDriver (the component that offers the fast, distributed block layer). With this implementation a client (QEMU, Blktap, …) writes to a dedicated memory segment on the compute host which is shared with the Shared Memory Server inside the Volume Driver. For the moment the Shared Memory client only understands block semantics, but in the future we will add file semantics so that, for example, an NFS server can be integrated on top.
The benefits of the Shared Memory approach are very tangible:
- As everything runs in user space, data copies between user and kernel space are eliminated, resulting in roughly 30-40% higher IO performance.
- CPU consumption is roughly halved for the same IO performance.
- It is an easy way to build additional interfaces (e.g. block devices, iSCSI, …) on top.
We haven’t integrated our modified QEMU build with Libvirt yet, so at the moment some manual tweaking is still required if you want to give it a go:
Download the volumedriver-dev packages
sudo apt-get install volumedriver-dev
By default the Shared Memory Server is disabled. To enable it, update the vPool json (/opt/OpenvStorage/config/storagedriver/storagedriver/vpool_name.json) and add the entry "fs_enable_shm_interface": true under the filesystem section. After adding the entry, restart the Volume Driver for the vPool (restart ovs-volumedriver_vpool_name).
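For reference, the filesystem section of the vPool json would then contain something like the following (the surrounding keys are whatever your installation already has; only the fs_enable_shm_interface entry is new):

```json
{
    "filesystem": {
        "fs_enable_shm_interface": true
    }
}
```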
Next, build QEMU from source. You can find the source here.
git clone https://github.com/openvstorage/qemu.git
cd qemu
./configure
make
sudo make install
There are 2 ways to create a QEMU vDisk:
Use QEMU to create the disk:
qemu-img create openvstorage:volume 10G
Alternatively, create the disk on the FUSE mountpoint and start a VM using the Open vStorage block driver:
truncate -s 10G /mnt/
qemu -drive file=openvstorage:volume,if=virtio,cache=none,format=raw ...