The 2014 World Cup kicked off this weekend in sunny Brazil. The first victories have been celebrated, the first dreams smashed to smithereens. The Belgian team, the Red Devils, still has to play its first game, and the Open vStorage team has very high expectations. With players like Thibaut Courtois, Eden Hazard and Vincent Kompany in your team, losing is not an option.
But the World Cup can also teach us something about storage. Just as a football coach has to decide which player to field at which position, a storage administrator has to decide on which storage to place each Virtual Machine. Football has different types of players, such as goalkeepers, defenders and strikers, yet somehow we have allowed storage for Virtual Machines to become uniform. Running all your Virtual Machines from local disks only, accelerated by SSDs, seems to be the new mantra. A single unified namespace for all! This makes no sense, as different Virtual Machines have different storage requirements. On top of that, enterprises and service providers have made huge investments in NAS, distributed file systems and object stores that are highly reliable and already in use.
A second problem arises when you, as an administrator, try to mix different storage types in one environment. With current setups, mixing different backends is simply not done when running Virtual Machines, as it would be a management nightmare. Imagine a Virtual Machine with one disk on a SAN and another on local disks: there are different GUIs to use and perhaps even different storage interfaces. It would be like mixing football players and basketball players in a single team and forcing everyone to follow the rules of football. Open vStorage gives you a single pane of glass across all your storage backends, removing that management nightmare. With Open vStorage there is one GUI which hides away the complexity of running part of a Virtual Machine on your old SAN and the other part on your new object store. Open vStorage acts as a frontend to these different backends and immediately turns them into fast storage for Virtual Machines.
PS: Go Belgium go!
Another month flew by, which means the Open vStorage team is very proud to announce Open vStorage 1.2. The big new features in this release are:
- apt-get install openvstorage: during the month of May we improved our install process, making it much easier to install Open vStorage. The recommended way to install Open vStorage is now via Debian packages. Using apt-get, the software can also be installed on Ubuntu Server 14.04 LTS. Stay tuned for more supported OSes in the future.
- SNMP monitoring: in this new release all parameters, ranging from vDisks, vMachines and vPools to the actual Grid Storage Routers of the Open vStorage Cluster, can be monitored through SNMP. This means you can now use your favorite monitoring tool (Zenoss, Nagios, Cacti, …) to see real-time statistics and create historical graphs of IOPS, memory usage and many more parameters.
- An improvement in the vStorage Driver that allows clones to be created in parallel from a vTemplate.
This release also contains a lot of bug fixes. The most important fixes are:
- Various bugfixes to harden the rebooting of an Open vStorage Cluster.
- Fix to the Add vPool wizard to make sure it doesn’t get stuck.
- Clearer user messages in deployOvs.py.
- Fix for a create from vTemplate issue on VMware in case of multiple nodes.
- Removing a vPool is now possible in case there are vMachines on another vPool.
- Incorrect tooltip on the vPool Detail Management tab.
- After deletion of a vTemplate, the corresponding VM is also deleted from the vSphere client and the vPool.
During talks with people who are interested in the Open vStorage concept, we quite often get the following question:
How are you different compared to <insert name of software-defined storage product of the moment>
Well, the answer is something which has been the adage within CloudFounders from day one: Keep IT Simple, Stupid. The KISS principle states that most systems work best if they are kept simple rather than made complicated. This is equally true in the storage industry. For decades we have been chasing the utopian dream of Virtual Machines that are always online. I must admit we are very close, but we should look back at the price we are paying: complexity. And don’t be fooled, hand in hand with complexity comes its buddy, error.
Within our company we have an average of 15 years of experience in datacenter and storage environments. The most important thing we learned along the way is that complexity is your worst enemy when things go bad. The second thing we learned is that things will eventually go bad, no matter how prepared you are. Sure, when you have added enough complexity, with active-active SANs, firewalls, databases and so on, things will go bad less often, but when disaster strikes your downtime will be far longer than if you had kept things simple.
When we designed Open vStorage, the KISS principle was always in the back of our minds. I’d like to give you two examples of where we keep things simple while others make them complicated, expensive and dangerous.
- To support functionality such as vMotion and VMware High Availability (HA), you can use an expensive SAN. To reduce hardware complexity and cost, architects are nowadays turning to distributed file systems. Don’t get me wrong: a distributed file system such as GlusterFS, developed by more than 70 software engineers, is a great file system. But in the end it is a file system; it was designed to store files, not to run Virtual Machines. A clever engineer saw that GlusterFS solved his problem of a unified namespace, making every file on the file system available on every host. But let us take a step back and put the unified-namespace problem in perspective. What we want to achieve is that the volume of a Virtual Machine, a block device, can move from one host to another without bringing the VM down. Instead of using a distributed file system to store the virtual disk, Open vStorage takes a different, less complex approach: we store the volumes on a shared backend. To make sure VM data isn’t accessed by two hosts at the same time, Open vStorage enforces the rule that a volume can only be attached to one host, and we record the owner of each volume in a distributed database. When vMotion is triggered, the two hosts work together to make the transition: once the volume is available on the second host, the two hosts sync the data which is not yet on the backend and the move is complete. To summarize, instead of making all Virtual Machine volumes available on all hosts all the time, Open vStorage makes sure that a volume is connected to a single host at the right time.
- To support multiple hypervisors and multiple storage backends, we also took a simple approach. Instead of trying to convert each hypervisor type directly into each storage protocol, we created a frontend and a backend. This greatly reduces complexity: we only need to write one storage frontend per hypervisor, and the new hypervisor can immediately talk to all storage backends. The same goes for a new backend: write one backend extension and all hypervisors can use it. This flexibility not only minimizes the development effort; on top of that, different hypervisors can share the same storage pool. And because we use raw volumes, moving a Virtual Machine from VMware to KVM could (in the future) be just a reboot on the right host, with no conversion needed.
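The single-owner rule from the first example can be sketched in a few lines of Python. This is a minimal illustration, not Open vStorage code: the class names are hypothetical, an in-memory dict stands in for the distributed database, and the sync callback stands in for flushing data that has not yet reached the shared backend.

```python
class OwnershipError(Exception):
    pass


class VolumeRegistry:
    """Stands in for the distributed database that records, per volume,
    which host currently owns it (hypothetical name)."""

    def __init__(self):
        self._owners = {}  # volume_id -> host_id

    def attach(self, volume_id, host_id):
        # The rule: a volume may only be attached to one host at a time.
        owner = self._owners.get(volume_id)
        if owner is not None and owner != host_id:
            raise OwnershipError(
                f"{volume_id} is owned by {owner}; refusing attach on {host_id}")
        self._owners[volume_id] = host_id

    def hand_over(self, volume_id, old_host, new_host, sync_pending):
        # vMotion-style handover: the two hosts cooperate, data not yet on
        # the shared backend is synced, then ownership moves to the new host.
        if self._owners.get(volume_id) != old_host:
            raise OwnershipError(f"{old_host} does not own {volume_id}")
        sync_pending(volume_id)
        self._owners[volume_id] = new_host

    def owner_of(self, volume_id):
        return self._owners.get(volume_id)


registry = VolumeRegistry()
registry.attach("vm1-disk0", "hostA")
registry.hand_over("vm1-disk0", "hostA", "hostB",
                   sync_pending=lambda volume_id: None)
print(registry.owner_of("vm1-disk0"))  # hostB
```

Note that the safety property falls out of the rule itself: a second host attempting to attach an owned volume simply gets an error, with no distributed locking of individual files required.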
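The front/back split from the second example can likewise be sketched. Again, every class and method name here is hypothetical; the point is only that N hypervisor frontends plus M backend extensions replace N x M direct conversions, because any frontend can be paired with any backend through one shared interface.

```python
from abc import ABC, abstractmethod


class Backend(ABC):
    """The single interface every backend extension implements."""

    @abstractmethod
    def write(self, volume, offset, data): ...

    @abstractmethod
    def read(self, volume, offset, length): ...


class ObjectStoreBackend(Backend):
    """Toy backend extension; a dict stands in for the object store."""

    def __init__(self):
        self._store = {}

    def write(self, volume, offset, data):
        self._store[(volume, offset)] = data

    def read(self, volume, offset, length):
        return self._store.get((volume, offset), b"\x00" * length)


class Frontend:
    """Base for hypervisor-facing frontends: translates hypervisor I/O
    into raw-volume reads/writes against whichever backend is plugged in."""

    def __init__(self, backend: Backend):
        self.backend = backend

    def submit_io(self, volume, offset, data):
        self.backend.write(volume, offset, data)


class KVMFrontend(Frontend):
    pass  # KVM-specific translation would live here


class VMwareFrontend(Frontend):
    pass  # VMware-specific translation would live here


# Any frontend works with any backend; both hypervisors share one pool.
backend = ObjectStoreBackend()
kvm = KVMFrontend(backend)
kvm.submit_io("vm1-disk0", 0, b"raw block data")
print(backend.read("vm1-disk0", 0, 14))  # b'raw block data'
```

Because the frontends all emit the same raw-volume operations, adding a fourth backend or a third hypervisor means writing exactly one new class, not rewriting every existing pairing.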
Now back to the title: you can change a flat tyre in different ways. One way is to simply replace the flat tyre. Or you could strip the whole car down around that flat tyre and reassemble it around a new one. Either way you fix the same issue, but the cost will be higher, the process more complex, and you will probably need more than 70 engineers to fix your flat tyre.