Open vStorage US Roadshow Q2

After the successful first Open vStorage Roadshow, we decided to do a second US Roadshow. You can meet us at one of the following Meetups in the US:

During these meetups we will discuss what Open vStorage does exactly and the latest developments around the project, such as how to set up Open vStorage in a HyperConverged fashion on local disks and the new metadata server architecture. We will of course provide pizza and drinks during these meetups!

Next to these community events, we are also organizing 2 business events in the Bay Area (Santa Clara 04/16, Menlo Park 04/20). During these business events we will discuss how to set up a profitable IaaS (Infrastructure as a Service) or private cloud business and unveil our HyperConverged solution built on top of OpenStack and our own Open vStorage. This solution will include 24/7 support for Open vStorage as well as OpenStack! You can register for one of these free business events here.

Do you like open-source, storage and writing code?

Do you like open-source, storage and writing code, and are you interested in contributing to the development and adoption of both Open vStorage and Kinetic technology? Yes? In that case you can write code in your spare time and contribute it to the project. That is something we highly appreciate, and it will earn you a (Belgian) beer when we meet in real life. But we are aware that some people are looking for a more intense relationship, full-time as a freelancer or on the payroll. If you would love to contribute to this exciting project, let us know! I will be sitting by my mailbox waiting for you. We have developers from all over the world working on this project, so we accept candidates from anywhere.

We are looking for people who have the following skills:

  • In-depth knowledge of FUSE, NFS, GlusterFS, Ceph
  • In-depth knowledge of QEMU, VMware, KVM, Xen, Docker
  • Hardcore C++ or OCaml coder
  • Experience with kernel or device driver development
  • Experience with distributed applications such as Hadoop (optional)

Hamburgers, french fries and hyperconvergence

During the first Open vStorage roadshow in the US, I noticed people have a lot of questions about convergence and hyperconvergence:

Can you help me with the term “hyperconverged”? I believe it is a marketing buzzword, but it is something that my executives have glommed onto.

While I was waiting to fly back home, I was eating a burger and french fries. Let’s be honest: the US has the best places to eat burgers. While eating and staring at the planes, I suddenly had an epiphany on how to explain convergence, hyperconvergence and how Open vStorage relates to both: burgers and french fries.

Let’s say that hamburgers are the compute (RAM, CPU, the host where the VMs are running), french fries are the storage, and barbecue sauce is the storage performance. In that case a converged solution is like ordering a hamburger menu: one SKU gets you a plate with a hamburger and french fries on the side. You even have different menus with smaller or bigger hamburgers and more or fewer french fries. When you order a ‘converged burger’ the barbecue sauce is on the french fries (SSDs inside the SAN). It works, but it is not ideal. With a ‘hyperconverged burger’, instead of receiving the french fries separately, you receive a single hamburger with french fries and barbecue sauce as toppings. Allow me to explain: with a hyperconverged appliance the compute (hamburger), the Tier 1 storage (barbecue sauce) and the Tier 2 storage (french fries) are all inside the same appliance. Open vStorage is neither of these. With Open vStorage, the hamburger is topped with barbecue sauce (compute and Tier 1 inside the same host) but you get the french fries on the side.

As said, Open vStorage should not be used as a hyperconverged solution like Nutanix or Simplivity. The Open vStorage software can be used like that, but we at CloudFounders don’t believe hyperconverged is the right way to build scalable solutions. We believe a converged solution with Tier 1 inside the compute, let’s call it flexi-converged, is a much better fit for multiple reasons:

  • Storage growth: typically storage needs grow 3 times faster than CPU needs, so adding more compute (CPU & RAM, hypervisors) just because you need more Tier 2 backend storage is throwing away money. If you go to a hamburger restaurant and you want more french fries, you simply order another portion of fries. It doesn’t make sense to order another hamburger (with french fries as topping) if you only want french fries.
  • Storage performance: since a hyperconverged appliance only has a limited number of bays, you have to decide between adding an SSD or a SATA drive in each bay. You need the SSDs for performance, so that limits the bays available for capacity-optimized SATA disks. A hyperconverged appliance makes a trade-off between storage performance (more flash) and storage capacity (more SATA). As a result you end up with appliances costing $180,000 that can run 100 Virtual Machines but can store only 5TB (15TB raw) of data. Due to the 3-way replication, which stores all data 3 times for redundancy, the balance is completely off: each Virtual Machine can only have 50GB of data! What you want is to be able to scale both storage capacity and storage performance independently. Let’s make it clearer. When you order the ‘hyperconverged burger’ you get a burger with barbecue sauce and french fries on top. Since every burger has a certain size, there is a limit to the amount of french fries and barbecue sauce you can add as topping. If you want more french fries, you have to cut back on the barbecue sauce. It is as simple as that. With Open vStorage, the french fries are on the side, so you can order as many additional portions as needed. With the Seagate Kinetic integration you can simply add additional drives to your pool of Tier 2 backend storage, et voilà, you have more space for Virtual Machine data without having to sacrifice storage performance.
  • Performance of the backend: when implementing a tiered architecture, you don’t want your Tier 2 storage layer (‘the cold storage’) to limit the performance of your Tier 1 layer (‘the caching layer’). Tier 1 is expensive and optimized for storage performance by using SSDs or PCIe flash, so it is a big issue if the speed of the Tier 2 storage becomes a bottleneck when digesting data coming from Tier 1. The performance of the Tier 2 storage is determined by the number of disks and their speed. This is why you see hyperconverged models using two 1TB disks instead of a single 2TB disk: they need the spindles in the backend to make sure the Tier 1 caching layer isn’t impacted by a choking backend. This is a real issue. At CloudFounders we have had situations in the past where we had to add disks to the backend just to make sure it could digest what was coming from the cache. Let’s do the math to explain the issue in more detail. Your Tier 1 can easily do 50-70K IOPS of 4K blocks. Let’s assume this is a mix of 20K write IOPS and 50K read IOPS. The SSD/PCIe flash card takes the first hit for these 20K write IOPS (a piece of cake for flash technology), but once data is evicted from that SSD it needs to go to the backend. Storage solutions will typically aggregate those 4K writes into bigger chunks (Nutanix creates 1MB (4K*250) chunks, Open vStorage accumulates 1,000 writes into objects of 4MB) to minimize the backend traffic. So in the optimal scenario Nutanix needs to store 80 IOPS (20K/250) to the backend. This is the optimal scenario, as they don’t work with an append-style log, but we will devote another blog post to that. Nutanix uses 3-way replication, so 80 IOPS become 240 IOPS across multiple disks. These disks contain a file system, so there is additional IO overhead: each hop in the directory structure is another IO. Let’s assume for the sake of simplicity that we only have to go down 1 directory, but it could be more hops.
So in total, to store the 80 IOPS coming out of the cache, you need at least 480 backend IOPS to store it on disk. A normal SATA disk does 90 IOPS, so you see that these backend disks become a bottleneck real quickly. In our simple use case we would need at least 6 drives to make sure we can accommodate the data coming from the cache. If the read IO, which we didn’t take into account, also contains cache misses, those 6 drives will not be enough. It is really painful and costly to add additional SATA disks to a backend which is only 20% full just to make sure you have the spindles to accommodate the data coming from your Tier 1. This is also why Open vStorage likes the Seagate Kinetic drives. The Kinetic drives don’t have a file system, so for Open vStorage they are an IOPS saver: take the same number of SATA drives and Seagate Kinetic drives, and the Kinetic drives will outperform the SATA drives in our use case. Although Open vStorage supports Ceph and Swift, which use a file system on their OSDs, we prefer the Kinetic drives as they provide better performance for the same number of drives. The Seagate Kinetic drives really are a valuable asset to our portfolio of supported backends.
  • Replace broken disks: the trend is to replace bigger chunks of hardware when they fail. Google, a company hyperconverged solutions like to refer to, has been doing this for years: they no longer care about a broken disk and replace complete servers. Big storage arrays are designed to leave dead disks behind, add new nodes and only replace a node once X% of its disks have failed. You don’t want to go to the datacenter every time a disk fails, but with hyperconverged appliances you simply can’t risk leaving a dead disk behind, as you need the spindles for backend performance. Storage maintenance also means you need to move VMs off that host, which is always a risk.
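The arithmetic in the bullets above can be checked with a quick back-of-the-envelope calculation. This is just a sketch using the illustrative figures from the text (20K write IOPS, 1MB chunks, 3-way replication, one directory hop, 90 IOPS per SATA disk), not measurements:

```python
# Capacity: a hyperconverged appliance with 15TB raw and 3-way replication
raw_tb = 15
replication_factor = 3
usable_tb = raw_tb / replication_factor             # 5TB usable
vm_count = 100
gb_per_vm = usable_tb * 1000 / vm_count             # 50GB per Virtual Machine

# Backend IOPS: 20K 4K write IOPS evicted from Tier 1, aggregated into 1MB chunks
write_iops = 20_000
blocks_per_chunk = 250                              # 1MB / 4K
chunk_writes = write_iops // blocks_per_chunk       # 80 chunk writes/s to the backend
replicated_writes = chunk_writes * replication_factor  # 240 with 3-way replication
ios_per_write = 2                                   # +1 IO for one directory hop
backend_iops = replicated_writes * ios_per_write    # 480 backend IOPS

sata_disk_iops = 90
disks_needed = -(-backend_iops // sata_disk_iops)   # ceiling division -> 6 drives

print(f"{gb_per_vm:.0f}GB per VM, {backend_iops} backend IOPS, {disks_needed} SATA disks")
```

Note how quickly the spindle count grows: every extra directory hop or replica multiplies the 80 chunk writes, while cache misses on the read side would come on top of this.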

So let’s look back at what we learned:

  • The lesson learned from converged solutions is that a single SKU makes sense. You have a single point of contact to praise or blame.
  • The lesson learned from hyperconverged solutions is that having your caching layer inside the host is the best solution. Keeping the compute and read and write IO as close as possible makes sense. Having your cold storage inside the same appliance isn’t a good idea for reasons of scalability and performance.
  • Open vStorage keeps these lessons in mind: it keeps the Tier 1 inside the compute host but allows you to scale storage performance and capacity independently. Using the Seagate Kinetic drives as Tier 2 storage makes sense as it is an easy way to increase the backend storage performance.

To summarize, a converged solution with Tier 1 in the host and a scalable backend on top of Kinetic drives is in every aspect a much better solution than a traditional converged or hyperconverged solution if you want to build a cost-effective, scalable platform. The world has been making hamburgers for more than 100 years, and we came to the conclusion that having the french fries on the side is the best option. Putting the fries on top of the burger leaves you with a mess, so in that spirit let’s not do it with our compute and (cold) storage either.

Open vStorage by CloudFounders

In a recent conference call an attendee expressed the following:

There is a real company behind Open vStorage? I thought this was a project done by 2 guys in their basement.

There is a big misconception about open-source projects. Some of these projects are indeed started and maintained by 2 guys in their basement, but on the other hand you see more and more projects to which a couple of hundred people contribute. Take OpenStack as an example: companies such as Red Hat, IBM, HP, Rackspace, SwiftStack, Mirantis, Intel and many more contribute code to this open-source project and actually pay people to work on it.

Open vStorage is a similar project, backed by a real company: CloudFounders. At CloudFounders we love to build technology. People working for CloudFounders have done this for companies such as Oracle/Sun, Symantec, Didigate/Verizon, Amplidata and many more leading technology companies. We have also been active in the open-source community with projects such as Arakoon, our distributed key-value store.

The technology behind Open vStorage is not something we threw together over the last 6 months by gluing some open-source components together and coating them with a nice management layer. The core technology, which basically turns a bucket on your favorite object store into a raw device, was developed from scratch by the CloudFounders R&D and engineering team. We have been working on the core for more than 4 years. We have used the technology in our commercial product, vRun, but decided the best way forward was to open-source it. We believe software-defined storage is too important a piece of the virtualization stack for a proprietary solution that is hypervisor-specific, hardware-specific, management-stack-specific or storage-backend-specific. With Open vStorage we want to build an open, non-proprietary storage layer, but foremost something modular enough to allow developers to innovate on top of Open vStorage.

PS. According to Ohloh, Open vStorage has had 1,384 commits made by 14 contributors representing 55,404 lines of code!

Open vStorage Roadshow: last chance to register

To start 2015 in style, the Open vStorage team is doing a small roadshow in the US and Canada. During these presentations we will discuss the upcoming features in Open vStorage and how we will commercially launch solutions based on Open vStorage. The Toronto and Boston Meetups will be joint sessions with Midokura. During the San Francisco session James Hughes, Principal Technologist at Seagate, will also join us and provide an update on the Kinetic Open Storage platform.

Open vStorage Roadshow:

Note that registration is required and there is an attendee limit.

PS. In case you organize an OpenStack User Group and would like to host an Open vStorage session, contact us by email at

2015, the Open vStorage predictions

The end of 2014 is near, so it is time to look forward and see what 2015 has to offer. Some people say there’s no point in making predictions and that it’s not worth speculating, because nothing is set in stone and things change all the time in storage land. Allow us to prove these people wrong by sharing our 2015 storage predictions*:

Acceleration of (hyper-)converged platforms
Converged platforms are here to stay; converged solutions are even the fastest growing segment for large storage vendors. But it will be players like Open vStorage who really break through in 2015. Hyperconverged solutions showed that there is an alternative to expensive SAN or all-flash solutions by adding a software-based caching layer to the host. Alas, these overhyped hyperconverged solutions are even more expensive per TB of storage than an already expensive all-flash array. In 2015 we will see more solutions which unite the good of the hyperconverged appliance and the converged platform, but at a significantly lower cost. Storage solutions that will be extremely hot and prove to be future-proof will have the following characteristics:

  • Caching inside the (hypervisor) host: caching on SSD or PCIe flash should be done inside the host and not a couple of hops down the network.
  • Scalability: all-flash will continue to be a waste of money due to the huge cost of flash. It is better to go for a tiered solution: Tier 1 on flash, Tier 2 on scalable, cheap (object) storage. If the Tier 1 and Tier 2 storage are inside the same appliance (hyperconverged), your scalability and flexibility will be limited. A much better solution is to keep the 2 Tiers completely separate in different sets of appliances, managed by the same software layer.
  • Open and programmable: storage silos should be a thing of the past in 2015. Integration and openness should be key. Automation will be one of the hottest and most important features of a storage solution.

It should not come as a surprise that Open vStorage checks all of the above requirements.

OpenStack will dominate the cloud in 2015
This is probably the most evident prediction. During the OpenStack conference in Paris it was very clear that OpenStack will dominate the cloud in the next few years. In 2014 some new kids showed support for OpenStack, such as VMware (they understand that hypervisor software is now a commodity and that the data center control plane has become the high-margin battle arena). With VMware releasing their own OpenStack distribution, the OpenStack distribution battlefield will be crowded in 2015. We have Red Hat, Ubuntu, Mirantis, HP, VMware and many more, so it is safe to say that some consolidation will happen in this area.
A new OpenStack battlefield that will emerge in 2015 will be around OpenStack storage. Currently this area is dominated by the traditional arrays, but as software-defined storage solutions gain traction, solutions such as Open vStorage will grab a huge market share from these traditional vendors. They can compete with these SANs and all-flash arrays as they offer the same features, with the benefit of a much lower cost and TCO. While they may not top the total revenue achieved by the big vendors, they will definitely seize a large share of the OpenStack storage market.
If we add the fact that the Chinese government is promoting OpenStack and open-source in general, you can bet your Christmas bonus on open-source (OpenStack) storage projects (Swift, Open vStorage, …) booming next year. These projects will get a lot of support from Chinese companies, both in adoption and in development. It will be essential for traditional high-tech companies and countries not to miss the boat, as once it has left the harbor it will be very hard to catch up.

New markets for object storage vendors
2015 will be the year object storage breaks out of its niche market of large video and object repositories. This has been said for many years now, but 2015 will be THE year, as many companies have started to realize the benefits they achieved by implementing their first object storage projects. The next target for these enterprises is to make better use of their current object storage solution. Changing all of their legacy code will not happen in 2015, as this might impact their business. Solutions where they don’t have to change their existing code base and can still benefit from the cost savings of object storage will be selected. Open vStorage is one of those solutions, but we are pretty sure other solutions, for example storage gateways to object storage, will flourish in 2015 as well.
Another reason why object storage vendors will enter new markets is that currently too many players are after the same customer base. This means that if they want to keep growing and provide a return on the venture capital invested, new markets will definitely need to be addressed. The 15-25 billion dollar SAN market is a logical one to go after. But entering this market will not be a bed of roses, as object storage vendors have no experience in this highly competitive market and sometimes not even the right sales competencies and knowledge. They will have to look for partnerships with companies such as CloudFounders who are seasoned in this area.

Seagate Kinetic drives
The Kinetic drives are the most exciting, fundamental change in the storage industry in several years. These drives went GA at the end of 2014, but in 2015 we will gradually see new applications and solutions that make use of this technology. With these IP drives you will, for the first time, be able to manage storage as a scalable pool of disks. Open vStorage will support the Kinetic drives as a Tier 2 backend. This means Virtual Machines will have their hot data inside the host on flash and their cold data on a pool of Kinetic drives.

* We will look back on this post at the end of 2015 to see how well we scored.

Open vStorage Q1 US Roadshow

OpenStack started to gain momentum in 2014 but will really take off in 2015. Gartner has put software-defined storage in its top 10 technology trends for 2015. The amount of data will continue to grow as the Internet of Things unfolds before our eyes. As Open vStorage sits at the intersection of OpenStack and storage, 2015 will be the year of Open vStorage. To kick off the year in style we are doing a US Roadshow in cooperation with local OpenStack User Groups. We had to make some heartrending decisions and disappoint some groups, as we couldn’t do a Meetup in every community, but below are the lucky ones for the first Open vStorage Roadshow:

During this session we will discuss the current status of the OpenStack storage projects and give an overview of the new features in Open vStorage 2.0 (to be released in Q1 2015).

Note that registration is required and there is an attendee limit.

PS. In case you organize an OpenStack User Group and would like to host an Open vStorage session, contact us by email at

Open vStorage is hot!

Open vStorage is hot! You can find the most important videos, podcasts and articles of the last few weeks here:

  • OpenStack Online Meetup: How does OpenStack Swift play with Open vStorage? Podcast with Swift PTL John Dickinson and our own Wim Provoost, in which they discuss OpenStack storage and how Swift and Open vStorage, 2 open-source projects, work together to shake up the OpenStack ecosystem.
  • The Register: Belgian upstart to ‘bridge gap’ between object, block storage. Or that’s the plan at least. Learn more about the latest development in the Open vStorage project: support for Seagate Kinetic drives. Open vStorage turns a pool of these disks into a storage solution which has the features of a high-end SAN (performance, zero-copy snapshots, thin cloning) but is also scale-out and low-cost, like an object storage solution.
  • Open vStorage and OpenStack – the Storage Switzerland take. Podcast with George Crump, founder and lead analyst at Storage Switzerland, and Wim Provoost, Product Manager of Open vStorage. They discuss the different storage projects in OpenStack and how the Open vStorage Cinder plugin bridges the gap between Swift and Cinder.
  • OpenStack® Summit: CloudFounders Open vStorage: CloudFounders’ CTO Stefaan Aelbrecht explains to HP what Open vStorage is about and where it comes from.

Meet Open vStorage in Paris or Frankfurt!

Open vStorage is an open-source project supported by a spirited community. Without this community Open vStorage would not be the success it is today. To boost that community, CloudFounders (the sponsor behind Open vStorage) will be hosting an Open vStorage booth at the following events:

We would like to welcome contributors, users and supporters at our booth. In case you would like to discuss things with the team or have burning questions, we are there for you. So look for the CloudFounders/Open vStorage booth and stop by to say “HI” or “Bonjour”.