The Edge, a lightweight block device

When I present the new Open vStorage architecture for Fargo, I almost always receive the same question about the Edge:

What is the Edge and why did you develop it?

What is the Edge about?

The Edge is a lightweight software component which can be installed on a Linux host. It exposes a block device API and connects to the Storage Router across the network (TCP/IP or RDMA). Essentially, the application believes it is talking to a local block device (the Edge) while the volume actually runs on another host (the Storage Router).
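To make the idea concrete, here is a minimal sketch of how a block request might be framed on the wire between an Edge-style client and a remote Storage Router. The wire format (one-byte opcode, eight-byte offset, four-byte length, big-endian) is purely illustrative; it is not the actual Open vStorage protocol.

```python
import struct

# Hypothetical wire header: opcode (1 byte), volume offset (8 bytes),
# payload length (4 bytes), big-endian. An assumption for illustration.
HEADER = struct.Struct(">BQI")
OP_READ, OP_WRITE = 0, 1

def encode_request(opcode, offset, length, payload=b""):
    """Frame a block request for transport over TCP to the Storage Router."""
    return HEADER.pack(opcode, offset, length) + payload

def decode_request(frame):
    """Split a received frame back into header fields and payload."""
    opcode, offset, length = HEADER.unpack_from(frame)
    return opcode, offset, length, frame[HEADER.size:]
```

The point is only that the block API the application sees is a thin shim; the actual read or write is serviced on whichever host runs the volume.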

Why did we develop the Edge?

The reason we developed the Edge is quite simple: componentization. With Open vStorage we mainly deal with large, multi-petabyte deployments, and the Edge component brings additional benefits in such environments:


Scale compute and storage independently

In large environments you want to be able to scale the compute and storage parts independently. If you run Open vStorage hyper-converged, as advised with earlier versions, this isn't possible: if you need more RAM or CPU to run VMs, you also have to invest in more SSDs. With the Edge you can scale compute and storage independently.

Guaranteed performance

With Eugene, the Volume Driver, the high-performance distributed block layer, ran on the compute host together with the VMs. As a result, the VMs and the Volume Driver fight for the same CPU and RAM resources, a typical issue with hyper-converged solutions. The Edge component avoids this problem: it runs on the compute hosts and requires only a small amount of resources, while the Volume Drivers run on dedicated nodes and hence provide a predictable and consistent amount of IOPS to the VMs.

Limit the Impact of Updates

Storage software updates are a (storage) administrator's worst nightmare. In previous Open vStorage versions, an update of the Volume Driver required all VMs on that node to be migrated or brought down. With the Edge, the Volume Driver can be updated in the background: each Edge/compute host has HA features and can automatically connect to another Volume Driver on request, without the need for a VM migration.
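The failover behaviour described above can be sketched as a simple retry loop over known Volume Driver endpoints. Everything here (`submit_io`, the `send` callable, the endpoint names) is hypothetical, illustrating the idea rather than the real Edge API:

```python
class VolumeDriverUnavailable(Exception):
    pass

def submit_io(endpoints, request, send):
    """Try each known Volume Driver endpoint in turn and return the first
    successful reply. If the local Volume Driver is down (e.g. being
    updated), the Edge transparently reconnects to another one instead of
    forcing a VM migration. `send` is a stand-in transport callable."""
    for endpoint in endpoints:
        try:
            return send(endpoint, request)
        except ConnectionError:
            continue  # this Volume Driver is unreachable, try the next
    raise VolumeDriverUnavailable("no Volume Driver reachable")
```

From the VM's point of view nothing changes: the block device stays up while the request is quietly rerouted.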

From A(pp) to B(ackend) – no compromise

While giving presentations I often get the question how Open vStorage differs from other block or scalable storage solutions on the market. My answer to that question is the following:

It is the only no-compromise storage platform as it combines the best of block, file and object storage into one storage platform.

Allow me to explain in more detail why I'm confident that Open vStorage fits that description. For many readers the first part (block and object) will be well known, but for the sake of clarity I'd like to start with it.

Block and object:

Today there are two types of storage solutions that matter in the storage market: block and object storage.

  • Block storage, typically used for virtual machines and IO-intensive applications such as databases, is best known for its performance. It provides high-bandwidth, low-latency storage, and its value is typically expressed in IOPS/$. It also offers advanced data management features such as zero-copy snapshots, linked clones, etc. The drawback of these block storage solutions is that they have limited scalability and are constrained to a single location. SANs, the most common block storage solution these days, are not only vulnerable to site failures; even a two-disk failure can cause major data loss. Traditional names selling block storage are EMC, NetApp and HPE (3PAR), and almost every big-name vendor has a flagship SAN.
  • Object storage, typically used to store files and backups, is designed to be extremely scalable. To make sure data is stored safely against every possible disaster, it gets distributed across multiple locations. This distributed approach comes at the cost of higher latency and lower bandwidth compared to block storage. Object storage solutions also offer only a simple interface (get/put) without the advanced data management features. SwiftStack (Swift), Amplidata, Cleversafe and Scality are well-known names selling object storage solutions.

If you analyse the pros and cons of both solutions, it is easy to see that these are two completely different but complementary solutions.

|   | Block Storage | Object Storage |
|---|---------------|----------------|
| + | High performance, low latency, advanced data management | Highly distributed, fault tolerant, highly scalable |
| - | Limited scalability, single site | Slow performance, no snapshots or clones |

Data Flow:

If you look at how Open vStorage takes care of the data flow from an application to the backend and back, it is easy to see that Open vStorage is no-compromise storage: it takes the best of both the block and the object world and combines it into a single solution. Allow me to explain by means of the different layers Open vStorage is built upon:

Open vStorage Data Flow

Open vStorage offers applications a wide set of access protocols: block (QEMU, native/iSCSI), file (NFS/SMB), object (S3 & Swift), HDFS and many more. Underneath this pass-through interface layer, all IO requests receive a performance boost from the Acceleration Layer. This layer exposes itself as a block storage layer and uses SSDs, PCIe flash cards and a log-structured approach to offer unmatched performance. On a write, data gets appended to the write buffer of that application and is immediately acknowledged to the application, which allows for the sub-millisecond latency required by databases. On top of that, each virtual disk gets its own write buffer, so the IO-blender effect is completely eliminated.
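The append-and-acknowledge behaviour of the Acceleration Layer can be sketched as a tiny per-volume buffer. This is an assumed simplification for illustration, not the actual Volume Driver code:

```python
class WriteBuffer:
    """Sketch of a log-structured, per-volume write buffer: writes are
    appended and acknowledged immediately, then later drained in bulk
    so they can be bundled for the backend (assumed behaviour)."""

    def __init__(self):
        self._log = []  # (offset, data) tuples in arrival order

    def write(self, offset, data):
        self._log.append((offset, data))  # append-only, no seek
        return "ack"  # acknowledged as soon as the append lands

    def drain(self):
        """Hand off the buffered writes, e.g. to form an SCO."""
        bundled, self._log = self._log, []
        return bundled
```

Because the acknowledgement depends only on a local append, latency stays low; and because every virtual disk gets its own buffer, interleaved writes from different disks never blend into one random stream.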

Once data leaves the Acceleration Layer, it goes into the Data Management Layer, which offers the same data management functionality as high-end SANs: zero-copy snapshots, quick cloning, Distributed Transaction Logs (protection against an SSD failure) and many more. After the Data Management Layer, data goes to the Distribution Layer. Here, the incoming writes that were bundled by the Acceleration Layer into Storage Container Objects (SCOs) are optimized to remain always accessible at minimal overhead. Typically each object (a collection of consecutive writes) is chopped into different fragments and extended with some parity fragments. These fragments are ultimately stored across different nodes or even datacenters.
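The fragment-plus-parity idea can be illustrated with the simplest possible case: split an SCO into k data fragments and add a single XOR parity fragment, so any one lost fragment can be rebuilt from the survivors. A real deployment would use a more general erasure code; the function names, k, and the zero-padding here are all illustrative.

```python
def fragment_sco(sco, k):
    """Chop an SCO into k equal-size data fragments plus one XOR parity
    fragment (a stand-in for a more general erasure code)."""
    size = -(-len(sco) // k)  # ceiling division
    frags = [sco[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytearray(size)
    for frag in frags:
        for i, byte in enumerate(frag):
            parity[i] ^= byte
    return frags, bytes(parity)

def recover_fragment(frags, parity, lost):
    """Rebuild the fragment at index `lost` by XOR-ing all surviving
    fragments with the parity fragment."""
    rebuilt = bytearray(parity)
    for j, frag in enumerate(frags):
        if j == lost:
            continue
        for i, byte in enumerate(frag):
            rebuilt[i] ^= byte
    return bytes(rebuilt)
```

Spread those fragments across nodes or datacenters and the loss of any single one no longer costs you data, which is exactly the property the Distribution Layer is after.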
The next layer takes care of the optional encryption and compression of the different fragments before they are dispatched to the backend with the appropriate write protocol.
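As a rough sketch of this per-fragment packing step, each fragment can be compressed independently before dispatch. zlib stands in for whatever codec the layer actually uses, and the optional encryption step (which would follow compression via a crypto library) is omitted here:

```python
import zlib

def pack_fragment(fragment, level=6):
    """Compress a fragment before it is dispatched to the backend.
    Encryption, when enabled, would be applied to the compressed
    bytes next; it is left out of this sketch."""
    return zlib.compress(fragment, level)

def unpack_fragment(blob):
    """Inverse: decompress a fragment fetched back from the backend."""
    return zlib.decompress(blob)
```

Compressing before encrypting is the usual ordering, since encrypted data is effectively incompressible.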

If you look at this data flow from a distance, you will see that the Acceleration and Data Management Layers give Open vStorage the positive features of block storage: superb performance, low latency, zero-copy snapshots, quick cloning, etc. The Distribution and Compression Layers give Open vStorage the favorable features of object storage: scalability, a highly distributed design, the ability to survive site failures, etc.

To conclude, Open vStorage truly is the only storage solution which combines the best of both the block and object storage world in a single solution. Told you so!