Connecting Open vStorage with Amazon

In an earlier blog post we discussed that Open vStorage is the storage solution to implement a hybrid cloud. In this blog post we explain the technical details of how Open vStorage can be used in a hybrid cloud context.

The components

For frequent readers of this blog the different Open vStorage components should not hold any secrets anymore. For newcomers we will give a short overview of the different components:

  • The Edge: a lightweight software component which exposes a block device API and connects across the network to the Volume Driver.
  • The Volume Driver: a log structured volume manager which converts blocks into objects (a conceptual sketch of this data path follows the list).
  • The ALBA Backend: an object store optimized as backend for the Volume Driver.
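
To make the data path concrete, below is a minimal conceptual sketch in Python of how a block write travels through these components. The class and method names are purely illustrative and do not correspond to the actual Open vStorage APIs: the Edge forwards block writes, the Volume Driver appends them to a log and turns them into objects, and the ALBA backend stores those objects.

```python
# Conceptual sketch of the Open vStorage data path (illustrative names only,
# not the real APIs): Edge -> Volume Driver -> ALBA backend.

class AlbaBackend:
    """Object store: keeps immutable objects under a key."""
    def __init__(self):
        self.objects = {}

    def put(self, key: str, payload: bytes) -> None:
        self.objects[key] = payload


class VolumeDriver:
    """Log-structured volume manager: buffers block writes and
    flushes them to the backend as objects."""
    def __init__(self, backend: AlbaBackend, blocks_per_object: int = 4):
        self.backend = backend
        self.blocks_per_object = blocks_per_object   # illustrative grouping
        self.write_buffer = []                       # (lba, data) entries
        self.object_counter = 0

    def write(self, lba: int, data: bytes) -> None:
        self.write_buffer.append((lba, data))
        if len(self.write_buffer) >= self.blocks_per_object:
            self._flush()

    def _flush(self) -> None:
        payload = b"".join(d for _, d in self.write_buffer)
        self.backend.put(f"object_{self.object_counter:08d}", payload)
        self.object_counter += 1
        self.write_buffer.clear()


class Edge:
    """Lightweight block-device front-end: forwards block I/O to a Volume Driver."""
    def __init__(self, volume_driver: VolumeDriver):
        self.vd = volume_driver

    def write_block(self, lba: int, data: bytes) -> None:
        # In reality this call crosses the network; here it is a direct call.
        self.vd.write(lba, data)


edge = Edge(VolumeDriver(AlbaBackend()))
for lba in range(8):
    edge.write_block(lba, b"\x00" * 4096)   # eight 4 KiB block writes
```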

Let’s see how these components fit together in a hybrid cloud context.

The architecture

The 2 main parts of any hybrid cloud are an on-site, private part and a public part. Key in a hybrid cloud is that data and compute can move between the private and the public part as needed. For this thought exercise we take the example where we want to store data on premises in our private cloud and burst into the public cloud with compute when needed. To achieve this we need to install the components as follows:

The Private Cloud part
In the private cloud we install the ALBA backend components to create one or more storage pools. All SATA disks are gathered in a capacity backend while the SSD devices are gathered in a performance backend which accelerates the capacity backend. On top of these storage pools we will deploy one or more vPools. To achieve this we run a couple of Volume Driver instances inside our private cloud. On-site compute nodes with the Edge component installed can use these Volume Drivers to store data on the capacity backend.
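
As a rough illustration of this layout, the sketch below describes the private cloud topology as plain Python data. The names and structure are hypothetical and are not an Open vStorage configuration format; they only show how SATA and SSD devices map onto a capacity and a performance backend, and how a vPool ties Volume Drivers and Edge nodes to those backends.

```python
# Hypothetical description of the on-premises layout discussed above.
# Plain data for illustration only, not an Open vStorage configuration format.

private_cloud = {
    "alba_backends": {
        "capacity": {                       # all SATA disks
            "devices": ["sata-0", "sata-1", "sata-2", "sata-3"],
            "role": "capacity",
        },
        "performance": {                    # all SSDs, accelerates the capacity backend
            "devices": ["ssd-0", "ssd-1"],
            "role": "cache",
            "accelerates": "capacity",
        },
    },
    "vpools": {
        "vpool01": {
            "backend": "capacity",
            "volume_drivers": ["storage-node-1", "storage-node-2"],
        },
    },
    "compute_nodes": [
        {"name": "hv-01", "edge": True, "connects_to": "storage-node-1"},
        {"name": "hv-02", "edge": True, "connects_to": "storage-node-2"},
    ],
}
```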

The Public Cloud part
For the public cloud part, let's assume we use Amazon AWS; there are multiple options depending on the desired performance. In case we don't require a lot of performance we can use an Amazon EC2 instance with KVM and the Edge installed. To bring a vDisk live in Amazon, a connection is made across the internet with the Volume Driver in the private cloud. Alternatively an AWS Direct Connect link can be used for a lower latency connection. Writes to a vDisk exposed in Amazon are sent by the Edge to the write buffer of the Volume Driver. This means that writes are only acknowledged to the application using the vDisk once the on-premises write buffer has received the data. Since the Edge and the Volume Driver connect over a rather high latency link, the write performance isn't optimal in this case.
In case more performance is required, an additional Storage Optimized EC2 instance with one or more NVMe SSDs is needed. In this second EC2 instance a Volume Driver instance is installed and the vPool is extended from the on-site, private cloud into Amazon. The NVMe devices of the EC2 instance are used to store the write buffer and the metadata DBs. It is of course possible to add some EBS Provisioned IOPS SSDs to the EC2 instance as a read cache. For even better performance, use dedicated Open vStorage powered cache nodes in Amazon. Since the write buffer is located in Amazon, the latency will be substantially lower than in the first setup.
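
The performance difference between the two options comes down to where the write buffer lives relative to the Edge. The short sketch below works through that arithmetic with assumed round-trip times; the numbers are illustrative, not measurements.

```python
# Rough write-acknowledgement latency for both options (illustrative numbers).
# A write is acknowledged once the write buffer has received the data, so the
# dominant term is the round trip between the Edge and its Volume Driver.

def ack_latency_ms(edge_to_vd_rtt_ms: float, buffer_write_ms: float) -> float:
    """Network round trip to the Volume Driver plus landing the data in the write buffer."""
    return edge_to_vd_rtt_ms + buffer_write_ms

# Option 1: Edge in EC2, Volume Driver (and write buffer) on premises over the WAN.
wan_option = ack_latency_ms(edge_to_vd_rtt_ms=40.0, buffer_write_ms=0.1)

# Option 2: Edge and Volume Driver both in EC2, write buffer on local NVMe.
local_option = ack_latency_ms(edge_to_vd_rtt_ms=0.5, buffer_write_ms=0.1)

print(f"WAN write buffer:   ~{wan_option:.1f} ms per acknowledged write")
print(f"Local write buffer: ~{local_option:.1f} ms per acknowledged write")
```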

Use cases

As the last part of this blog post we want to discuss some use cases which can be deployed on top of this hybrid cloud.

Analytics
Based upon the above architecture, a vDisk in the private cloud can be cloned into Amazon. The cloned vDisk can be used for business analytics inside Amazon without impacting the live workloads. When the analytics job is finished, the clone can be removed. The other way around is of course also possible: the application data is stored in Amazon while the business analytics run on on-site compute hardware.

Disaster Recovery
Another use case is disaster recovery. As disaster recovery requires data to be on premises as well as in the cloud, additional instances with a large number of HDDs need to be added. Replication or erasure coding can be used to spread the data across the private and public cloud. In case of a disaster where the private cloud is destroyed, one can simply add more compute instances running the Edge to bring the workloads live in the public cloud.

Data Safety
A last use case we want to highlight is for users that want to use public clouds but don't trust these public cloud providers with all of their data. In that case you need some instances in each public cloud which are optimized for storing data. Erasure coding is used to chop the data into encrypted fragments. These fragments are spread across the public clouds in such a way that none of the public clouds stores the complete data set, while the Edges and the Volume Drivers can still see the whole data set.
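
A minimal sketch of that idea, with assumed parameters: with a k+m erasure-coding policy, any k fragments are enough to rebuild the data, so as long as each individual public cloud holds fewer than k fragments, no single provider can reconstruct the data set on its own, while the Edges and Volume Drivers, which see all clouds, still can.

```python
# Illustrative k+m erasure-coding spread across public clouds (assumed numbers).
# Any k fragments reconstruct the data; a cloud holding fewer than k cannot.

k, m = 4, 2                                  # 4 data + 2 parity fragments
clouds = ["cloud-a", "cloud-b", "cloud-c"]

# Round-robin placement of the k + m fragments across the clouds.
placement = {cloud: [] for cloud in clouds}
for fragment_id in range(k + m):
    placement[clouds[fragment_id % len(clouds)]].append(fragment_id)

for cloud, fragments in placement.items():
    can_reconstruct = len(fragments) >= k
    print(f"{cloud}: {len(fragments)} fragments -> can reconstruct alone: {can_reconstruct}")

# The cluster as a whole still tolerates the loss of any m fragments.
print(f"Cluster survives the loss of up to {m} fragments out of {k + m}")
```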

Hybrid cloud, the phoenix of cloud computing

Introduction


Hybrid cloud, an integration of on-site, private clouds and public clouds, has been declared dead many times over the past few years, but like a phoenix it keeps resurrecting in the yearly IT technology and industry forecasts.

Limitations, hurdles and issues

Let’s first have a look at the numerous reasons why the hybrid cloud computing trend hasn’t taken off (yet):

  • Network limitations: connecting to a public cloud was often cumbersome as it required all traffic to go over slow, high-latency public internet links.
  • Storage hurdles: implementing a hybrid cloud approach means storing data multiple times and keeping these multiple copies in sync.
  • Integration complexity: each cloud, whether private or public, has its own interface and standards which make integration unnecessarily difficult and complex.
  • Legacy IT: existing on-premises infrastructure is a reality and holds back a move to the public cloud. Next to the infrastructure component, applications were not built or designed in such a way that they can scale up and down, nor were they designed to store their data in an object store.

Taking the above into account, it shouldn't come as a surprise that many enterprises saw public cloud computing as a check-in at Hotel California. The technical difficulties and the cost and risk of moving back and forth between clouds were just too big. But times are changing. According to McKinsey & Company, a leading management consulting firm, enterprises are planning to transition IT workloads to a hybrid cloud infrastructure at a significant rate and pace over the next 3 years.

Hybrid cloud (finally) taking off

I see a couple of reasons why the hybrid cloud approach is finally taking off:

Edge computing use case
Smart ‘devices’ such as self-driving cars are producing such large amounts of data that they can’t rely on public clouds to process it all. The data sometimes even drives real-time decisions where latency might be the difference between life and death. As a natural evolution, this requires computing power to shift to the edges of the network. This Edge or Fog Computing concept is a textbook example of a hybrid cloud where on-site, or should we call it on-board, computing and centralized computing are grouped together into a single solution.

The network limitations are removed
The network limitations have been removed by services like AWS Direct Connect, which gives you a dedicated network connection from your premises to the Amazon cloud. All big cloud providers now offer the option of a dedicated network into their cloud. Pricing for dedicated 10GbE links in metropolitan regions like New York has also dropped significantly. For under $1,000 a month you can now get a sub-millisecond fibre connection from most buildings in New York to one of the many data centers in the city.

Recovery realisation
More and more enterprises with a private cloud realise the need for a disaster recovery plan.
In the past this meant getting a second private cloud. This approach multiplies the TCO by at least a factor of 2, as twice the amount of hardware needs to be purchased, and keeping both private clouds in sync only makes disaster recovery plans more complex. Enterprises are now turning disaster recovery from a cost into an asset. They use cheap, public cloud storage to store their off-site backups and copies. By adding compute capacity in peak periods or when disaster strikes, they can bring these off-site copies online when needed. On top of that, business analytics can also use these off-site copies without impacting the production workloads.

Standardization
Over the past years, standards in cloud computing have crystallized. In the public cloud Amazon has set the standard for storing unstructured data. On the private infrastructure side, the OpenStack ecosystem has made significant progress in streamlining and standardizing how complete clouds are deployed. Companies such as Cisco are now focusing on new services to manage and orchestrate clouds in order to smooth out the last bumps in the migration between different clouds.

Storage & legacy hardware: the problem children

Based upon the previous paragraphs one might conclude that all obstacles to moving to the hybrid model have been cleared. This isn't the case, as 2 issues still stand out:

The legacy hardware problem
All current public cloud computing solutions ignore the reality that enterprises have a hardware legacy. While starting from scratch is the easiest solution, it is definitely not the cheapest. In order for the hybrid cloud to be successful, existing hardware must in some shape or form be able to be integrated into the hybrid cloud.

Storage roadblocks remain
In case you want to make use of multiple clouds, the only option you have is to store a copy of each bit of data in each cloud. This x-way replication scheme solves the issue of data being available in all cloud locations, but at a high cost. Next to the replication cost, replication also adds significant latency, as writes can only be acknowledged once all locations are up to date. This means that hybrid clouds which span the east and west coast of the US are not workable when replication is used.

Open vStorage removes those last obstacles

Open vStorage, a software-based storage solution, allows multi-datacenter block storage in a much more nimble and cost-effective way than any traditional solution. This way it removes the last roadblocks to hybrid cloud adoption.
Solving the storage puzzle
Instead of X-way replication, Open vStorage uses a different approach which can be compared to solving a Sudoku puzzle. All data is chopped up into chunks and some additional parity chunks are adjoined. All these chunks, the data and parity chunks, are distributed across all the nodes, datacenters and clouds in the cluster. The number of parity chunks can be configured and allows, for example, recovery from a multi-node failure or a complete data center loss. A failure, whether it is a disk, node or data center, will cross out some numbers from the Sudoku puzzle, but as long as you have enough numbers left, you can still solve it. The same goes for data stored with Open vStorage: as long as you have enough chunks (disks, nodes, data centers or clouds) left, you can always recover the data.
Unlike X-way replication, where data is only acknowledged once all copies are stored safely, Open vStorage allows data to be stored sub-optimally: writes can be acknowledged before all data chunks are written to disk. This makes sure that a single slow disk, datacenter or cloud doesn't hold back applications and incoming writes. This approach lowers the write latency while keeping data safety at a high level.
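
To make the Sudoku analogy concrete, here is a small sketch with assumed policy numbers: k data chunks plus m parity chunks are spread out, the cluster survives any m lost chunks, and a write can already be acknowledged once a configurable quorum of chunks, fewer than all k+m, has landed on disk.

```python
# Illustrative erasure-coding policy (numbers assumed for the example):
# k data chunks, m parity chunks, and a write quorum below k + m so a single
# slow disk, datacenter or cloud does not hold back the acknowledgement.

k, m = 8, 4                   # 8 data chunks + 4 parity chunks per stripe
total_chunks = k + m
write_quorum = k + 2          # acknowledge once 10 of the 12 chunks are on disk

def can_recover(chunks_remaining: int) -> bool:
    """Data is recoverable as long as at least k chunks survive."""
    return chunks_remaining >= k

def can_acknowledge(chunks_written: int) -> bool:
    """Acknowledge the write as soon as the quorum is reached."""
    return chunks_written >= write_quorum

print(can_acknowledge(10))            # True: 2 slow targets do not delay the ack
print(can_recover(total_chunks - m))  # True: survives the loss of m chunks
print(can_recover(k - 1))             # False: one chunk short of the minimum
```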

Legacy hardware
Open vStorage also allows legacy storage hardware to be included. As Open vStorage is a software-based storage solution, it can turn any x86 hardware into a piece of the hybrid storage cloud.
Open vStorage leverages the capabilities of new media technologies like SSDs and PCI-e flash but also those of older technologies like large-capacity traditional SATA drives. For applications that need above-par performance, additional SSDs and PCI-e flash cards can be added.

Summary

Hybrid cloud has long been a model chased by many enterprises without any luck. Issues such as network and storage limitations and integration complexity have been major roadblocks on the hybrid cloud path. Over the last few years a lot of these roadblocks have been removed, but issues with storage and legacy hardware remained. Open vStorage overcomes these last obstacles and paves the way towards hybrid cloud adoption.

2017, the Open vStorage predictions

2017 promises to be an interesting year for the storage industry. New technology is knocking at the door and present technology will not surrender without a fight. Not only will new technology influence the market, but the storage market itself is morphing:

Further Storage consolidation

Let's say that December 2015 was an appetizer, with NetApp buying SolidFire. But in 2016 the storage market went through the first wave of consolidation: Docker storage start-up ClusterHQ shut its doors, Violin Memory filed for Chapter 11, Nutanix bought PernixData, NexGen was acquired by Pivot3, Broadcom acquired Brocade and Samsung acquired Joyent. Lastly, there was also the mega merger between storage mogul EMC and Dell. This consolidation trend will continue in 2017, as the environment for hyper-converged, flash and object storage startups is getting tougher now that all the traditional vendors offer their own flavor. As the hardware powering these solutions is commodity, the only differentiator is software.

Some interesting names to keep an eye on for M&A action or closure: Cloudian, Minio, Scality, Scale Computing, Stratoscale, Atlantis Computing, HyperGrid/Gridstore, Pure Storage, Tegile, Kaminario, Tintri, Nimble Storage, SimpliVity, Primary Data, … We are pretty sure some of these names will not make it past 2017.

Open vStorage already has a couple of large projects lined up. 2017 sure looks promising for us.

The Hybrid cloud

Back from the dead like a phoenix, I expect a new life for the hybrid cloud. Enterprises increasingly migrated to the public cloud in 2016 and this will only accelerate, both in speed and numbers. There are now 5 big clouds: Amazon AWS, Microsoft Azure, IBM, Google and Oracle.
But connecting these public clouds with in-house datacenter assets will be key. The gap between public and private clouds has never been smaller. AWS and VMware, 2 front runners, are already offering products to migrate between both solutions. Network infrastructure (performance, latency) is now finally also capable of turning the hybrid cloud into reality. Numerous enterprises will realise that going to the public cloud isn't the only option for future infrastructure. I believe migration of storage and workloads will be one of the hottest features of Open vStorage in 2017. Hand in hand with the migration of workloads, we will see the birth of various new storage-as-a-service providers offering S3 object storage as well as secondary and even primary storage out of the public cloud.

On a side note, HPE (Helion), Cisco (Intercloud) and telecom giant Verizon closed their public clouds in 2016. It will be good to keep an eye on these players to see what they are up to in 2017.

The end of Hyper-Convergence hype

In the storage market predictions for 2015 I predicted the rise of hyper-convergence. Hyper-converged solutions have lived up to expectations and have become mature software solutions. I believe 2017 will mark a turning point for the hyper-convergence hype. Let's sum up some reasons for the end of the hype cycle:

  • The hyper-converged market is mature and the top use cases have been identified: SMB environments, VDI and Remote Office/Branch Office (ROBO).
  • Private and public clouds are becoming more and more centralised and large-scale. More enterprises will come to understand that the one-size-fits-all, everything-in-a-single-box approach of hyper-converged systems doesn't scale to the datacenter level; this is typically where hyper-converged solutions reach their limits.
  • The IT world works like a pendulum. Hyper-convergence brought flash as a cache into the server because the latency to fetch data over the network was too high. With RDMA and round-trip times of 10 usec and below, the latency of the network is no longer the bottleneck. The pendulum is now changing direction, as the web-scalers, the companies on which the hyper-convergence hype is modeled, want to disaggregate storage by moving flash out of each individual server into more flexible, centralized repositories.
  • Flash, Flash, Flash, everything is becoming flash. As stated earlier, the local flash device was used to accelerate slow SATA drives. With all-flash versions, these hyper-converged solutions go head to head with all-flash arrays.

One of the leaders of the hyper-converged pack has already started to move into the converged infrastructure direction by releasing a storage only appliance. It will be interesting to see who else follows.

With the new Fargo architecture, which is designed for large-scale, multi-petabyte, multi-datacenter environments, we already capture the next trend: meshed, hyper-aggregated architectures. The Fargo release supports RDMA, allows building all-flash storage pools and incorporates a distributed cache across all flash in the datacenter. 100% future proof and ready to kickstart 2017.

PS. If you want to run Open vStorage hyper-converged, feel free to do so. We have componentized Open vStorage so you can optimize it for your use case: run everything in a single box or spread the components across different servers or even datacenters!

IoT storage lakes

More and more devices are connected to the internet. This Internet of Things (IoT) is poised to generate a tremendous amount of data. Not convinced? Intel research, for example, estimated that autonomous cars will produce 4 terabytes of data per car per day. These big data lakes need a new type of storage: storage which is ultra-scalable. Traditional storage is simply not suited to processing this amount of data. On top of that, in 2017 we will see artificial intelligence increasingly being used to mine the data in these lakes, which means the performance of the storage needs to be able to serve real-time analytics. Since IoT devices can be located anywhere in the world, geo-redundancy and geo-distribution are also required. Basically, IoT use cases are a perfect match for the Open vStorage technology.
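
To put a number on "tremendous", here is a quick back-of-the-envelope calculation based on the Intel estimate above; the fleet size is an assumption for illustration only.

```python
# Back-of-the-envelope data volume for a fleet of autonomous cars,
# using Intel's ~4 TB/day/car estimate; the fleet size is an assumed example.

tb_per_car_per_day = 4
fleet_size = 10_000                       # hypothetical fleet
days_per_year = 365

daily_tb = tb_per_car_per_day * fleet_size
yearly_pb = daily_tb * days_per_year / 1_000

print(f"Daily ingest:  {daily_tb:,} TB")       # 40,000 TB per day
print(f"Yearly ingest: {yearly_pb:,.0f} PB")   # ~14,600 PB per year
```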

Some interesting fields and industries to follow are consumer goods (smart thermostats, IP cameras, toys, …), automotive and healthcare.

Software for the hybrid cloud future

The demand for data storage shows no sign of slowing down. According to IDC, in 2012 external disk storage capacity rose 27 percent over the previous year to reach over 20 exabytes, worth a staggering $24 billion in annual spending. Media and entertainment as well as web-based services are consuming vast quantities of data storage, and the arrival of big data analytics is accelerating the growth further. But quantity of storage is simply not enough, especially as organisations increasingly need to share data or make it available for inspection by third parties such as regulators or partners.

Organisations with the largest data repositories are looking at spiralling costs. The requirements to have data available, secure and in a form that can undergo analysis through technologies such as Hadoop are becoming daunting.

The largest cloud providers such as Amazon, Google and Microsoft are using their economies of scale to offer ‘pay as you go’ type business models. But even their relatively low per GB costs quickly add up as data volumes continue to grow without expiry dates. It is also clear that certain use cases do not work with basic cloud services.

For example, delivering VDI directly from Amazon S3 is feasible but probably not economically viable, while storing patient data in a public cloud would cause all kinds of compliance concerns for healthcare providers. Even traditional relational database driven applications are better suited to big iron on premises than to clouds. In fact, the examples where moving data into the cloud offers fewer benefits than keeping it on premises are numerous.

Instead, the hybrid model is starting to become the preferred route for organisations that need some data to be close and performant and other sets to reside in cheaper and slower clouds. The building of these hybrid clouds is an early example of the power of software defined storage.

Let’s provide a use case that many will recognise. A large enterprise has a production environment with several critical applications including a transactional database, an ERP system and several web based applications. The enterprise could be within financial services, life sciences, M&E – it doesn’t really matter which industry as the most critical element is the petabyte of data spread between real-time and archive stores that the business needs to run its day-to-day operations and longer term commitments.

Like many large enterprises, it maintains its own data centre and several regional ICT hubs as well as a disaster recovery data centre. This disaster recovery position is kept perfectly synced with the main centre with the ability to failover rapidly to ensure that the business can continue even with a physical site issue such as fire, flood or similar disruption.

Sound familiar? It should, as this setup is used by enterprises across the globe. Enterprises spend trillions of dollars on redundant IT infrastructure which is often never used but still critical to guard against failure. In an ideal world, organisations could instead replicate data into a cloud and spin up a replacement environment only when needed: a complete production environment that would pop up on demand, based on an accurate snapshot of the current data position of the live environment.

Software defined storage, and Open vStorage in particular, has the capability to make this hybrid DR use case finally viable. As production environments move from physical to entirely virtualised workloads, SDS technologies that use time-based storage allow for zero-copy snapshots, cloning and thin provisioning, which are ideal for spinning up VMs from master templates in a fast and very efficient way. Better still, testing an entire disaster recovery position in a cloud would be relatively straightforward and repeatable.
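
As a conceptual illustration (not the Open vStorage implementation), the sketch below shows why zero-copy snapshots and clones are cheap in a time- or log-based storage design: a snapshot or clone is just a new layer that references its parent, and only its own changes are recorded on top.

```python
# Conceptual copy-on-write snapshot/clone, illustrating why "zero copy" is cheap
# in a time/log-based design. Not the Open vStorage implementation.

class Volume:
    def __init__(self, parent: "Volume | None" = None):
        self.parent = parent          # snapshot/clone chain
        self.blocks = {}              # only this volume's own writes

    def write(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data

    def read(self, lba: int) -> bytes:
        # Walk up the chain until a layer has written this block.
        if lba in self.blocks:
            return self.blocks[lba]
        if self.parent is not None:
            return self.parent.read(lba)
        return b"\x00" * 4096         # unwritten block

    def clone(self) -> "Volume":
        # "Zero copy": no data is copied, the clone just references its parent.
        return Volume(parent=self)


production = Volume()
production.write(0, b"live data")
dr_test = production.clone()          # instant, no data movement
dr_test.write(0, b"test change")      # diverges without touching production
print(production.read(0), dr_test.read(0))
```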

But let's not get ahead of ourselves. The technology is here and small-scale examples of these hybrid clouds used for disaster recovery are starting to appear. The larger enterprises have entrenched positions and huge investments in the status quo, so they have less desire to move. However, the up-and-coming enterprises that have already embraced cloud and virtualisation will be able to benefit further from software defined storage in ways that will reduce their costs, giving them a competitive advantage that rivals will be forced to counter. DR is just one use case where software defined storage will shake up the industry, and through the small-scale examples that we are working on with our partners, the theories will be tested and the cost savings proven in the real world.

In our view, the future is bright, and the cloud future is certainly hybrid!