2017, the Open vStorage predictions

2017 promises to be an interesting year for the storage industry. New technology is knocking at the door and the incumbent technology will not surrender without a fight. But it isn’t only new technology that will influence the market; the storage market itself is morphing:

Further Storage consolidation

Let’s say that December 2015, with NetApp buying SolidFire, was the appetizer. In 2016 the storage market went through its first real wave of consolidation: Docker storage start-up ClusterHQ shut its doors, Violin Memory filed for Chapter 11, Nutanix bought PernixData, NexGen was acquired by Pivot3, Broadcom acquired Brocade and Samsung acquired Joyent. Lastly there was also the mega-merger between storage mogul EMC and Dell. This consolidation trend will continue in 2017, as the environment for hyper-converged, flash and object storage startups is getting tougher now that all the traditional vendors offer their own flavor. As the hardware powering these solutions is commodity, the only differentiator is software.

Some interesting names to keep an eye on for M&A action or closure: Cloudian, Minio, Scality, Scale Computing, Stratoscale, Atlantis Computing, HyperGrid/Gridstore, Pure Storage, Tegile, Kaminario, Tintri, Nimble Storage, SimpliVity, Primary Data, … We are pretty sure some of these names will not make it past 2017.

Open vStorage already has a couple of large projects lined up. 2017 sure looks promising for us.

The Hybrid cloud

Back from the dead like a phoenix: I expect a new life for the hybrid cloud. Enterprises increasingly migrated to the public cloud in 2016 and this will only accelerate, both in speed and in numbers. There are now 5 big clouds: Amazon AWS, Microsoft Azure, IBM, Google and Oracle.
But connecting these public clouds with in-house datacenter assets will be key. The gap between public and private clouds has never been smaller. AWS and VMware, 2 front runners, are already offering products to migrate between both environments. Network infrastructure (performance, latency) is now finally also capable of turning the hybrid cloud into reality. Numerous enterprises will realise that going to the public cloud isn’t the only option for their future infrastructure. I believe migration of storage and workloads will be one of the hottest features of Open vStorage in 2017. Hand in hand with the migration of workloads, we will see the birth of various new storage-as-a-service providers offering S3, secondary and even primary storage out of the public cloud.
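
To make that last point concrete, here is a minimal Python sketch (using the boto3 library) of why the S3 interface matters for the hybrid cloud: the same client code can talk to AWS or to any S3-compatible storage-as-a-service provider simply by swapping the endpoint. The endpoint URL and credentials below are placeholders, not a real provider.

    import boto3

    def make_s3_client(endpoint_url=None):
        # With endpoint_url=None boto3 talks to AWS S3 itself; pass another
        # provider's S3-compatible endpoint to send the same calls there.
        return boto3.client(
            "s3",
            endpoint_url=endpoint_url,
            aws_access_key_id="ACCESS_KEY",        # placeholder credentials
            aws_secret_access_key="SECRET_KEY",
        )

    aws = make_s3_client()
    private = make_s3_client("https://s3.example-private-cloud.local")

    # Identical calls against either backend:
    for client in (aws, private):
        client.put_object(Bucket="backups", Key="vm-image.raw", Body=b"...")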

On a side note, HPE (Helion), Cisco (Intercloud) and telecom giant Verizon all closed their public clouds in 2016. It will be good to keep an eye on these players to see what they are up to in 2017.

The end of Hyper-Convergence hype

In my storage market predictions for 2015 I predicted the rise of hyper-convergence. Hyper-converged solutions have lived up to those expectations and have become mature software solutions. I believe 2017 will mark a turning point for the hyper-convergence hype. Let’s sum up some reasons for the end of the hype cycle:

  • The hyper-converged market is mature and the top use cases have been identified: SMB environments, VDI and Remote Office/Branch Office (ROBO).
  • Private and public clouds are becoming more and more centralised and large-scale. More enterprises will come to understand that the one-size-fits-all, everything-in-a-single-box approach of hyper-converged systems doesn’t scale to the datacenter level. This is typically where hyper-converged solutions reach their limits.
  • The IT world works like a pendulum. Hyper-convergence brought flash into the server as a cache because the latency of fetching data over the network was too high. With RDMA and round trip times of 10 µsec and below, the network is no longer the bottleneck. The pendulum is now swinging back: the web-scalers, the very companies on which the hyper-convergence hype is modelled, want to disaggregate storage by moving flash out of each individual server into more flexible, centralised repositories (see the back-of-the-envelope sketch right after this list).
  • Flash, flash, flash: everything is becoming flash. As stated earlier, the local flash device was used to accelerate slow SATA drives. With all-flash versions, these hyper-converged solutions now go head to head with all-flash arrays.
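
A quick back-of-the-envelope check of that pendulum argument, assuming a NAND flash read latency of roughly 100 µsec (an illustrative assumption, not a measured figure) alongside the round trip times mentioned above:

    # Does the network still dominate flash latency?
    flash_read_us = 100.0    # assumed NAND flash read latency

    for network, rtt_us in [("TCP/IP", 500.0), ("RDMA", 10.0)]:
        total = flash_read_us + rtt_us
        overhead = rtt_us / flash_read_us * 100
        print(f"{network}: {total:.0f} us total, network adds {overhead:.0f}%")

    # TCP/IP: 600 us total, network adds 500%  -> keep flash inside the server
    # RDMA:   110 us total, network adds 10%   -> disaggregation becomes viable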

One of the leaders of the hyper-converged pack has already started to move in the converged infrastructure direction by releasing a storage-only appliance. It will be interesting to see who else follows.

With the new Fargo architecture, which is designed for large scale, multi petabyte, multi datacenter environments, we already capture the next trend: meshed, hyper-aggregated architectures. The Fargo release supports RDMA, allows you to build all-flash storage pools and incorporates a distributed cache across all flash in the datacenter. 100% future proof and ready to kickstart 2017.
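
For the curious, here is a conceptual Python sketch of that distributed cache idea: hash each block onto one of the flash nodes so that every server’s flash contributes to a single shared pool. This is an illustration only, not the actual Fargo implementation; all names are made up, and a real system would use consistent hashing so the mapping survives nodes joining or leaving.

    import hashlib

    class DistributedFlashCache:
        def __init__(self, nodes):
            self.nodes = sorted(nodes)                 # e.g. ["flash-node-1", ...]
            self.store = {node: {} for node in self.nodes}

        def _node_for(self, volume, block):
            # Deterministically map a (volume, block) pair onto a cache node.
            digest = hashlib.md5(f"{volume}:{block}".encode()).hexdigest()
            return self.nodes[int(digest, 16) % len(self.nodes)]

        def put(self, volume, block, data):
            self.store[self._node_for(volume, block)][(volume, block)] = data

        def get(self, volume, block):
            return self.store[self._node_for(volume, block)].get((volume, block))

    cache = DistributedFlashCache(["flash-node-1", "flash-node-2", "flash-node-3"])
    cache.put("vol1", 42, b"hot block")
    print(cache.get("vol1", 42))    # served from whichever node owns the block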

PS. If you want to run Open vStorage hyper-converged, feel free to do so. We have componentized Open vStorage so you can optimize it for your use case: run everything in a single box or spread the components across different servers or even datacenters!

IoT storage lakes

More and more devices are connected to the internet. This Internet of Things (IoT) is poised to generate a tremendous amount of data. Not convinced? Intel research, for example, estimates that autonomous cars will produce 4 terabytes of data per car per day. These big data lakes need a new type of storage: storage which is ultra-scalable, as traditional storage is simply not suited to processing this amount of data. On top of that, in 2017 we will see artificial intelligence increasingly being used to mine the data in these lakes, which means the performance of the storage needs to be able to serve real-time analytics. Since IoT devices can be located anywhere in the world, geo-redundancy and geo-distribution are also required. Basically, IoT use cases are a perfect match for the Open vStorage technology.
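
To make that scale concrete, a quick calculation starting from the Intel figure above; the fleet size and retention window are purely illustrative assumptions:

    # Scale check: 4 TB per autonomous car per day (Intel estimate).
    TB_PER_CAR_PER_DAY = 4
    fleet_size = 1_000_000       # hypothetical fleet of one million cars
    retention_days = 30          # hypothetical retention window

    daily_pb = TB_PER_CAR_PER_DAY * fleet_size / 1024     # TB -> PB
    stored_eb = daily_pb * retention_days / 1024          # PB -> EB

    print(f"Ingest: {daily_pb:,.0f} PB per day")          # ~3,906 PB/day
    print(f"30-day data lake: {stored_eb:,.1f} EB")       # ~114.4 EB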

Some interesting fields and industries to follow are consumer goods (smart thermostats, IP cameras, toys, …), automotive and healthcare.

Software for the hybrid cloud future

The demand for data storage shows no sign of slowing down. According to IDC, external disk storage capacity rose 27 percent in 2012 over the previous year to reach over 20 exabytes, worth a staggering $24 billion in annual spending. Media and entertainment as well as web-based services are consuming vast quantities of data storage, and the arrival of big data analytics is accelerating the growth further. But quantity of storage is simply not enough, especially as organisations increasingly need to share data or make it available for inspection by third parties such as regulators or partners.

Organisations with the largest data repositories are looking at spiralling costs. The requirements to keep data available, secure and in a form that can undergo analysis through technologies such as Hadoop are becoming daunting.

The largest cloud providers such as Amazon, Google and Microsoft are using their economies of scale to offer ‘pay as you go’ business models. But even their relatively low per-GB costs quickly add up as data volumes continue to grow without expiry dates. It is also clear that certain use cases do not work with basic cloud services.
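
A small illustration of how those per-GB costs compound when data keeps growing and nothing ever gets an expiry date; the price and growth rate are assumptions for the sake of the example, not any provider’s actual pricing:

    price_per_gb_month = 0.03    # assumed $/GB-month, for illustration only
    data_tb = 500.0              # starting data set in TB
    monthly_growth = 0.05        # 5% new data each month, nothing deleted

    total_cost = 0.0
    for month in range(36):      # three years
        total_cost += data_tb * 1024 * price_per_gb_month
        data_tb *= 1 + monthly_growth

    print(f"Data after 3 years: {data_tb:,.0f} TB")   # ~2,896 TB
    print(f"Cumulative spend: ${total_cost:,.0f}")    # ~$1,472,000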

For example, delivering VDI directly from Amazon S3 is feasible but probably not economically viable, while storing patient data in a public cloud would cause all kinds of compliance concerns for healthcare providers. Even traditional relational database driven applications are often better suited to on-premise big iron than to clouds. In fact, the examples where moving data into the cloud offers fewer benefits than keeping it on premise are numerous.

Instead, the hybrid model is starting to become the preferred route for organisations that need some data to be close by and performant while other sets reside in cheaper and slower clouds. The building of these hybrid clouds is an early example of the power of software defined storage.

Let’s look at a use case that many will recognise. A large enterprise has a production environment with several critical applications, including a transactional database, an ERP system and several web-based applications. The enterprise could be in financial services, life sciences or M&E; the industry doesn’t really matter, as the most critical element is the petabyte of data, spread between real-time and archive stores, that the business needs to run its day-to-day operations and meet its longer term commitments.

Like many large enterprises, it maintains its own data centre and several regional ICT hubs as well as a disaster recovery data centre. This disaster recovery position is kept perfectly in sync with the main centre, with the ability to fail over rapidly to ensure that the business can continue even through a physical site issue such as fire, flood or similar disruption.

Sound familiar? It should, as this setup is used by enterprises across the globe. Enterprises spend staggering amounts of money on redundant IT infrastructure which is often never used but is still critical to guard against failure. In an ideal world, organisations could instead replicate data into a cloud and spin up a replacement environment only when needed: a complete production environment that pops up on demand, based on an accurate snapshot of the current data position of the live environment.

Software defined storage, and Open vStorage in particular, has the capability to finally make this hybrid DR use case viable. As production environments move from physical to entirely virtualised workloads, SDS technologies that use time-based storage allow for zero-copy snapshots, cloning and thin provisioning, which are ideal for spinning up VMs from master templates in a fast and very efficient way. Better still, testing an entire disaster recovery position in a cloud becomes relatively straightforward and repeatable.
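
To illustrate why time-based, copy-on-write storage makes this so cheap, here is a conceptual Python sketch (not Open vStorage internals): a snapshot simply freezes the current write layer, and a clone is a new volume layered on top of it, so spinning up a DR copy moves metadata rather than data.

    class Snapshot:
        def __init__(self, blocks, parent):
            self.blocks = blocks          # frozen write layer
            self.parent = parent          # older snapshot underneath, if any

        def read(self, addr):
            if addr in self.blocks:
                return self.blocks[addr]
            return self.parent.read(addr) if self.parent else None

    class Volume:
        def __init__(self, parent=None):
            self.parent = parent          # snapshot this volume layers on
            self.blocks = {}              # only blocks written since then

        def write(self, addr, data):
            self.blocks[addr] = data

        def read(self, addr):
            if addr in self.blocks:
                return self.blocks[addr]
            return self.parent.read(addr) if self.parent else None

        def snapshot(self):
            # Zero-copy: current blocks become an immutable layer and new
            # writes land in a fresh, empty layer on top of it.
            snap = Snapshot(self.blocks, self.parent)
            self.parent, self.blocks = snap, {}
            return snap

    def clone(snap):
        # Thin clone: shares all data with the snapshot and only stores
        # its own divergent writes.
        return Volume(parent=snap)

    prod = Volume()
    prod.write(0, b"production data")
    snap = prod.snapshot()
    dr = clone(snap)                      # DR copy without copying blocks
    dr.write(1, b"dr-only change")
    print(prod.read(0) == dr.read(0))     # True: the block is shared, not duplicated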

But let’s not get ahead of ourselves. The technology is here, and small-scale examples of these hybrid clouds used for disaster recovery are starting to appear. The largest enterprises have entrenched positions and huge investments in the status quo, so they have less desire to move. However, the up-and-coming enterprises that have already embraced cloud and virtualisation will be able to benefit further from software defined storage in ways that will reduce their costs, giving them a competitive advantage that rivals will be forced to counter. DR is just one use case where software defined storage will shake up the industry; through the small-scale examples that we are working on with our partners, the theories will be tested and the cost savings proven in the real world.

In our view, the future is bright, and the cloud future is certainly hybrid!