Open vStorage 1.0.2 Beta

KVM
Open vStorage supports KVM as a hypervisor.

Swift Support
Open vStorage supports Swift (SAIO – Swift All In One, SwiftStack, …) as a storage backend.
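As a quick sanity check that a Swift backend (for example a SAIO test deployment) is reachable before pointing a vPool at it, a minimal sketch with python-swiftclient could look like the following. The auth URL is a placeholder for your own environment and this is a generic Swift check, not part of the Open vStorage tooling itself.

```python
# Minimal sketch: verify a Swift backend is reachable before using it
# as an Open vStorage storage backend. Uses python-swiftclient directly;
# the endpoint is a placeholder, the credentials are the SAIO defaults.
from swiftclient.client import Connection

conn = Connection(
    authurl="http://swift.example.com:8080/auth/v1.0",  # placeholder auth endpoint
    user="test:tester",                                  # default SAIO account:user
    key="testing",                                       # default SAIO key
    auth_version="1",
)

# List containers to confirm authentication and connectivity work.
headers, containers = conn.get_account()
print("Swift account reachable, %s container(s) found" % len(containers))
```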

Configuration of a vPool through the GUI
You can now add a new vPool to a VSA through the GUI. If the vPool is not a local filesystem, you can also stretch it across multiple VSAs.
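The sketch below illustrates the distinction: a local-filesystem vPool stays bound to a single VSA, while a vPool on a distributed backend such as Swift can be stretched across several. The classes and backend names are purely hypothetical and are not the Open vStorage API.

```python
# Hypothetical sketch of the vPool/VSA relationship described above;
# illustrative only, not the Open vStorage data model or API.
LOCAL_BACKENDS = {"local"}          # local filesystem: single VSA only
DISTRIBUTED_BACKENDS = {"swift"}    # object storage: can span VSAs

class VPool:
    def __init__(self, name, backend):
        self.name = name
        self.backend = backend
        self.vsas = []

    def extend_to(self, vsa):
        """Stretch the vPool to an extra VSA, unless it is a local filesystem."""
        if self.backend in LOCAL_BACKENDS and self.vsas:
            raise ValueError("A local-filesystem vPool cannot span multiple VSAs")
        self.vsas.append(vsa)

pool = VPool("vpool01", "swift")
pool.extend_to("vsa-01")
pool.extend_to("vsa-02")   # allowed: a Swift-backed vPool can be stretched
```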

Protection against moving a vDisk to a VSA where it can’t be served
Each VSA can serve only a limited number of vDisks. When a vDisk is moved between VSAs, e.g. due to a vMotion, the process first checks whether enough resources are available on the destination VSA to serve the vDisk. If there are not enough free resources on the destination VSA, the handover is aborted.
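The check can be illustrated with a small sketch. The objects and limits below are hypothetical and not the actual Open vStorage implementation; the point is simply that the destination VSA's free resources are compared against the vDisk's requirements before the handover is allowed.

```python
# Illustrative sketch of the pre-move check: a vDisk handover is only
# allowed when the destination VSA has enough free capacity to serve it.
# The data model (VSA, VDisk) and the limits are hypothetical.
from dataclasses import dataclass, field

@dataclass
class VDisk:
    name: str
    cache_mb: int          # cache the vDisk needs on its serving VSA

@dataclass
class VSA:
    name: str
    max_vdisks: int        # limit on vDisks a single VSA may serve
    cache_budget_mb: int   # total cache available for vDisks
    vdisks: list = field(default_factory=list)

    def can_serve(self, vdisk: VDisk) -> bool:
        used_cache = sum(d.cache_mb for d in self.vdisks)
        return (len(self.vdisks) < self.max_vdisks
                and used_cache + vdisk.cache_mb <= self.cache_budget_mb)

def move_vdisk(vdisk: VDisk, source: VSA, destination: VSA) -> bool:
    """Abort the handover when the destination VSA lacks free resources."""
    if not destination.can_serve(vdisk):
        print("Handover of %s to %s aborted: not enough free resources"
              % (vdisk.name, destination.name))
        return False
    source.vdisks.remove(vdisk)
    destination.vdisks.append(vdisk)
    return True

# Example: moving a large vDisk to a VSA with too little cache is refused.
big_disk = VDisk("vm-data", cache_mb=2048)
source = VSA("vsa-01", max_vdisks=10, cache_budget_mb=8192, vdisks=[big_disk])
destination = VSA("vsa-02", max_vdisks=2, cache_budget_mb=1024)
move_vdisk(big_disk, source, destination)   # handover aborted
```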

Software for the hybrid cloud future

The demand for data storage shows no sign of slowing down. According to IDC, in 2012 external disk storage capacity rose 27 percent over the previous year to reach over 20 exabytes, worth a staggering $24 billion in annual spending. Media and entertainment as well as web-based services are consuming vast quantities of data storage, and the arrival of big data analytics is accelerating the growth further. But quantity of storage is simply not enough, especially as organisations increasingly need to share data or make it available for inspection by third parties such as regulators or partners.

Organisations with the largest data repositories are looking at spiralling costs. The requirements to keep data available, secure and in a form that can undergo analysis through technologies such as Hadoop are becoming daunting.

The largest cloud providers such as Amazon, Google and Microsoft are using their economies of scale to offer ‘pay as you go’ type business models. But even their relatively low per GB costs quickly add up as data volumes continue to grow without expiry dates. It is also clear that certain use cases do not work with basic cloud services.

For example, delivering VDI directly from Amazon S3 is feasible but probably not economically viable, while storing patient data in a public cloud would raise all kinds of compliance concerns for healthcare providers. Even traditional relational database driven applications are often better suited to big iron on premises than to clouds. In fact, the examples where moving data into the cloud offers fewer benefits than keeping it on premises are numerous.

Instead, the hybrid model is starting to become the preferred route for organisations that need some data to stay close by and performant, while other sets reside in cheaper and slower clouds. The building of these hybrid clouds is an early example of the power of software defined storage.

Let’s consider a use case that many will recognise. A large enterprise has a production environment with several critical applications, including a transactional database, an ERP system and several web-based applications. The enterprise could be in financial services, life sciences or M&E; it doesn’t really matter which industry, as the most critical element is the petabyte of data, spread between real-time and archive stores, that the business needs to run its day-to-day operations and longer-term commitments.

Like many large enterprises, it maintains its own data centre and several regional ICT hubs as well as a disaster recovery data centre. This disaster recovery position is kept perfectly in sync with the main centre, with the ability to fail over rapidly to ensure that the business can continue even after a physical site issue such as fire, flood or similar disruption.

Sound familiar? It should, as this setup is used by enterprises across the globe. Enterprises collectively spend enormous sums on redundant IT infrastructure which is often never used but is still critical to guard against failure. In an ideal world, organisations could instead replicate data into a cloud and spin up a replacement environment only when needed: a complete production environment that pops up on demand, based on an accurate snapshot of the current data position of the live environment.

Software defined storage, and Open vStorage in particular, has the capability to make this hybrid DR use case finally viable. As production environments move from physical to entirely virtualised workloads, SDS technologies that use time-based storage allow for zero-copy snapshots, cloning and thin provisioning, which are ideal for spinning up VMs from master templates in a fast and very efficient way. Better still, testing an entire disaster recovery position in a cloud would be relatively straightforward and repeatable.
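To make the zero-copy idea concrete, here is a minimal copy-on-write sketch: a clone references its parent snapshot and only stores blocks that have been overwritten since, which is why spinning up many VMs from one master template is fast and space-efficient. The classes are illustrative only, not Open vStorage's internal volume driver.

```python
# Minimal copy-on-write sketch: a clone stores only the blocks written
# after it was created and falls back to its parent snapshot otherwise.
# Illustrative only; not Open vStorage's actual data structures.
class Snapshot:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # immutable view of the volume

    def read(self, lba):
        return self.blocks.get(lba, b"\x00")

class Clone:
    def __init__(self, parent: Snapshot):
        self.parent = parent            # zero data copied at clone time
        self.delta = {}                 # only overwritten blocks live here

    def write(self, lba, data):
        self.delta[lba] = data

    def read(self, lba):
        return self.delta.get(lba, self.parent.read(lba))

template = Snapshot({0: b"master-boot-block"})
vm_disk = Clone(template)               # "spinning up" a VM is just this
vm_disk.write(1, b"vm-specific-data")
assert vm_disk.read(0) == b"master-boot-block"
```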

But let’s not get ahead of ourselves. The technology is here, and small-scale examples of these hybrid clouds used for disaster recovery are starting to appear. The larger enterprises, however, have entrenched positions and huge investments in the status quo, so they have less desire to move. The up-and-coming enterprises that have already embraced cloud and virtualisation will be able to benefit further from software defined storage in ways that reduce their costs, giving them a competitive advantage that rivals will be forced to counter. DR is just one use case where software defined will shake up the industry, and through the small-scale examples that we are working on with our partners, the theories will be tested and the cost savings proven in the real world.

In our view, the future is bright, and the cloud future is certainly hybrid!

Standing at the software defined crossroads…

It seems that the term “software defined…” is rapidly becoming the hottest topic across the IT landscape. Across networking, storage, security and compute, the idea of using commodity hardware mixed with open standards to break the monopoly of hardware suppliers will radically change the face of information technology over the next decade. Software defined fundamentally changes the economics of computing and is potentially both a blessing and a curse to the vested interests within the IT industry.

Let’s start with three clear evolutionary trends that have progressed over the last three decades. The performance of Intel-based x86 architectures has grown almost exponentially every 18 months. Alongside compute, storage capacity and density have also grown at an almost exponential rate, on a 24-month cycle. Both these critical IT elements have also become more energy efficient and physically smaller. In the background, the performance of internal data buses and external network connections has improved, with speeds heading towards 100 Gb/s over simple Ethernet.

In previous times, to gain a competitive edge, vendors would invest in custom ASICs to make up for deficiencies in the performance and features offered by Intel’s modest CPUs, or for limitations in storage performance in the days before low-cost SSDs and flash. The rise of the custom ASIC provided the leaders across areas such as storage and networking with a clear and identifiable benefit over rivals, but at the cost of expensive R&D and manufacturing processes. For this edge, customers paid a price premium and were often locked into a technology path dependent on the whims of the vendor.

The seeds of change were sown with the arrival of the hypervisor. The proof point that a software layer could radically improve the utilisation and effectiveness of the humble server prompted a change of mind-set. Why not the storage layer or the network? Why are we forced to pay a premium for proprietary technologies and custom hardware? Why are we locked into these closed stacks even as Intel, AMD, ARM, Western Digital, Matsushita and others spend tens of billions to develop faster “standards-based” CPUs, larger hard disks and better connectivity? Why indeed!

In a hardware-centric world, brand values like the old “never got fired for buying IBM” adage could make sense, but well-written software that can undergo rigorous testing breaks that dependency. It is still fair to say that the software defined revolution is at an early stage, but if you look at the frenzy of start-up acquisitions by heavyweights across networking and storage, it’s clear that the firms with the biggest vested interests are acutely aware that they need to either get on board or get out of the way.

There is one significant danger. Our software defined future needs adherence to open standards, or at least an emergent standard that becomes the de facto baseline for interoperability. We are at the crossroads of a major shift, and without global support for initiatives such as OpenStack there is a danger that software defined may fragment back into the vendor-dominated silos of the legacy IT era. We and other pioneers in the software defined movement need to resist the temptation to close the gates on open APIs and backward compatibility; we must treat the rise of software defined as a real chance to change the status quo for the benefit of both customers and an industry that thrives on true innovation.