Standing at the software defined crossroads…

It seems that the term “software defined…” is rapidly becoming the hottest topic across the IT landscape. Across networking, storage, security and compute, the idea of using commodity hardware and open standards to break the monopoly of hardware suppliers will radically change the face of information technology over the next decade. Software defined fundamentally changes the economics of computing and is potentially both a blessing and a curse to the vested interests within the IT industry.

Start with three clear evolutionary trends that have been progressing for the last three decades. The performance of Intel-based x86 architectures has grown almost exponentially, roughly doubling every 18 months. Alongside compute, storage capacity and density have grown at a similarly exponential rate on a roughly 24-month cycle. Both of these critical IT elements have also become more energy efficient and physically smaller. In the background, the performance of internal data buses and external network connections has improved, with speeds heading towards 100Gbps over simple Ethernet.
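To put those doubling cycles in perspective, here is a rough back-of-the-envelope sketch in Python of what sustained doubling compounds to over three decades. The 18- and 24-month cadences are assumptions taken from the figures above, not measured benchmarks.

```python
# Rough compounding illustration: assumes compute performance doubles every
# 18 months and storage capacity every 24 months, per the cycles cited above.

def growth_factor(years: float, doubling_months: float) -> float:
    """Cumulative growth multiple after `years` of doubling every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

years = 30  # three decades
print(f"Compute (18-month doubling): ~{growth_factor(years, 18):,.0f}x")
print(f"Storage (24-month doubling): ~{growth_factor(years, 24):,.0f}x")
```

Sustained over thirty years, an 18-month doubling works out to roughly a million-fold improvement and a 24-month doubling to tens of thousands of times the original capacity, which goes a long way towards explaining why commodity parts keep catching up with custom silicon.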

In previous times, to gain a competitive edge, vendors would invest in custom ASICs to make up for deficiencies in the performance and features offered by Intel’s modest CPUs, or for limitations in storage performance in the days before low-cost SSD and flash. The rise of the custom ASIC gave the leaders in areas such as storage and networking a clear and identifiable benefit over rivals, but at the cost of expensive R&D and manufacturing processes. For this edge, customers paid a price premium and were often locked into a technology path dependent on the whims of the vendor.

The seeds of change were sown with the arrival of the hypervisor. The proof point that a software layer could radically improve the utilisation and effectiveness of the humble server prompted a change of mindset. Why not the storage layer or the network? Why are we forced to pay a premium for proprietary technologies and custom hardware? Why are we locked into these closed stacks even as Intel, AMD, ARM, Western Digital, Matsushita and others spend tens of billions to develop faster “standards-based” CPUs, larger hard disks and better connectivity? Why indeed!

In a hardware-centric world, brand values like the old “nobody ever got fired for buying IBM” adage could make sense, but well-written software that can undergo rigorous testing breaks that dependency. It is still fair to say that the software defined revolution is at an early stage, but if you look at the frenzy of start-up acquisitions by heavyweights across networking and storage, it is clear that the firms with the biggest vested interests are acutely aware that they need to either get on board or get out of the way.

There is one significant danger. Our software defined future needs adherence to open standards, or at least an emergent standard that becomes the de facto baseline for interoperability. We are at the crossroads of a major shift, and without global support for initiatives such as OpenStack there is a danger that software defined may fragment back into the vendor-dominated silos of the legacy IT era. We and other pioneers in the software defined movement need to resist the temptation to close the gates on open APIs and backward compatibility. We must treat the rise of software defined as a real chance to change the status quo, for the benefit of both customers and an industry that thrives on true innovation.
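As a small illustration of what that open-API baseline buys us, here is a minimal sketch, assuming a hypothetical OpenStack cloud endpoint and token, of listing servers through the standard OpenStack Compute (Nova) REST API. Any tool that speaks this API works the same way regardless of whose hardware sits underneath.

```python
# Minimal sketch of vendor-neutral tooling against the OpenStack Compute API.
# The endpoint URL and token below are placeholders, not real values.

import requests

COMPUTE_ENDPOINT = "https://cloud.example.com:8774/v2.1"  # hypothetical endpoint
AUTH_TOKEN = "replace-with-a-real-keystone-token"         # hypothetical token

def list_servers() -> list[dict]:
    """Return the servers visible to this token via the standard Nova API."""
    resp = requests.get(
        f"{COMPUTE_ENDPOINT}/servers",
        headers={"X-Auth-Token": AUTH_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["servers"]

for server in list_servers():
    print(server["id"], server["name"])
```

The value is not in these dozen lines themselves but in the fact that they do not need to change when the vendor underneath does.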