Deploy an Open vStorage cluster with Ansible

At Open vStorage we build large Open vStorage clusters for customers. To prevent errors and cut down deployment time, we don't set up these clusters manually but automate the deployment through Ansible, a free software platform for configuring and managing IT environments.

Before we dive into the Ansible code, let's first look at the architecture of these large clusters. For large setups we take the converged (HyperScale, as we call it) approach and split storage and compute so that each can scale independently. From experience we have learned that storage on average grows 3 times as fast as compute.

We use 3 types of nodes: controller, compute and storage nodes.

  • Controllers: 3 dedicated, hardware-optimized nodes that run the master services and hold the distributed DBs. There is no vPool configured on these nodes, so no VMs run on them. These nodes are equipped with a couple of large-capacity SATA drives for scrubbing.
  • Compute: These nodes run the extra services, are configured with vPools and run the VMs. We typically use blades or 1U servers for these nodes as they are only equipped with SSDs or PCIe flash cards.
  • Storage: The storage servers, 2U or 4U, are equipped with a lot of SATA drives but have less RAM and CPU.

The steps below explain how to set up an Open vStorage cluster through Ansible (on Ubuntu) on these 3 types of nodes. Automating Open vStorage can of course also be achieved in a similar fashion with other tools such as Puppet or Chef.

  • Install Ubuntu 14.04 on all servers of the cluster. The username and password should be the same on all servers.
  • Install Ansible on a PC or server you can use as the Control Machine. The Control Machine is used to send instructions to all hosts in the Open vStorage cluster. Note that the Control Machine should not be part of the cluster, so it can later also be used for troubleshooting the Open vStorage cluster.

    sudo apt-get install software-properties-common
    sudo apt-add-repository ppa:ansible/ansible
    sudo apt-get update
    sudo apt-get install ansible

  • Create /usr/lib/ansible, download the Open vStorage module to the Control Machine and put the module in /usr/lib/ansible.

    mkdir /opt/openvstorage/
    cd /opt/openvstorage/
    git clone -b release1.0 https://github.com/openvstorage/dev_ops.git
    mkdir /usr/lib/ansible
    cp dev_ops/Ansible/openvstorage_module_project/openvstorage.py /usr/lib/ansible

  • Edit the Ansible config file (/etc/ansible/ansible.cfg): uncomment the inventory and library lines and point the library to /usr/lib/ansible.

    vim /etc/ansible/ansible.cfg

    ##change
    #inventory = /etc/ansible/hosts
    #library = /usr/share/my_modules/

    ##to
    inventory = /etc/ansible/hosts
    library = /usr/lib/ansible

  • Edit the Ansible inventory file (/etc/ansible/hosts) and add the controller, compute and storage nodes to describe the cluster according to the example below:

    #
    # This is the default ansible 'hosts' file.
    #

    #cluster overview

    [controllers]
    ctl01 ansible_host=10.100.69.171 hypervisor_name=mas01
    ctl02 ansible_host=10.100.69.172 hypervisor_name=mas02
    ctl03 ansible_host=10.100.69.173 hypervisor_name=mas03

    [computenodes]
    cmp01 ansible_host=10.100.69.181 hypervisor_name=hyp01

    [storagenodes]
    str01 ansible_host=10.100.69.191

    #cluster details
    [cluster:children]
    controllers
    computenodes
    storagenodes

    [cluster:vars]
    cluster_name=cluster100
    cluster_user=root
    cluster_password=rooter
    cluster_type=KVM
    install_master_ip=10.100.69.171

  • Execute the Open vStorage HyperScale playbook. It is advised to run the playbook in verbose mode (-vvvv). If something fails, the sanity checks sketched after this list can help pinpoint the problem.

    cd /opt/openvstorage/dev_ops/Ansible/hyperscale_project/
    ansible-playbook openvstorage_hyperscale_setup.yml -k -vvvv
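
Should the playbook fail, a few quick checks on the Control Machine can confirm that the module, the configuration and the SSH connectivity are all in place. The sketch below is a minimal example; it assumes the paths and inventory from the steps above and password-based SSH (hence the -k flag):

    # confirm Ansible is installed and print the version in use
    ansible --version

    # confirm the Open vStorage module sits in the configured library path
    ls -l /usr/lib/ansible/openvstorage.py

    # confirm the inventory and library settings were picked up
    grep -E '^(inventory|library)' /etc/ansible/ansible.cfg

    # confirm the Control Machine can reach every node in the [cluster] group
    ansible cluster -m ping -k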

The above playbook will install the necessary packages and run 'ovs setup' on the controller, compute and storage nodes. The next steps are assigning roles to the SSDs and PCIe flash cards, creating the backend and creating the first vPool.

2015, the Open vStorage predictions

The end of 2014 is near, so it is time to look forward and see what 2015 will have to offer. Some people say there's no point in making predictions and that it's not worth speculating because nothing is set in stone and things change all the time in storage land. Allow us to prove these people wrong by sharing our 2015 storage predictions*:

Acceleration of (hyper-)converged platforms
Converged platforms are here to stay. Converged solutions are even the fastest-growing segment for large storage vendors. But it will be players like Open vStorage who really break through in 2015. Hyperconverged solutions showed that there is an alternative to expensive SAN or all-flash solutions by adding a software-based caching layer to the host. Alas, these overhyped hyperconverged solutions are even more expensive per TB of storage than an already expensive all-flash array. In 2015 we will see more solutions that unite the good of the hyperconverged appliance and the converged platform, but at a significantly lower cost. Storage solutions that will be extremely hot and prove future-proof will have the following characteristics:

  • Caching inside the (hypervisor) host: caching on SSD or PCIe flash should be done inside the host and not a couple of hops down the network.
  • Scalability: all-flash will continue to be a waste of money due to the huge cost of flash. It is better to go for a tiered solution: Tier 1 on flash, Tier 2 on scalable, cheap (object) storage. If the Tier 1 and Tier 2 storage are inside the same appliance (hyperconverged), your scalability and flexibility will be limited. A much better solution is to keep the 2 tiers completely separate in different sets of appliances, managed by the same software layer.
  • Open and programmable: storage silos should be a thing of the past in 2015. Integration and openness should be key. Automation will be one of the hottest and most important features of a storage solution.

It should not come as a surprise that Open vStorage checks all of the above requirements.

OpenStack will dominate the cloud in 2015
This is probably the most evident prediction. During the OpenStack conference in Paris it was very clear that OpenStack will dominate the cloud for the next few years. In 2014 some new kids showed support for OpenStack, such as VMware (they understand that hypervisor software is now a commodity and that the data center control plane has become the high-margin battle arena). With VMware releasing its own OpenStack distribution, the OpenStack distribution battlefield will be crowded in 2015. We have Red Hat, Ubuntu, Mirantis, HP, VMware and many more, so it is safe to say that some consolidation will happen in this area.
A new OpenStack battlefield that will emerge in 2015 will be around OpenStack storage. Currently this area is dominated by the traditional arrays, but as software-defined storage solutions gain traction, solutions such as Open vStorage will grab a huge market share from these traditional vendors. They can compete with these SANs and all-flash arrays as they offer the same features, with the benefit of a much lower cost and TCO. While they may not top the total revenue achieved by the big vendors, they will definitely seize a large share of the OpenStack storage market.
If we add the fact that the Chinese government is promoting OpenStack and open source in general, you can bet your Christmas bonus on the fact that open-source (OpenStack) storage projects (Swift, Open vStorage, …) will be booming next year. These projects will get a lot of support from Chinese companies, both in adoption and development. It will be essential for traditional high-tech companies and countries not to miss the boat, as once it has left the harbor it will be very hard to catch up.

New markets for object storage vendors
2015 will be the year when object storage breaks out of its niche market of large video or object repositories. This has been said for many years now, but 2015 will be THE year, as many companies have started to realize the benefits achieved by implementing their first object storage projects. The next target for these enterprises is to make better use of their current object storage solution. Changing all of their legacy code will not happen in 2015, as this might impact their business. Solutions where they don't have to change their existing code base and can still benefit from the cost savings of object storage will be selected. Open vStorage is one of those solutions, but we are pretty sure other solutions, for example storage gateways to object storage, will flourish in 2015.
Another reason why object storage vendors will enter new markets is that currently too many players are after the same customer base. This means that if they want to keep growing and provide a return on the venture capital invested, new markets will definitely need to be addressed. The 15-25 billion dollar SAN market is a logical one to address. But entering this market will not be a bed of roses, as object storage vendors have no experience in this highly competitive market, or sometimes not even the right sales competencies and knowledge. They will have to look for partnerships with companies such as CloudFounders, who are seasoned in this area.

Seagate Kinetic drives
The Kinetic drives are the most exciting fundamental change in the storage industry in several years. These drives went GA at the end of 2014, but in 2015 we will gradually see new applications and solutions that make use of this technology. With these IP drives you will, for the first time, be able to manage storage as a scalable pool of disks. Open vStorage will support the Kinetic drives as a Tier 2 backend. This means virtual machines will have their hot data inside the host on flash and their cold data on a pool of Kinetic drives.

* We will look back on this post at the end of 2015 to see how good we scored.

Why both converged storage and external storage make sense

Today the peace in storage land has been disturbed by a post from Storage Swiss, The Problems With Server-Side Storage, Like VSAN, in which they highlight the problems with VSAN. At the same time Coho Data, developer of flash-tuned scale-out storage appliances, released a post about why converged storage, such as VSAN, only seems to be useful in a niche area. You can read the full post here. VMware had to respond to these “attacks” and released a post countering the problems raised by Storage Swiss. As with every black-and-white story, both sides have valid points, so we took the liberty of summarizing them. On top of that, we believe both sides can play a role in your storage needs, which is why Open vStorage, uniquely in storage land, allows you to mix and match converged storage and external storage.

Valid points for the converged stack:

  • VSAN and other converged software stacks can typically run on almost any hardware as they are software based.
  • Pricing for a software-based solution that delivers the same reliability as a hardware-based solution can be a fraction of the cost, and such solutions will become commodity over time. Take traditional storage redundancy and fault tolerance as an example: it is typically implemented via dual-controller hardware systems, while software-based techniques provide the same level of reliability at much lower cost and with much better scaling.
  • Converged stacks are VM-centric, treating a single volume as a LUN. This allows for flexibility and cost reduction: test tiers can have limited or no protection, while important volumes might be replicated multiple times across the storage pool and run with the best performance.
  • The network in a converged stack is also less complex, as only the hosts need to be connected. With external storage appliances you also need to take a dedicated storage network into account.
  • Easy to set up and manage, even for less experienced IT support, as may well be the case in a branch office.
  • Running storage as close as possible to the compute power makes sense. If it didn't, we would all be using Amazon S3 to power our VM storage needs.

Valid points for the external storage stack:

  • External storage arrays take resource-intensive tasks upon themselves, freeing up, for example, the processor and network resources needed to manage replication and recovery.
  • External storage arrays can be linked to different hypervisors, while converged infrastructure solutions are tightly linked to one hypervisor. For example, VSAN can only be used with the VMware ESXi hypervisor.
  • The storage network can be completely separated from the compute and outbound traffic.
  • Better usage of flash drives as they are shared between multiple hypervisors and writes don’t have to be mirrored to a second appliance.

The Open vStorage view:
Open vStorage believes there isn’t a ‘One Size Fits All’ solution to storage demands. Today, specific applications and functionalities demand specific storage. That is why Open vStorage is designed as a truly open solution that works with any storage backend: an existing distributed file system, external storage appliances or even object stores. This allows for more flexibility while protecting existing investments.

We are firm believers in the approach taken by VMware VSAN, where software running on a cluster of machines provides a reliable storage pool and the storage intelligence is moved closer to the compute layer. However, we believe this piece of the virtualization stack is too important for a proprietary solution that is hypervisor-specific, hardware-specific, management-stack-specific or storage-backend-specific. Our goal with Open vStorage is not only to build something open and non-proprietary, but something modular enough to let developers and hardware manufacturers innovate on top of Open vStorage. We believe that with contributors from the open-source community, Open vStorage can become far superior to proprietary offerings, whether hardware or software based.

To conclude, Open vStorage:

  • Leverages server based flash and storage back-ends to create shared virtual machine storage
  • Redundant, shared storage without the costs and scaling problems of dual controller systems
  • Works with any storage back-end, including filesystems, NAS, S3 compatible object storage, …
  • Works with KVM and VMware
  • Delivered under the Apache License, Version 2.0