At Open vStorage we build large Open vStorage clusters for customers. To prevent errors and cut down the deployment time we don’t set up these clusters manually but automate the deployment through Ansible, a free software platform for configuring and managing IT environments.
Before we dive into the Ansible code, let’s first have a look at the architecture of these large clusters. For large setups we take the HyperScale approach, as we call it, and split up storage and compute in order to scale them independently. From experience we have learned that storage on average grows 3 times as fast as compute.
Such a cluster consists of 3 types of nodes: controller, compute and storage nodes.
- Controllers: 3 dedicated, hardware-optimized nodes to run the master services and hold the distributed DBs. There is no vPool configured on these nodes, so no VMs run on them. These nodes are equipped with a couple of large-capacity SATA drives for scrubbing.
- Compute: These nodes run the extra services, are configured with vPools and run the VMs. We typically use blades or 1U servers for these nodes as they are only equipped with SSDs or PCIe flash cards.
- Storage: The storage servers, 2U or 4U, are equipped with a lot of SATA drives but have less RAM and CPU.
The steps below show how to set up an Open vStorage cluster through Ansible (Ubuntu) on these 3 types of nodes. Automating Open vStorage can of course also be achieved in a similar fashion with other tools such as Puppet or Chef.
- Install Ubuntu 14.04 on all servers of the cluster. Username and password should be the same on all servers.
- Install Ansible on a PC or server you can use as Control Machine. The Control Machine is used to send instructions to all hosts in the Open vStorage cluster. Note that the Control Machine should not be part of the cluster, so it can later also be used for troubleshooting the Open vStorage cluster.
sudo apt-get install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
- Create /usr/lib/ansible, download the Open vStorage module to the Control Machine and put the module in /usr/lib/ansible.
sudo mkdir -p /usr/lib/ansible
git clone -b release1.0 https://github.com/openvstorage/dev_ops.git
sudo cp dev_ops/Ansible/openvstorage_module_project/openvstorage.py /usr/lib/ansible
- Edit the Ansible config file (/etc/ansible/ansible.cfg): uncomment the inventory and library settings and change the library path to /usr/lib/ansible.
Before:
#inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
After:
inventory = /etc/ansible/hosts
library = /usr/lib/ansible
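If you script the installation, the same change can be made non-interactively with sed. The snippet below is just a sketch, not part of the official guide; it demonstrates the substitution on a temporary copy. To edit the real file, point CFG at /etc/ansible/ansible.cfg and run the sed command with sudo.

```shell
# Sketch only: demonstrate the substitution on a temporary copy.
# For the real file: CFG=/etc/ansible/ansible.cfg and run sed with sudo.
CFG=$(mktemp)
printf '#inventory = /etc/ansible/hosts\n#library = /usr/share/my_modules/\n' > "$CFG"
sed -i \
  -e 's|^#inventory *=.*|inventory = /etc/ansible/hosts|' \
  -e 's|^#library *=.*|library = /usr/lib/ansible|' \
  "$CFG"
cat "$CFG"
```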
- Edit the Ansible inventory file (/etc/ansible/hosts) and add the controller, compute and storage nodes to describe the cluster, as in the example below:
# This is the default ansible 'hosts' file.
ctl01 ansible_host=10.100.69.171 hypervisor_name=mas01
ctl02 ansible_host=10.100.69.172 hypervisor_name=mas02
ctl03 ansible_host=10.100.69.173 hypervisor_name=mas03
cmp01 ansible_host=10.100.69.181 hypervisor_name=hyp01
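For reference, a fuller inventory for this layout could group the hosts per node type. Note that the group names and the storage-node entry below are hypothetical, not taken from the guide; use the group names the HyperScale playbook actually expects.

```ini
# Hypothetical sketch: group names and the str01 entry are illustrative.
[controllers]
ctl01 ansible_host=10.100.69.171 hypervisor_name=mas01
ctl02 ansible_host=10.100.69.172 hypervisor_name=mas02
ctl03 ansible_host=10.100.69.173 hypervisor_name=mas03

[computenodes]
cmp01 ansible_host=10.100.69.181 hypervisor_name=hyp01

[storagenodes]
str01 ansible_host=10.100.69.191
```

Once the inventory is in place, you can verify that Ansible reaches every node with ansible all -m ping -k (the -k flag prompts for the SSH password).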
- Execute the Open vStorage HyperScale playbook. It is advised to run the playbook with verbose output (-vvvv); the -k flag makes Ansible prompt for the SSH password.
ansible-playbook openvstorage_hyperscale_setup.yml -k -vvvv
The above playbook installs the necessary packages and runs ‘ovs setup’ on the controller, compute and storage nodes. The next steps are assigning roles to the SSDs and PCIe flash cards, creating the backend and creating the first vPool.