NSM and ABM, Arakoon teamwork

In an earlier post we shed some light on Arakoon, our own always-consistent distributed key-value database. Arakoon is used in many parts of the Open vStorage platform. One of its use cases is storing the metadata of the native ALBA object store. Do note that ALBA is NOT a general-purpose object store but one specifically crafted and optimized for Open vStorage. ALBA uses a collection of Arakoon databases to record where and how objects are stored on the disks in the backend. Typically the SCOs and TLogs of each vDisk end up in a separate bucket, a namespace, on the backend. For each object in a namespace there is a manifest that describes where and how the object is stored on the backend. To glue the namespaces, the manifests and the disks in the backend together, ALBA uses two types of Arakoon databases: the ALBA Backend Manager (ABM) and one or more NameSpace Managers (NSMs).

ALBA Manager

The ALBA Manager (ABM) is the entry point for all ALBA clients that want to store or retrieve something from the backend. The ALBA Manager DB knows which physical disks belong to the backend, which namespaces exist and on which NSM hosts they can be found.
To optimize the Arakoon DB it is loaded with the albamgr plugin, a collection of ABM-specific user functions. Typically there is only a single ABM per cluster.

NSM

A NameSpace Manager (NSM) is an Arakoon cluster which holds the manifests for the namespaces assigned to it. Which NSM manages which namespaces is registered with the ALBA Manager. The NSM host also offers the remote API used to manipulate most of the object metadata during normal operation. Its coordinates can be retrieved from the ALBA Manager by (proxy) clients and maintenance agents.

To optimize the Arakoon DB it is loaded with the nsm_host plugin, a collection of NSM-host-specific user functions. Typically there are multiple NSM clusters for a single ALBA backend. This allows the backend to scale both in capacity and in performance.

IO requests

Let’s have a look at the IO path. Whenever the Volume Driver needs to store an object on the backend, a SCO or a TLog, it hands the object to one of the ALBA proxies on the same host. The ALBA proxy contains an ALBA client which asks the ABM on which NSM and disks it can store the object. Once the object is stored on the disks, the manifest with the metadata is registered in the NSM. For performance reasons the different fragments of the object and the manifest can be cached by the ALBA proxy.

In case the Volume Driver needs data from the backend, because it is no longer in the write buffer, it asks the proxy to fetch the exact data by SCO location and offset. If the right fragments are in the fragment cache, the proxy returns the data to the Volume Driver immediately. Otherwise the proxy uses the manifest from its cache or, if the manifest isn’t cached, contacts the ABM to find the right NSM and retrieves the manifest from there. Based upon the manifest the ALBA client fetches the data it needs from the physical disks and provides it to the Volume Driver.
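The read path above can be sketched as follows. This is a minimal, self-contained model of the cache hierarchy, not the real ALBA code: all names (fragment_cache, manifest_cache, fetch_manifest_via_abm, fetch_from_disks) are illustrative stubs standing in for the actual proxy internals.

```ocaml
(* Hypothetical sketch of the proxy read path: fragment cache first,
   then the manifest cache, then a round-trip via the ABM/NSM. *)
let fragment_cache : (string, string) Hashtbl.t = Hashtbl.create 16
let manifest_cache : (string, string) Hashtbl.t = Hashtbl.create 16

let fetch_manifest_via_abm object_name =
  (* Stand-in for: ask the ABM which NSM holds the namespace,
     then fetch the manifest from that NSM. *)
  Printf.sprintf "manifest-for-%s" object_name

let fetch_from_disks manifest =
  (* Stand-in for: read the fragments listed in the manifest. *)
  Printf.sprintf "data-described-by-%s" manifest

let read object_name =
  match Hashtbl.find_opt fragment_cache object_name with
  | Some data -> data                       (* fastest path: fragment cache hit *)
  | None ->
    let manifest =
      match Hashtbl.find_opt manifest_cache object_name with
      | Some m -> m                         (* manifest cached locally *)
      | None ->
        let m = fetch_manifest_via_abm object_name in
        Hashtbl.add manifest_cache object_name m;
        m
    in
    fetch_from_disks manifest
```

The point of the structure is that the ABM is only contacted on a double cache miss; warm reads never leave the proxy.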

Arakoon, a battle hardened key-value DB

At Open vStorage we just love Arakoon, our in-house developed key-value DB. It is always consistent and hence prefers to die instead of giving you wrong data. Trust us, this is a good property if you are building a storage platform. It is also pretty fast, especially in a multi-datacenter topology. And above all, it is battle hardened over 7 years and it is now ROCK. SOLID.

We use Arakoon almost everywhere in Open vStorage. We use it to store the framework model, volume ownership and to keep track of the ALBA backend metadata. So it is time we told you a bit more about the Arakoon beast. Arakoon is developed by the Open vStorage team and the core has been made available as an open-source project on GitHub. It is already battle proven in several Open vStorage solutions and in projects by technology leaders such as Western Digital and iQIYI, a subsidiary of Baidu.

Arakoon aims to be easy to understand and use, whilst at the same time taking the following features into consideration:

  • Consistency: The system as a whole needs to provide a consistent view on the distributed state. This stems from the experience that eventual consistency is too heavy a burden for a user application to manage. A simple example is the retrieval of the value for a key, where you might receive none, one or multiple values depending on the weather conditions. The next question is always: why don’t I get a result? Is it because there is no value, or merely because I currently cannot retrieve it?
  • Conditional and Atomic Updates: We don’t need full-blown transactions (though they would be nice to have), but we do need updates that abort if the state is not what we expect it to be. So at least an atomic conditional update and an atomic multi-update are needed.
  • Robustness: The system must be able to cope with the failure of individual components, without concessions to consistency. However, whenever consistency can no longer be guaranteed, updates must simply fail.
  • Locality Control: When we deploy a system over 2 datacenters, we want guarantees that the entire state is indeed present in both datacenters. This is something we could not get from distributed hash tables using consistent hashing.
  • Healing & Recovery: Whenever a component dies and is subsequently revived or replaced, the system must be able to guide that component towards a situation where that node again fully participates. If this cannot be done fully automatically, then human intervention should be trivial.
  • Explicit Failure: Whenever there is something wrong, failure should propagate quite quickly.

Sounds interesting, right? Let’s share some more details on the Arakoon internals. It is a distributed key/value database. Since it is strongly consistent, it prefers to stop instead of providing outdated, faulty values, even when multiple components fail. To achieve this Arakoon uses a variation of the Paxos algorithm. An Arakoon cluster consists of a small set of nodes that all contain the full range of key-value pairs in an internal DB. Next to this DB each node contains the transaction log and a transaction queue. While each node of the cluster carries all the data, only one node is assigned to be the master node. The master node manages all the updates for all the clients. The nodes in the cluster vote to select the master node. As long as there is a majority to select a master, the Arakoon DB remains accessible. To make sure the Arakoon DB can survive a datacenter failure, the nodes of the cluster are spread across multiple datacenters.

The steps to store a key in Arakoon

Whenever a key is to be stored in the database, the following flow is executed:

  1. Updates to the Arakoon database are consistent. An Arakoon client always looks up the master of a cluster and then sends its request to the master. The master node of a cluster has a queue of all client requests. The moment a request is queued, the master node sends the request to all its slaves and writes the request to the Transaction Log (TLog). When the slaves receive a request, they store it in their own TLog and send an acknowledgement to the master.
  2. The master waits for the acknowledgements of the slaves. When it receives acknowledgements from half the nodes plus one, the master pushes the key/value pair into its database. In a five-node setup (one master and four slaves), the master must receive acknowledgements from two slaves before it writes the data to its database, since the master itself also counts as a node.
  3. After having written the data to its database, the master starts on the next request in its queue. When a slave receives this new request, it first writes the previous request to its own database before handling the new one. This way a slave is always certain that the master has successfully written the data to its own database.
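The quorum arithmetic in step 2 is worth writing down explicitly. A minimal sketch:

```ocaml
(* "Half the nodes plus one": the smallest majority of an n-node cluster. *)
let majority n = n / 2 + 1

(* The master's own write counts as one acknowledgement, so it only
   needs [majority n - 1] acks from slaves before committing. *)
let slave_acks_needed n = majority n - 1
```

For the five-node example in the text, `majority 5` is 3, so the master commits after `slave_acks_needed 5` = 2 slave acknowledgements.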

The benefits of using Arakoon

Scalability

Since the metadata of the ALBA backends is sharded across different Arakoon clusters, scaling the metadata, in capacity or in performance, is as simple as adding more Arakoon nodes. The whole platform has been designed to store gigabytes of metadata without the metadata becoming a performance bottleneck.

High Availability

It is quite clear that keeping the metadata safe is essential for any storage solution. Arakoon is designed to be used in highly available clusters. By default it stores 3 replicas of the metadata, but for extra resilience 5-way replication or more can also be configured. These replicas can even be stored across locations, allowing for a multi-site block storage cluster which can survive a datacenter loss.

Performance

Arakoon was designed with performance in mind. OCaml was selected as the programming language for its reliability and performance. OCaml provides powerful and succinct concurrency (cooperative multitasking), a must in distributed environments. To further boost performance, a forced-master capability is available which makes sure metadata reads are served by local Arakoon nodes in case of a multi-site block storage cluster. With a local Arakoon master, reads have sub-millisecond latency. As an example, Cassandra, another distributed DB used in many projects, achieves read consistency by reading the data from multiple datacenters, which leads to a latency that is typically higher than 10 milliseconds.

Int32 serialization in OCaml

Today, I’m going to talk a bit about the problem of serializing an int32 in OCaml. As I’m only working on Intel machines, I’m not interested in portability, and prefer little-endian serialization. This should be natural and easy.

The interface

val set32: string -> int -> int32 -> unit
val get32: string -> int -> int32

The microbenchmark

We’re going to store an int32 into a string, retrieve it, and check if it’s the same. We’re going to do this 1_000_000_000 times, see how long it took, and calculate the speed.

let benchmark n =
  let t0 = Unix.gettimeofday() in
  let s = String.create 4 in
  let limit = Int32.of_int n in
  let rec loop i32 =
    if i32 = limit
    then ()
    else
      let () = set32 s 0 i32 in
      let j32 = get32 s 0 in
      assert (i32 = j32);
      loop (Int32.succ i32)
  in
  let () = loop 0l in
  let t1 = Unix.gettimeofday () in
  let d = t1 -. t0 in
  let speed = float n /. d in
  let megaspeed = speed /. 1000000.0 in
  Printf.printf "%i took %f => %fe6/s\n" n d megaspeed


Attempt 0: Naive implementation

This is rather straightforward: mask, extract the char, store, shift and repeat. Retrieving the int32 from the string is the opposite. No rocket surgery here.
This is simple, readable code.

let set32_ocaml s pos (i:int32) =
  let (>:) = Int32.shift_right_logical in
  let (&:) = Int32.logand in
  let mask = Int32.of_int 0xff in
  let to_char v = Char.chr (Int32.to_int v) in
  let too_far = pos + 4 in
  let rec loop p i =
    if p = too_far
    then ()
    else
      let vp = i &: mask in
      let cp = to_char vp in
      let () = s.[p] <- cp in
      loop (p+1) (i >: 8)
  in
  loop pos i


let get32_ocaml s pos =
  let (<:) = Int32.shift_left in
  let (|:) = Int32.logor in
  let to_i32 c = Int32.of_int (Char.code c) in
  let rec loop acc p =
    if p < pos
    then acc
    else
      let cp = s.[p] in
      let vp = to_i32 cp in
      let acc' = (acc <: 8) |: vp in
      loop acc' (p-1)
  in
  loop 0l (pos + 3)

OCaml is a nice high level language, but this bit twiddling feels rather clumsy and ugly.
Anyway, let’s benchmark it.

Strategy       Speed
naive OCaml    16.0e6/s

A quick peek at how Thrift does it

let get_byte32 i b = 255 land (Int32.to_int (Int32.shift_right i (8*b)))
class trans = object(self)
  val ibyte = String.create 8
  ...
  method writeI32 i =
    let gb = get_byte32 i in
    for i=0 to 3 do
      ibyte.[3-i] <- char_of_int (gb i)
    done;
    trans#write ibyte 0 4

Ok, this uses the same strategy, but there’s a for loop there. The conversion is done in the ibyte buffer and then copied along. It’s a bit sub-awesome, but the extra copy of 4 bytes shouldn’t be too costly either.

Attempt 1: But in C, it would be way faster

It’s a platitude I hear a lot, but in this case it really should be faster. After all, if you want to retrieve an int32 from a string, all you need to do is cast the char* to an int32_t* and dereference it.

Let’s try this:

external set32 : string -> int -> int32 -> unit = "zooph_set32"
external get32 : string -> int -> int32         = "zooph_get32"
#include <stdint.h>
#include <stdio.h>
#include <caml/alloc.h>
#include <caml/memory.h>
#include <caml/mlvalues.h>

value zooph_set32(value vs, value vpos, value vi){
  CAMLparam3(vs, vpos, vi);
  char* buf = String_val(vs);
  int pos = Int_val(vpos);
  int32_t i = Int32_val(vi);

  char* buf_off = &buf[pos];
  int32_t* casted = (int32_t*)buf_off;
  casted[0] = i;
  CAMLreturn (Val_unit);
}

value zooph_get32(value vs,value vpos){
    CAMLparam2(vs,vpos);
    CAMLlocal1(result);
    char* buf = String_val(vs);
    int pos = Int_val(vpos);
    char* buf_off = &buf[pos];
    int32_t* casted = (int32_t*)buf_off;
    int32_t i32 = casted[0];
    result = caml_copy_int32(i32);
    CAMLreturn(result);
}

I called my compilation unit zooph.c, an onomatopoeia that pays tribute to how fast I expect this to be. There’s no loop, and the machine can do the transformation in one step. So it should roughly be about 4 times faster.
Let’s benchmark it.

Strategy       Speed
naive OCaml    16.0e6/s
C via FFI      32.3e6/s

Hm… it’s faster all right, but it’s also a bit disappointing. So what went wrong?

A quick look at the assembly code reveals a lot:

zooph_set32:
.LFB34:
	.cfi_startproc
	movl	8(%rdx), %eax
	sarq	%rsi
	movslq	%esi, %rsi
	movl	%eax, (%rdi,%rsi)
	movl	$1, %eax
	ret
	.cfi_endproc
.LFE34:
	.size	zooph_set32, .-zooph_set32
	.p2align 4,,15
	.globl	zooph_get32
	.type	zooph_get32, @function
zooph_get32:
.LFB35:
	.cfi_startproc
	pushq	%rbx
	.cfi_def_cfa_offset 16
	.cfi_offset 3, -16
	movq	%rsi, %rdx
	sarq	%rdx
	subq	$160, %rsp
	.cfi_def_cfa_offset 176
	movslq	%edx, %rdx
	movq	caml_local_roots(%rip), %rbx
	leaq	8(%rsp), %rcx
	movq	%rdi, 8(%rsp)
	movl	(%rdi,%rdx), %edi
	movq	%rsi, (%rsp)
	movq	$1, 32(%rsp)
	movq	%rcx, 40(%rsp)
	leaq	(%rsp), %rcx
	movq	%rbx, 16(%rsp)
	movq	$2, 24(%rsp)
	movq	$0, 152(%rsp)
	movq	%rcx, 48(%rsp)
	leaq	16(%rsp), %rcx
	movq	$1, 96(%rsp)
	movq	$1, 88(%rsp)
	movq	%rcx, 80(%rsp)
	leaq	80(%rsp), %rcx
	movq	%rcx, caml_local_roots(%rip)
	leaq	152(%rsp), %rcx
	movq	%rcx, 104(%rsp)
	call	caml_copy_int32
	movq	%rbx, caml_local_roots(%rip)
	addq	$160, %rsp
	.cfi_def_cfa_offset 16
	popq	%rbx
	.cfi_def_cfa_offset 8
	ret
	.cfi_endproc

While zooph_set32 seems to be in order, its counterpart is rather messy. On closer inspection, not even the set32 side is optimal. While OCaml’s FFI allows smooth interaction (at least compared to JNI) with native code in other languages, it also installs a firm border across which no inlining is possible (not with OCaml, that is).

Let’s take a look at how the benchmark code calls this.

.L177:
	movq	%rbx, 8(%rsp)
	movq	%rax, 0(%rsp)
	movq	$1, %rsi
	movq	16(%rbx), %rdi
	movq	%rax, %rdx
	movq	zooph_set32@GOTPCREL(%rip), %rax
	call	caml_c_call@PLT
.L179:
	movq	caml_young_ptr@GOTPCREL(%rip), %r11
	movq    (%r11), %r15
	movq	$1, %rsi
	movq	8(%rsp), %rax
	movq	16(%rax), %rdi
	movq	zooph_get32@GOTPCREL(%rip), %rax
	call	caml_c_call@PLT
.L180:
	movq	caml_young_ptr@GOTPCREL(%rip), %r11
	movq    (%r11), %r15
	movslq	8(%rax), %rax
	movq	0(%rsp), %rdi
	movslq	8(%rdi), %rbx
	cmpq	%rax, %rbx
	je	.L176

You see stuff being pushed on the stack before the call. For raw speed, you don’t want this to happen. For raw speed, you don’t even want a call.
To get there, you need to translate the benchmark to C too. I’m not going to bother, because I have another trick ready.

Attempt 2: OCaml 4.01 primitives

OCaml 4.01 got released recently, and there’s a little entry in the release notes.

PR#5771: Add primitives for reading 2, 4, 8 bytes in strings and bigarrays
(Pierre Chambart)

However, for some reason they are not really exposed, and I had to dig to find them. Using them, however, is trivial.

external get32_prim : string -> int -> int32         = "%caml_string_get32"
external set32_prim : string -> int -> int32 -> unit = "%caml_string_set32"

That’s all there is to it. Basically, you say that you know that the compiler knows how to do this, and that from now on, you want to do that too.
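For completeness, here is a round-trip check of these primitives. Note one assumption: on current OCaml versions strings are immutable, so the writable side below goes through bytes and the corresponding %caml_bytes_* primitive names; the string-based externals in the post are the 4.01-era spelling of the same thing.

```ocaml
(* The 4-byte primitives, bytes flavour; they read/write native-endian. *)
external get32_prim : bytes -> int -> int32         = "%caml_bytes_get32"
external set32_prim : bytes -> int -> int32 -> unit = "%caml_bytes_set32"

let () =
  let b = Bytes.create 4 in
  set32_prim b 0 0x01020304l;
  assert (get32_prim b 0 = 0x01020304l);
  (* native-endian: on a little-endian Intel machine the low byte comes first *)
  assert (Bytes.get b 0 = '\x04')
```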
Let’s benchmark it.

Strategy                 Speed
naive OCaml              16.0e6/s
C via FFI                32.3e6/s
OCaml with primitives    139e6/s

Wow.

Closing words

I’ve put the code for this on GitHub: https://github.com/toolslive/int32_blog. Anyway, we need to (de)serialize int64 values as well. Determining the speedup there is left as an exercise for the reader (tip: it’s even better).
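The measurement is the exercise; the declarations themselves look just like the 32-bit case. A minimal sketch, again using the bytes-flavoured primitive names available on current OCaml:

```ocaml
(* 8-byte analogues of the primitives used above; native-endian. *)
external get64_prim : bytes -> int -> int64         = "%caml_bytes_get64"
external set64_prim : bytes -> int -> int64 -> unit = "%caml_bytes_set64"

let () =
  let b = Bytes.create 8 in
  set64_prim b 0 0x1122334455667788L;
  assert (get64_prim b 0 = 0x1122334455667788L)
```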

I think some people will feel the urge to apply this to their serialization code as well.

Have fun,

Romain.