Gluster Experience (part one)

Recently we started to dabble with clustered file systems, in particular a rather new and promising one called Gluster.

So far, even though people suggest using the upcoming 2.0 version, we have already found some annoying glitches in 2.0.0rc1. In particular, the write-behind capability wasn't working at all, reducing the write speed to 3Mb/s (on a gigabit link to a cluster of 3 nodes, each with a theoretical peak speed of 180Mb/s). Luckily, they fixed it in their git repository. Sadly, the peak speed for a single node is about 45Mb/s for a single transfer and around 75Mb/s when aggregating 5 concurrent transfers; NFS on the same node reaches 95Mb/s on a single transfer.

Since it looks like a lot of time is wasted waiting somewhere (as the concurrent-transfer experiment hints), we will probably investigate further and, obviously, look for advice.

The current setup uses io-cache and write-behind as performance translators and maps the nodes as 6 bricks (2 bricks exported per node), replicating 3 ways (one copy on each node) and using DHT to join the 2 replicated groups.
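A client-side volfile for the layout described above might look roughly like the sketch below. This is a hypothetical reconstruction, not our actual configuration: the hostnames (node1..node3), brick names, and volume names are made up, and only one of the six protocol/client stanzas is written out in full.

```
# Hypothetical client volfile sketching the setup described above.
# Hostnames, brick names and volume names are illustrative only.

# One protocol/client volume per exported brick (6 in total);
# the other five are defined analogously for node1..node3, brick1/brick2.
volume node1-brick1
  type protocol/client
  option transport-type tcp
  option remote-host node1
  option remote-subvolume brick1
end-volume

# ... node2-brick1, node3-brick1, node1-brick2, node2-brick2, node3-brick2 ...

# Two replica groups, each mirroring one brick across all 3 nodes.
volume replicate1
  type cluster/replicate
  subvolumes node1-brick1 node2-brick1 node3-brick1
end-volume

volume replicate2
  type cluster/replicate
  subvolumes node1-brick2 node2-brick2 node3-brick2
end-volume

# DHT distributes files across the two replicated groups.
volume distribute
  type cluster/distribute
  subvolumes replicate1 replicate2
end-volume

# Performance translators stacked on top.
volume iocache
  type performance/io-cache
  subvolumes distribute
end-volume

volume writebehind
  type performance/write-behind
  subvolumes iocache
end-volume
```

The stacking order matters: write-behind sits above io-cache so that writes are batched before they hit the cache and the cluster translators below.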
