Gluster Experience (part two)

Luckily the issue I was experiencing with gluster 2.0.0rc1 was just an ugly bug, squashed in the 2.0.0rc2 release. I'm keeping the configuration I blogged about, and now we are thinking about topologies and expansion. The big issue right now is providing enough write bandwidth for replication, since a single Gbit link isn't enough. It's too late to order infiniband, so I'm stuck working out the best topology given a single writer, 70 readers, 3 gluster storage nodes, about 4 24-port gigabit switches with their 10Gbit expansion links unused, and at least 2 gigabit interfaces per node. More will follow soon. PS: I'm wondering how hard it would be to get a round-robin […]
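
The post is cut off, but assuming the round-robin in question is Linux NIC bonding in balance-rr mode (the only bonding mode that stripes a single flow across both gigabit interfaces), a minimal sketch of the classic modprobe + ifenslave setup could look like this; the interface names and address are made up:

    # /etc/modprobe.d/bonding.conf -- load the bonding driver in round-robin mode
    alias bond0 bonding
    options bonding mode=balance-rr miimon=100

    # bring the bond up and enslave the two gigabit NICs (example names/address)
    ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1

Keep in mind that balance-rr reorders packets and usually wants the two switch ports grouped as a static link aggregation, so whether it really doubles the replication write bandwidth depends on the switches.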

Gluster Experience (part one)

Recently we started to dabble with clustered file systems, in particular a rather new and promising one called gluster. So far, even though people suggest using the upcoming 2.0 version, we already found some annoying glitches in 2.0.0rc1: the writebehind capability wasn't working at all, reducing the write speed to 3MB/s (on a gigabit link to a cluster of 3 nodes, each with a theoretical peak speed of 180MB/s). Luckily they fixed it in their git; sadly the peak speed for a single node is about 45MB/s on a single transfer and around 75MB/s when aggregating 5 concurrent transfers, while nfs on the same node reaches 95MB/s on a single transfer. Since it looks like there is lots of time […]
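
For reference, write-behind in glusterfs 2.0 is enabled by stacking the performance/write-behind translator on top of the protocol/client volume in the client volfile; that is the translator that was misbehaving in 2.0.0rc1. A minimal sketch, with made-up host/volume names and option values (exact option names vary between releases, so check the write-behind docs for your rc):

    volume remote1
      type protocol/client
      option transport-type tcp
      option remote-host storage1        # example hostname
      option remote-subvolume brick      # example remote volume name
    end-volume

    volume writebehind
      type performance/write-behind
      option flush-behind on             # assumed tuning: flush in the background on close
      option aggregate-size 1MB          # assumed value: batch small writes before sending
      subvolumes remote1
    end-volume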