Gluster Experience (part two)

Luckily, the issue I was experiencing with Gluster 2.0.0rc1 was just an ugly bug, squashed in the 2.0.0rc2 release. For now I'm keeping the configuration I blogged about, and we are thinking about topologies and expansion.

Right now the big issue is providing enough write bandwidth for replication, since a single Gbit link isn't enough. It's too late to order InfiniBand, so I'm stuck working out the best topology given what we have: a single writer, 70 readers, 3 storage (Gluster) nodes, about 4 24-port gigabit switches with their 10Gbit expansion links unused, and at least 2 gigabit interfaces per node.
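To see why the single Gbit link is the bottleneck: with client-side replication (as in Gluster's replicate/AFR translator), the writer pushes every byte to each replica itself, so its usable write throughput is the link speed divided by the replica count. A back-of-the-envelope sketch (the function name and figures are illustrative assumptions, not measurements):

```python
def effective_write_throughput(link_gbits: float, n_replicas: int) -> float:
    """Client-side replication sends each byte once per replica,
    so usable write throughput is link speed / replica count.
    Returns MB/s; 1 Gbit/s is ~125 MB/s on the wire, ignoring
    protocol overhead."""
    link_mbytes = link_gbits * 1000 / 8  # Gbit/s -> MB/s
    return link_mbytes / n_replicas

# One writer replicating to 3 storage nodes over a single Gbit NIC:
print(round(effective_write_throughput(1.0, 3), 1))  # ~41.7 MB/s
```

Bonding the two gigabit interfaces on the writer would roughly double that ceiling, which is part of why the topology question matters.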

More will follow soon.

PS: I’m wondering how hard it would be to write a round-robin translator that accelerates replicated writes by issuing each write from the client node to just one of the N replicating nodes, and then having the replicas sync among themselves automatically…
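The idea can be sketched minimally like this (all class and method names here are hypothetical; a real Gluster translator is written in C against the xlator API, and the fan-out would be asynchronous rather than the synchronous stand-in below). The point is that the client pays the link cost of one copy, and the chosen replica propagates the write to its peers over the storage-side links:

```python
import itertools

class Replica:
    """Toy stand-in for a storage node: just a name and a key/value store."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def apply(self, key, data):
        self.store[key] = data

class RoundRobinWriter:
    """Hypothetical client-side dispatcher: each write goes to exactly
    one replica, picked round-robin; that replica is then responsible
    for fanning the write out to its peers (modeled synchronously
    here for clarity)."""
    def __init__(self, replicas):
        self.replicas = list(replicas)
        self._cycle = itertools.cycle(self.replicas)

    def write(self, key, data):
        target = next(self._cycle)      # client sends the data once
        target.apply(key, data)
        for peer in self.replicas:      # server-side sync among replicas
            if peer is not target:
                peer.apply(key, data)
        return target.name

nodes = [Replica(n) for n in ("s1", "s2", "s3")]
writer = RoundRobinWriter(nodes)
primaries = [writer.write(f"k{i}", b"x") for i in range(3)]
print(primaries)  # each write lands on a different primary
```

The catch, of course, is consistency: until the replicas finish syncing, a reader could hit a node that hasn't seen the write yet, which is exactly the problem the existing replicate translator avoids by writing everywhere up front.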
