Redis Remote Procedure Calls (RPC)

While I’m not sure whether it is a good idea or a really stupid one, I decided to start implementing an RPC library on top of the Redis database.

In theory it should be a good idea: you get a very fast, persistent, multi-consumer queue of tasks (like RabbitMQ or other AMQP brokers) which can be extended to also send answers back to the caller. Using Redis also makes it easy to distribute work over multiple servers: you simply start another worker, without any change to the code itself.

If you want to check out REPC (yes, this is the only name I have been able to think of…), you can take a look at https://github.com/amol-/repc

The serialization format for data is JSON, simply because it was quick to get working. Currently my main goal is something with a very small code footprint that is reliable enough to be used.
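To make the idea concrete, here is a minimal sketch of how a JSON-encoded call could travel over Redis lists. Note that this is my own illustration, not REPC's actual wire format: the message fields (`method`, `args`, `reply_to`) and the key names are assumptions.

```python
import json

# Hypothetical message format (not necessarily what REPC uses): a request
# carries the method name, its arguments, and the name of a dedicated Redis
# list where the worker should RPUSH the reply.

def build_request(method, args, reply_to):
    """Serialize an RPC request into the JSON payload pushed on the queue."""
    return json.dumps({'method': method, 'args': args, 'reply_to': reply_to})

def handle_request(payload, handlers):
    """Decode a request, dispatch it to a handler, return (reply key, reply)."""
    msg = json.loads(payload)
    result = handlers[msg['method']](*msg['args'])
    return msg['reply_to'], json.dumps({'result': result})

# The transport itself would be a pair of Redis lists: the client RPUSHes the
# request onto a shared queue and BLPOPs its private reply list, while each
# worker BLPOPs the shared queue -- this is what makes it a multi-consumer
# queue that also gives answers back.
```

Run locally, without any Redis involved, the round trip looks like this:

```python
reply_to, reply = handle_request(
    build_request('add', [1, 2], 'reply:42'),
    {'add': lambda a, b: a + b})
```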

Redis and MongoDB insertion performance analysis

Recently we had to design a piece of software where reads can be slow, but writes need to be as fast as possible. Starting from this requirement we considered which of Redis and MongoDB would better fit the problem. Redis should be the obvious choice, as its simpler data structures should make it lightning fast, and that is in fact true, but we found a few interesting things that we would like to share.

This first graph compares MongoDB insert with Redis RPUSH.
Up to 2000 entries the two are roughly equivalent; then Redis starts to pull ahead, usually ending up about twice as fast as MongoDB. I expected this, and I have to say that antirez did a good job in designing the Redis model; in some situations it is the perfect match.
Still, I would have expected MongoDB to be even slower, considering the features a MongoDB collection offers over a simple list.

This second graph compares Redis RPUSH vs MongoDB $push vs MongoDB insert, and I find it really interesting.
Up to 5000 entries MongoDB $push is faster even than Redis RPUSH; then it becomes incredibly slow. Probably the MongoDB array type has linear insertion time, so each push gets slower as the array grows. MongoDB might gain some performance by exposing a list type with constant-time insertion, but even the linear-time array type (which can guarantee constant-time look-up) has its applications for small sets of data.
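A rough way to see why linear insertion time hurts: if every $push costs work proportional to the number of elements already in the array (as if the embedded array were rewritten on each update), the total cost of n pushes grows quadratically, while a constant-time append grows linearly. This is only a cost model in plain Python, not a measurement of MongoDB internals.

```python
def total_cost_constant_append(n):
    # O(1) per append: total work proportional to n
    return n

def total_cost_growing_append(n):
    # cost of the i-th push proportional to the i elements already stored
    return sum(range(n))  # 0 + 1 + ... + (n - 1), i.e. roughly n**2 / 2
```

Doubling n doubles the first total but roughly quadruples the second, which matches the sudden slowdown seen past a few thousand entries.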

As usual, I would like to point out that these benchmarks have no real scientific value; they have been performed just out of curiosity.

Here are the three benchmark snippets:

# Benchmark 1: Redis RPUSH
import redis, time

MAX_NUMS = 1000

r = redis.Redis(host='localhost', port=6379, db=0)
r.delete('list')  # start from an empty list

nums = range(MAX_NUMS)
clock_start = time.process_time()
time_start = time.time()
for i in nums:
    r.rpush('list', i)
time_end = time.time()
clock_end = time.process_time()

print('TOTAL CLOCK', clock_end - clock_start)
print('TOTAL TIME', time_end - time_start)

# Benchmark 2: MongoDB insert (one document per value)
import pymongo, time

MAX_NUMS = 1000

con = pymongo.MongoClient()
db = con.test_db
db.testcol.delete_many({})
db.testlist.delete_many({})

nums = range(MAX_NUMS)
clock_start = time.process_time()
time_start = time.time()
for i in nums:
    db.testlist.insert_one({'v': i})
time_end = time.time()
clock_end = time.process_time()

print('TOTAL CLOCK', clock_end - clock_start)
print('TOTAL TIME', time_end - time_start)

# Benchmark 3: MongoDB $push (append to an embedded array)
import pymongo, time

MAX_NUMS = 1000

con = pymongo.MongoClient()
db = con.test_db
db.testcol.delete_many({})
db.testlist.delete_many({})
oid = db.testcol.insert_one({'name': 'list'}).inserted_id

nums = range(MAX_NUMS)
clock_start = time.process_time()
time_start = time.time()
for i in nums:
    db.testcol.update_one({'_id': oid}, {'$push': {'values': i}})
time_end = time.time()
clock_end = time.process_time()

print('TOTAL CLOCK', clock_end - clock_start)
print('TOTAL TIME', time_end - time_start)
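Since the requirement was fast writes, it is worth noting that all three loops above pay one network round trip per value. The sketch below batches the Redis variant using redis-py's pipeline support; the batch size and key name are arbitrary choices of mine, and the part that actually talks to Redis only runs when a local server is available.

```python
def chunked(seq, size):
    """Split a sequence into lists of at most `size` items."""
    seq = list(seq)
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def pipelined_rpush(r, key, values, batch=500):
    """RPUSH values in batches, paying one round trip per batch of commands."""
    for group in chunked(values, batch):
        pipe = r.pipeline()
        for v in group:
            pipe.rpush(key, v)
        pipe.execute()

if __name__ == '__main__':
    # requires a local Redis server and the redis-py package
    import redis
    r = redis.Redis(host='localhost', port=6379, db=0)
    r.delete('list')
    pipelined_rpush(r, 'list', range(1000))
```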