@PharkMillups
Created April 1, 2011 23:48
<TB123> Hi, I'm trying to do some benchmarking on Riak
and can't get more than 0.6 Gbit of write throughput (approx. 3 MB messages),
bitcask storage. Is that normal, or should I concentrate on tuning some parameters?
13:17 <TB123> there is only 4GB RAM, maybe that could be problem
13:19 <TB123> Gbit/s
13:23 <seancribbs> each message is 3MB?
13:27 <TB123> @sean: Yes.
13:28 <seancribbs> so, some questions. 1) how many nodes are you
running? 2) how are you running the benchmark
13:28 <seancribbs> 3) are you using the default n_val
13:30 <TB123> 1) currently on 1 node, to tune the configuration,
and then will move to 8 nodes to test scaling 2) one Java PBC
client with 10 threads connecting over a 1 Gbit network to the other. 3) yes, default
13:30 <TB123> re 2) I have tried to load from two different servers at the same time, but with the same result
13:32 <TB123> sorry, from two different clients
14:02 <seancribbs> TB123: sorry for the delay. I think you will
find performance less than expected until you reach 4+ nodes
14:02 <seancribbs> we recommend 5 nodes to start, absolute minimum of 3
14:08 <TB123> @seancribbs: OK. I'll do the same test on 6
nodes. My reason for testing on one node was to understand
what the throughput limit is and to tune performance in one place (node).
14:10 <TB123> @seancribbs: I assume this 0.6 Gbit/s limit is
caused by internal Riak overhead, memory locks, etc.
14:22 <seancribbs> TB123: not locks per se, but yes, single systems can become bottlenecks
14:26 <TB123> @seancribbs: OK, understand. Thanks.
14:26 <seancribbs> when you're at multiple nodes, spread the request
load across all nodes too, perhaps with an LB
14:28 <TB123> for my benchmarking purposes, I'll connect the same
number of threads to every node. I don't currently have any load balancer prepared.
14:31 <seancribbs> TB123: sounds good
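The setup TB123 describes (a multi-threaded client writing fixed-size values, with threads spread evenly across the nodes rather than going through a load balancer) can be sketched roughly as below. This is a minimal illustration, not the real benchmark: the `Store` interface and its `put` signature are hypothetical stand-ins for the actual Riak PBC client, and the `main` method wires in a no-op store so the sketch runs without a cluster.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the benchmark discussed above: N worker threads
// writing fixed-size values, assigned round-robin across the cluster nodes.
public class LoadGen {
    interface Store {                       // stand-in for a Riak PBC connection
        void put(String key, byte[] value); // hypothetical signature
    }

    static long run(List<Store> nodes, int threads, int opsPerThread,
                    int valueSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicLong bytesWritten = new AtomicLong();
        byte[] value = new byte[valueSize]; // shared read-only payload
        for (int t = 0; t < threads; t++) {
            final int id = t;
            pool.execute(() -> {
                // Round-robin: each thread picks one node by its id, so
                // request load is spread instead of hitting a single node.
                Store node = nodes.get(id % nodes.size());
                for (int i = 0; i < opsPerThread; i++) {
                    node.put("bench-" + id + "-" + i, value);
                    bytesWritten.addAndGet(value.length);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return bytesWritten.get();
    }

    public static void main(String[] args) throws Exception {
        // No-op store so the sketch is runnable without a cluster.
        Store noop = (key, value) -> {};
        long total = run(List.of(noop, noop, noop), 10, 100, 3 * 1024 * 1024);
        System.out.println(total); // prints 3145728000 (10 threads * 100 ops * 3 MiB)
    }
}
```

Counting bytes in an `AtomicLong` and dividing by wall-clock time gives the aggregate throughput to compare against the 0.6 Gbit/s figure above.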