@dormando, created May 30, 2018 00:14
memcached AWS quick test
1x r4.4xlarge - target (memcached server)
1x c5.4xlarge - source (mc-crusher load generator)
Instances may have been set to shared tenancy instead of dedicated; not sure what the original tests used.
server built from master (1.5.8)
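For anyone reproducing the server build, a sketch assuming the standard autotools flow for the memcached repo:

git clone https://github.com/memcached/memcached
cd memcached
./autogen.sh && ./configure && make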
start args:
./memcached -t 14 -m 4000
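(-t sets the worker thread count and -m the item memory limit in megabytes, so: 14 worker threads and a ~4GB cache.)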
mc-crusher first test:
cmd_get avg/s ( 4/s sample): 3743798.26587109 [3737161.72369726]
config:
send=ascii_get,recv=blind_read,conns=50,key_prefix=foobar,key_prealloc=0,pipelines=8
send=ascii_set,recv=blind_read,conns=10,key_prefix=foobar,key_prealloc=0,pipelines=4,stop_after=200000,usleep=1000,value_size=10
send=ascii_get,recv=blind_read,conns=50,key_prefix=foobar,key_prealloc=0,pipelines=8,thread=1
send=ascii_get,recv=blind_read,conns=50,key_prefix=foobar,key_prealloc=0,pipelines=8,thread=1
send=ascii_get,recv=blind_read,conns=50,key_prefix=foobar,key_prealloc=0,pipelines=8,thread=1
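To reproduce: mc-crusher reads a config file with one connection pool per line, like the block above. A sketch, assuming the config is saved as conf/quick_test (hypothetical name) and the usual invocation of config file plus target ip/port (exact argument syntax may vary by mc-crusher version):

./mc-crusher conf/quick_test 10.0.0.10 11211

Each line opens conns connections; pipelines stacks that many requests into each write, and recv=blind_read drains responses without parsing them, which keeps the client cheap. The set line self-limits via stop_after=200000 plus a 1ms usleep between writes.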
Just barely under 4 cores used on the source host; could push a bit more with -t 15 on the server and a fifth mc-crusher thread.
That's 3.7M reads/sec.
mc-crusher mget test:
get_hits avg/s ( 4/s sample): 8650179.84689241 [8631547.14019354]
config:
send=ascii_mget,recv=blind_read,conns=50,mget_count=50,key_prefix=foobar,key_prealloc=1
mc-crusher is hitting a one-core limit on the source host.
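Worth noting the unit change: with mget_count=50 each request carries 50 keys, so 8.65M get_hits/s is only about 173k mget requests/s on the wire (8,650,000 / 50), which is how a single client core can generate this much load.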
Changing the config to:
send=ascii_mget,recv=blind_read,conns=50,mget_count=50,key_prefix=foobar,key_prealloc=1
send=ascii_mget,recv=blind_read,conns=50,mget_count=50,key_prefix=foobar,key_prealloc=1,thread=1
... is getting:
get_hits avg/s ( 4/s sample): 14001498.4202697 [13927306.9099005]
That's 14M ops/sec.
mc-crusher is still at ~200% CPU, though memcached is mostly out of CPU. Added a third thread, and:
get_hits avg/s ( 4/s sample): 15159820.8825091 [15250279.4043931]
That's 15M ops/sec.
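Same arithmetic: 15.16M hits/s at 50 keys per mget is roughly 303k mget requests/s, spread across 150 connections and three client threads.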
Added some sets in, without stop_after:
send=ascii_set,recv=blind_read,conns=20,key_prefix=foobar,key_prealloc=0,pipelines=2,usleep=200,value_size=10
send=ascii_mget,recv=blind_read,conns=50,mget_count=50,key_prefix=foobar,key_prealloc=1
send=ascii_mget,recv=blind_read,conns=50,mget_count=50,key_prefix=foobar,key_prealloc=1,thread=1
send=ascii_mget,recv=blind_read,conns=50,mget_count=50,key_prefix=foobar,key_prealloc=1,thread=1
cmd_set avg/s ( 4/s sample): 161833.610922406 [165278.414289881]
get_hits avg/s ( 4/s sample): 13230508.3973327 [13184989.118056]
... so you can see sets are fairly expensive. I can dial the set rate up or down a bunch.
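The set rate also lines up with the pacing knobs: assuming usleep is a per-connection sleep between writes and pipelines is the number of commands per write, 20 conns x 2 sets per wakeup every 200us gives a ceiling of about 200k sets/s; the observed ~162k/s is that ceiling minus overhead.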
TL;DR:
3.7M keys/sec for pure read load without packet batching on responses.
15M keys/sec for pure read load (mget batching) from the r4.4xlarge instance.
13M keys/sec with 160k sets/sec mixed in.
All with a fair number of connections.