OpenOnload Memcached Tuning Whitepaper setup details
Create a file named memcached.opf in /usr/libexec/onload/profiles/
Adjust the appropriate Onload tuning variables in memcached.opf, for example:
onload_set EF_POLL_USEC=100000
onload_set EF_POLL_SPIN=1
onload_set EF_EPOLL_SPIN=1
onload_set EF_SELECT_SPIN=1
onload_set EF_STACK_PER_THREAD=1
onload_set EF_NONAGLE_INFLIGHT_MAX=4
onload_set EF_TCP_SEND_SPIN=1
onload_set EF_TCP_RECV_SPIN=1
onload_set EF_PKT_WAIT_SPIN=1
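As an illustration only, the profile file could be created as follows (run as root; the path and settings are exactly those listed above):
# Sketch: write the Onload profile to the directory named above.
cat > /usr/libexec/onload/profiles/memcached.opf <<'EOF'
onload_set EF_POLL_USEC=100000
onload_set EF_POLL_SPIN=1
onload_set EF_EPOLL_SPIN=1
onload_set EF_SELECT_SPIN=1
onload_set EF_STACK_PER_THREAD=1
onload_set EF_NONAGLE_INFLIGHT_MAX=4
onload_set EF_TCP_SEND_SPIN=1
onload_set EF_TCP_RECV_SPIN=1
onload_set EF_PKT_WAIT_SPIN=1
EOF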
Set the firmware variant to low latency:
%sfboot firmware-variant=ultra-low-latency
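To confirm the setting, running sfboot with no arguments lists the current boot configuration per adapter (a sketch; a reboot may be required before a firmware-variant change takes effect):
# Sketch: list the current Solarflare boot configuration and check the variant.
sfboot | grep -i firmware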
Invoke the memcached server and client under onload with the profile set:
onload --profile=memcached memcached <options> -o hashpower=30
The hash table should be large enough that it does not require resizing during the benchmark; -o hashpower=30 pre-sizes it to 2^30 buckets.
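The benchmark client can be run under onload in the same way. A minimal sketch, assuming memtier_benchmark as the load generator (this document does not name a specific client) and a placeholder server address and port:
# Hypothetical client invocation; server address, port, thread and
# connection counts are illustrative only.
onload --profile=memcached memtier_benchmark -s 203.0.113.10 -p 11211 \
    -P memcache_text --ratio=1:10 -t 8 -c 32
Adjust --ratio to match the set/get mix used in the tuning steps below.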
For further tuning:
1) Run the benchmark with 0% set operations on N CPU cores.
2) Rerun the benchmark with a realistic percentage of set operations for the use case (many use cases have <<1% set operations).
If the results of Step 2 fall short of expectations, or drop by a factor of N compared to Step 1,
use N+1 separate memcached instances (see the sketch after these steps).
For example, a 7x drop would imply 8 memcached server instances of 8 threads each.
(It is assumed that clients understand the server key distribution.)
3) Increase the number of CPU cores in Step 1 and repeat the test to determine the number of cores needed.
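For the multi-instance case in Step 2, a minimal sketch of launching 8 separate memcached instances (matching the 7x-drop example above), each under onload with the profile and its own port; the port numbers, thread count and memory size are placeholders:
# Sketch: start 8 independent memcached instances on consecutive ports.
for i in $(seq 0 7); do
    onload --profile=memcached memcached -d -p $((11211 + i)) \
        -t 8 -m 4096 -o hashpower=30
done
Clients must then spread keys across the instances themselves, as noted above.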