@kellabyte
Last active December 3, 2015 19:17
Max throughput benchmark
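
All runs drive the server through wrk's Lua scripting hook; the trailing "-- 32" is presumably the pipeline depth handed to the script. pipelined_get.lua itself isn't included in this gist, but here is a minimal sketch assuming it follows the standard wrk pipelining pattern (the actual script may differ):

-- Assumed sketch of pipelined_get.lua, not taken from the gist.
-- The first extra CLI argument (the "32" after "--") sets how many
-- GETs are concatenated into each batch.
init = function(args)
  local depth = tonumber(args[1]) or 1
  local reqs = {}
  for i = 1, depth do
    -- wrk.format builds a raw HTTP request string for this method/path
    reqs[i] = wrk.format("GET", "/")
  end
  batch = table.concat(reqs)
end

request = function()
  -- wrk calls this once per request slot; returning the whole batch
  -- makes each socket write carry `depth` pipelined GETs
  return batch
end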

Max throughput in master

./bin/wrk/wrk --script ./pipelined_get.lua --latency -d 5m -t 40 -c 760 http://server:8000 -- 32
Running 5m test @ http://server:8000
  40 threads and 760 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.16ms    6.70ms 223.68ms   93.20%
    Req/Sec   114.20k    28.04k  410.56k    72.96%
  Latency Distribution
     50%    2.82ms
     75%    4.91ms
     90%    8.58ms
     99%    0.00us
  1362072301 requests in 5.00m, 200.42GB read
Requests/sec: 4,538,752.08
Transfer/sec:    683.89MB
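
(In both throughput runs wrk prints 0.00us at the 99th percentile. Since the 90th percentile is several milliseconds, this looks like a wrk reporting artifact at these request rates, not a real sub-microsecond measurement.)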

Max throughput in the experiment

../bin/wrk/wrk --script ./pipelined_get.lua --latency -d 5m -t 40 -c 760 http://server:8000 -- 32
Running 5m test @ http://server:8000
  40 threads and 760 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.04ms    5.38ms 389.00ms   92.75%
    Req/Sec   233.38k    48.72k  458.99k    86.19%
  Latency Distribution
     50%    1.26ms
     75%    1.96ms
     90%    4.09ms
     99%    0.00us
  2781077938 requests in 5.00m, 409.23GB read
Requests/sec: 9,267,161.41
Transfer/sec:      1.36GB
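
At saturation the experiment sustains roughly 9.27 million requests/sec versus 4.54 million in master, about a 2x improvement, while median latency drops from 2.82ms to 1.26ms.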

Latency distribution with coordinated omission

These runs use wrk2 at a fixed request rate (-R), which records latency into an HdrHistogram and corrects for coordinated omission, so time spent queued behind slow responses is counted rather than hidden.

Latency distribution for 3.5 million requests/second in master

../bin/wrk2/wrk --script ./pipelined_get.lua --latency -d 10s -t 80 -c 512 -R 3500000 http://server:8000 -- 32
Running 10s test @ http://server:8000
  80 threads and 512 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   353.58ms  822.77ms   6.07s    89.58%
    Req/Sec       -nan      -nan   0.00      0.00%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    5.53ms
 75.000%  193.92ms
 90.000%    1.23s
 99.000%    4.11s
 99.900%    5.09s
 99.990%    5.71s
 99.999%    5.97s
100.000%    6.07s

----system---- ----total-cpu-usage---- ------memory-usage----- --io/total- -dsk/total- -net/total- ---system-- -pkt/total- ----tcp-sockets----
     time     |usr sys idl wai hiq siq| used  buff  cach  free| read  writ| read  writ| recv  send| int   csw |#recv #send|lis act syn tim clo
03-12 18:53:21| 32   4  62   0   0   2| 809M 95.3M 1203M  124G|   0     0 |   0     0 | 129M  432M| 306k  104k| 191k  341k|  3 482   0   0   0
03-12 18:53:22| 39   4  55   0   0   2| 907M 95.3M 1203M  124G|   0     0 |   0     0 | 150M  503M| 363k   97k| 218k  397k|  3 482   0   0   0
03-12 18:53:23| 36   4  58   0   0   2|1005M 95.3M 1203M  124G|   0     0 |   0     0 | 146M  491M| 346k   97k| 214k  387k|  3 482   0   0   0
03-12 18:53:24| 34   4  61   0   0   2|1097M 95.3M 1203M  124G|   0     0 |   0     0 | 143M  479M| 339k   93k| 209k  377k|  3 482   0   0   0
03-12 18:53:25| 42   4  52   0   0   2|1197M 95.3M 1203M  123G|   0     0 |   0     0 | 154M  517M| 361k  100k| 224k  408k|  3 482   0   0   0
03-12 18:53:26| 39   4  56   0   0   2|1298M 95.3M 1203M  123G|   0     0 |   0     0 | 151M  507M| 356k   95k| 221k  400k|  3 482   0   0   0
03-12 18:53:27| 42   4  52   0   0   2|1396M 95.3M 1203M  123G|   0     0 |   0     0 | 151M  506M| 361k   94k| 219k  399k|  3 482   0   0   0
03-12 18:53:28| 44   4  51   0   0   2|1496M 95.3M 1203M  123G|   0     0 |   0     0 | 153M  512M| 367k   99k| 223k  404k|  3 482   0   0   0
03-12 18:53:29| 34   4  60   0   0   2|1587M 95.3M 1203M  123G|   0     0 |   0     0 | 144M  481M| 355k   96k| 211k  379k|  3 482   0   0   0
03-12 18:53:30| 42   4  53   0   0   2|1689M 95.3M 1203M  123G|   0     0 |   0     0 | 151M  508M| 366k   91k| 221k  401k|  3 482   0   0   0

Latency distribution for 3.5 million requests/second in the experiment

../bin/wrk2/wrk --script ./pipelined_get.lua --latency -d 10s -t 80 -c 512 -R 3500000 http://server:8000 -- 32
Running 10s test @ http://server:8000
  80 threads and 512 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.82ms    3.00ms  64.86ms   94.17%
    Req/Sec       -nan      -nan   0.00      0.00%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    1.18ms
 75.000%    1.62ms
 90.000%    2.34ms
 99.000%   16.75ms
 99.900%   33.12ms
 99.990%   44.70ms
 99.999%   52.99ms
100.000%   64.89ms

----system---- ----total-cpu-usage---- ------memory-usage----- --io/total- -dsk/total- -net/total- ---system-- -pkt/total- ----tcp-sockets----
     time     |usr sys idl wai hiq siq| used  buff  cach  free| read  writ| read  writ| recv  send| int   csw |#recv #send|lis act syn tim clo
03-12 18:58:10|  0   0  99   0   0   0| 712M 95.3M 1203M  124G|0.05  0.22 | 684B   10k|   0     0 |5066  1368 |   0     0 |  3   2   0   0   0
03-12 18:58:11|  0   0 100   0   0   0| 711M 95.3M 1203M  124G|   0     0 |   0     0 |5538B 1588B| 425   975 |92.0  2.00 |  3   2   0   0   0
03-12 18:58:12|  5   2  92   0   0   1| 725M 95.3M 1203M  124G|   0     0 |   0     0 |  85M  285M| 220k  112k| 127k  226k|  3 482   0   0   1
03-12 18:58:13| 10   4  85   0   0   1| 724M 95.3M 1203M  124G|   0  2.00 |   0    32k| 166M  556M| 427k  173k| 243k  438k|  3 482   0   0   1
03-12 18:58:14| 10   4  85   0   0   1| 725M 95.3M 1203M  124G|   0     0 |   0     0 | 165M  555M| 435k  172k| 243k  438k|  3 482   0   0   1
03-12 18:58:15| 10   4  85   0   0   1| 723M 95.3M 1203M  124G|   0     0 |   0     0 | 166M  555M| 440k  172k| 243k  438k|  3 482   0   0   1
03-12 18:58:16| 10   4  86   0   0   1| 724M 95.3M 1203M  124G|   0     0 |   0     0 | 166M  555M| 415k  172k| 243k  438k|  3 482   0   0   1
03-12 18:58:17| 10   4  85   0   0   1| 723M 95.3M 1203M  124G|   0     0 |   0     0 | 165M  555M| 404k  172k| 243k  438k|  3 482   0   0   1
03-12 18:58:18| 10   4  85   0   0   1| 724M 95.3M 1203M  124G|   0  5.00 |   0    24k| 165M  555M| 404k  171k| 243k  438k|  3 482   0   0   1
03-12 18:58:19| 10   4  85   0   0   1| 724M 95.3M 1203M  124G|   0     0 |   0     0 | 166M  555M| 411k  171k| 243k  438k|  3 482   0   0   1
03-12 18:58:20| 10   4  85   0   0   1| 723M 95.3M 1203M  124G|   0     0 |   0     0 | 166M  555M| 412k  170k| 244k  438k|  3 482   0   0   1
03-12 18:58:21| 10   4  85   0   0   1| 722M 95.3M 1203M  124G|   0     0 |   0     0 | 166M  555M| 411k  171k| 244k  438k|  3 482   0   0   1
03-12 18:58:22|  5   2  93   0   0   1| 724M 95.3M 1203M  124G|   0     0 |   0     0 |  76M  256M| 190k   81k| 113k  202k|  3   2   0   0 391
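
Comparing the two captures at the same 3.5M requests/sec offered load: master runs at roughly 36-44% user CPU with resident memory climbing about 100MB per second, while the experiment sits around 10% user / 4% system with memory flat at ~724M.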
@DamianEdwards

Typo in the latency numbers for the experiment? You say 3.5 million, but the wrk param seems to be 5 million.

@kellabyte (Author)

@DamianEdwards I actually ran the benchmark at 5 million by accident, LOL. I just re-ran it and updated the results.
