@franleplant
Created July 1, 2017 22:37
server perf case study
# Control: Google
wrk -t12 -c400 -d30s http://google.com
Running 30s test @ http://google.com
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 519.53ms 204.18ms 2.00s 92.92%
Req/Sec 45.19 25.95 181.00 73.51%
14840 requests in 30.10s, 6.77MB read
Socket errors: connect 0, read 0, write 0, timeout 335
Requests/sec: 493.03
Transfer/sec: 230.46KB
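As a sanity check on these numbers: wrk's `Requests/sec` line is just completed requests divided by elapsed time, and the flags used throughout are `-t` (client threads), `-c` (open connections), and `-d` (duration). For the control run above:

```shell
# -t12 = 12 client threads, -c400 = 400 concurrent connections, -d30s = 30s run.
# Requests/sec = completed requests / elapsed time:
awk 'BEGIN { printf "%.2f\n", 14840 / 30.10 }'
# prints 493.02 (wrk reports 493.03, presumably from its unrounded internal timer)
```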
# franleplant.com Oct 29th
wrk -t12 -c400 -d30s http://www.franleplant.com
Running 30s test @ http://www.franleplant.com
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 217.78ms 58.73ms 848.88ms 97.08%
Req/Sec 4.66 0.81 5.00 87.59%
137 requests in 30.09s, 202.29KB read
Requests/sec: 4.55
Transfer/sec: 6.72KB
# localhost (ubuntu) Oct 29th (compiled with debug)
wrk -t12 -c400 -d30s http://localhost:8000
Running 30s test @ http://localhost:8000
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 732.91us 1.07ms 36.13ms 95.98%
Req/Sec 3.43k 2.02k 6.25k 52.08%
204920 requests in 30.06s, 295.49MB read
Requests/sec: 6816.64
Transfer/sec: 9.83MB
NOTE: This uses about as much CPU as it can, around 90%+.
# container (ubuntu) Oct 29th (compiled with release)
wrk -t12 -c400 -d30s http://localhost:8000
Running 30s test @ http://localhost:8000
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 92.61us 80.11us 19.72ms 97.56%
Req/Sec 51.31k 2.25k 54.09k 93.33%
1531609 requests in 30.05s, 2.16GB read
Requests/sec: 50965.91
Transfer/sec: 73.49MB
NOTE: this also uses about 90% CPU.
# inside the server (16:51 UTC)
./wrk -t12 -c400 -d30s http://localhost
Running 30s test @ http://localhost
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 123.17us 754.86us 185.50ms 99.90%
Req/Sec 8.13k 1.43k 9.41k 91.97%
242210 requests in 30.05s, 349.26MB read
Requests/sec: 8059.31
Transfer/sec: 11.62MB
wrk2
=======================
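Unlike wrk, wrk2 drives a fixed offered rate via `-R` and corrects latency for coordinated omission, so the number to watch is how far the achieved total falls below the offered load. (The absurd `9223372036854776.000ms` calibration lines in some runs below look like wrk2 printing a sentinel, roughly i64::MAX microseconds, when a thread got no responses during calibration.) For `-R2000` over 30s:

```shell
# Offered load: rate * duration. A large shortfall versus the
# "N requests in 30s" line means the target is saturated or stalling.
awk 'BEGIN { print 2000 * 30 }'
# prints 60000 expected requests at -R2000 for 30s
```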
# Control: Google
wrk2 -t2 -c100 -d30s -R2000 http://google.com
Running 30s test @ http://google.com
2 threads and 100 connections
Thread calibration: mean lat.: 5009.913ms, rate sampling interval: 17039ms
Thread calibration: mean lat.: 5041.426ms, rate sampling interval: 16941ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 18.23s 5.16s 27.33s 57.73%
Req/Sec 95.00 0.00 95.00 100.00%
5476 requests in 30.01s, 4.05MB read
Requests/sec: 182.45
Transfer/sec: 138.27KB
# local docker t12
wrk2 -t12 -c400 -d30s -R2000 http://localhost:8000
Running 30s test @ http://localhost:8000
12 threads and 400 connections
Thread calibration: mean lat.: 0.880ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 0.888ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 0.925ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 0.968ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 0.817ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 0.94ms 383.26us 2.06ms 67.27%
Req/Sec 2.15 14.73 111.00 97.91%
760 requests in 30.18s, 1.10MB read
Socket errors: connect 0, read 0, write 0, timeout 5705
Requests/sec: 25.19
Transfer/sec: 37.19KB
# local docker t2
wrk2 -t2 -c100 -d30s -R2000 http://localhost:8000
Running 30s test @ http://localhost:8000
2 threads and 100 connections
Thread calibration: mean lat.: 1.103ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 0.820ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 775.64us 403.38us 3.55ms 65.22%
Req/Sec 51.91 77.18 333.00 85.12%
3002 requests in 30.00s, 4.33MB read
Socket errors: connect 0, read 0, write 0, timeout 1330
Requests/sec: 100.06
Transfer/sec: 147.75KB
wrk2 -t2 -c100 -d30s -R1000 http://localhost:8000
Running 30s test @ http://localhost:8000
2 threads and 100 connections
Thread calibration: mean lat.: 0.888ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 0.972ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 814.17us 356.08us 2.55ms 69.24%
Req/Sec 26.07 61.63 333.00 82.78%
1502 requests in 30.00s, 2.17MB read
Socket errors: connect 0, read 0, write 0, timeout 1330
Requests/sec: 50.07
Transfer/sec: 73.92KB
wrk2 -t2 -c100 -d30s -R100 http://localhost:8000
Running 30s test @ http://localhost:8000
2 threads and 100 connections
Thread calibration: mean lat.: 3350.272ms, rate sampling interval: 13975ms
Thread calibration: mean lat.: 3292.473ms, rate sampling interval: 13983ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 7.70s 4.95s 18.97s 61.81%
Req/Sec 25.50 0.50 26.00 100.00%
1050 requests in 30.25s, 1.51MB read
Socket errors: connect 0, read 0, write 0, timeout 1355
Requests/sec: 34.71
Transfer/sec: 51.25KB
# franleplant.com (17:00 UTC)
wrk2 -t2 -c100 -d30s -R100 http://www.franleplant.com
Running 30s test @ http://www.franleplant.com
2 threads and 100 connections
Thread calibration: mean lat.: 1633.216ms, rate sampling interval: 6815ms
Thread calibration: mean lat.: 3629.371ms, rate sampling interval: 14409ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 10.15s 6.50s 25.20s 62.96%
Req/Sec 2.00 1.63 4.00 100.00%
106 requests in 30.25s, 156.52KB read
Socket errors: connect 0, read 0, write 0, timeout 1349
Requests/sec: 3.50
Transfer/sec: 5.17KB
# inside server (17:05 UTC)
./wrk -t2 -c100 -d30s -R100 http://localhost
Running 30s test @ http://localhost
2 threads and 100 connections
Thread calibration: mean lat.: 3177.740ms, rate sampling interval: 14008ms
Thread calibration: mean lat.: 2639.267ms, rate sampling interval: 12017ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 10.63s 7.21s 29.00s 61.06%
Req/Sec 7.50 0.50 8.00 100.00%
390 requests in 30.25s, 575.86KB read
Socket errors: connect 0, read 0, write 0, timeout 1412
Requests/sec: 12.89
Transfer/sec: 19.03KB
Nickel server configured to use 10 threads
======================
# Localhost
Before (duplicate of the t2 run above)
wrk2 -t2 -c100 -d30s -R2000 http://localhost:8000
Running 30s test @ http://localhost:8000
2 threads and 100 connections
Thread calibration: mean lat.: 1.103ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 0.820ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 775.64us 403.38us 3.55ms 65.22%
Req/Sec 51.91 77.18 333.00 85.12%
3002 requests in 30.00s, 4.33MB read
Socket errors: connect 0, read 0, write 0, timeout 1330
Requests/sec: 100.06
Transfer/sec: 147.75KB
After
wrk2 -t2 -c100 -d30s -R2000 http://localhost:8000
Running 30s test @ http://localhost:8000
2 threads and 100 connections
Thread calibration: mean lat.: 1.300ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 1.259ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.21ms 435.69us 2.59ms 66.73%
Req/Sec 103.66 94.40 333.00 19.80%
6002 requests in 30.00s, 8.65MB read
Socket errors: connect 0, read 0, write 0, timeout 1260
Requests/sec: 200.05
Transfer/sec: 295.39KB
And with 30 threads
wrk2 -t2 -c100 -d30s -R2000 http://localhost:8000
Running 30s test @ http://localhost:8000
2 threads and 100 connections
Thread calibration: mean lat.: 1.266ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 1.258ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.24ms 437.41us 7.79ms 68.94%
Req/Sec 314.28 105.19 666.00 49.36%
17993 requests in 30.00s, 25.95MB read
Socket errors: connect 0, read 0, write 0, timeout 980
Requests/sec: 599.72
Transfer/sec: 0.86MB
NOTE: in neither of those runs did CPU usage spike above 30%
And now with 400 connections
This is the sweet spot of failure
wrk2 -t2 -c400 -d30s -R2000 http://localhost:8000
Running 30s test @ http://localhost:8000
2 threads and 400 connections
Thread calibration: mean lat.: 1.338ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 1.360ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.25ms 452.30us 3.22ms 65.47%
Req/Sec 77.91 100.53 333.00 66.73%
4502 requests in 30.00s, 6.49MB read
Socket errors: connect 0, read 0, write 0, timeout 5172
Requests/sec: 150.06
Transfer/sec: 221.57KB
The server hangs completely but CPU usage stays below 10%. Why this is happening beats me.
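A hedged reading of the hang-with-idle-CPU symptom: with a blocking thread-per-connection pool, throughput is capped by Little's law (pool threads divided by mean service time), and connections beyond the pool size just sit queued without burning CPU. With the ~1.2ms latencies above even the nominal ceiling is high, so the hang looks more like 400 connections starving a small pool than a compute bottleneck. A sketch, assuming a 10-thread blocking pool and ~1.2ms mean service time:

```shell
# Little's law sketch: max req/s ~= worker_threads / mean_service_time
# (both numbers are assumptions for illustration, not measured here)
awk 'BEGIN { printf "%.0f\n", 10 / 0.0012 }'
# prints 8333 -- the nominal ceiling for a 10-thread pool; the observed
# ~150 req/s with thousands of timeouts points at queued connections,
# not a busy CPU
```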
And finally, this is what happens if we use nickel with 100 threads on my i5
wrk2 -t2 -c400 -d30s -R2000 http://localhost:8000
Running 30s test @ http://localhost:8000
2 threads and 400 connections
Thread calibration: mean lat.: 1.298ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 1.314ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.24ms 413.02us 3.99ms 67.17%
Req/Sec 261.41 102.12 666.00 72.88%
14982 requests in 30.00s, 21.60MB read
Socket errors: connect 0, read 0, write 0, timeout 4192
Requests/sec: 499.35
Transfer/sec: 737.32KB
This could be closer to real-world traffic:
wrk2 -t2 -c100 -d30s -R1000 http://localhost:8000
Running 30s test @ http://localhost:8000
2 threads and 100 connections
Thread calibration: mean lat.: 1.300ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 1.292ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.30ms 456.19us 4.31ms 64.74%
Req/Sec 528.46 124.52 1.00k 54.37%
29922 requests in 30.00s, 43.15MB read
Requests/sec: 997.35
Transfer/sec: 1.44MB
NOTE: CPU usage: 15%. WTF???
# Now the same test against franleplant.com
Before
wrk2 -t2 -c100 -d30s -R2000 http://www.franleplant.com
Running 30s test @ http://www.franleplant.com
2 threads and 100 connections
Thread calibration: mean lat.: 4136.445ms, rate sampling interval: 14008ms
Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 15.33s 4.31s 22.71s 58.33%
Req/Sec 0.00 0.09 4.00 99.95%
145 requests in 30.26s, 214.10KB read
Socket errors: connect 0, read 0, write 0, timeout 1358
Requests/sec: 4.79
Transfer/sec: 7.08KB
After
wrk2 -t2 -c100 -d30s -R1000 http://www.franleplant.com
Running 30s test @ http://www.franleplant.com
2 threads and 100 connections
Thread calibration: mean lat.: 3037.705ms, rate sampling interval: 10207ms
Thread calibration: mean lat.: 3021.235ms, rate sampling interval: 10149ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 10.84s 3.14s 16.92s 58.84%
Req/Sec 69.50 0.50 70.00 100.00%
4068 requests in 30.07s, 5.87MB read
Socket errors: connect 0, read 0, write 0, timeout 910
Requests/sec: 135.30
Transfer/sec: 199.78KB
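Comparing the live-server runs before and after the thread-pool change (a rough comparison, since the offered rate dropped from -R2000 to -R1000 between them, though both runs were saturated):

```shell
# throughput improvement: after / before, using the Requests/sec lines above
awk 'BEGIN { printf "%.1f\n", 135.30 / 4.79 }'
# prints 28.2 -- roughly a 28x throughput improvement
```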
The unofficial, unproven conclusion here: add more threads to nickel's thread pool.
Docker compose with nginx
==================
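For context, a compose file for this kind of setup looks roughly like the sketch below. The service names, ports, and the nginx proxying are assumptions for illustration, not the actual files used here:

```yaml
# docker-compose.yml (sketch, not the real file)
version: "2"
services:
  app:
    build: .          # the nickel server image, listening on 8000
    expose:
      - "8000"
  nginx:
    image: nginx
    ports:
      - "8080:80"     # host 8080 -> nginx, matching the AFTER benchmark URL
    depends_on:
      - app
```

with an nginx config whose location block does something like `proxy_pass http://app:8000;`.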
Before docker compose
wrk2 -t2 -c100 -d30s -R1000 http://localhost:8000
Running 30s test @ http://localhost:8000
2 threads and 100 connections
Thread calibration: mean lat.: 1.300ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 1.292ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.30ms 456.19us 4.31ms 64.74%
Req/Sec 528.46 124.52 1.00k 54.37%
29922 requests in 30.00s, 43.15MB read
Requests/sec: 997.35
Transfer/sec: 1.44MB
After
wrk2 -t2 -c100 -d30s -R1000 http://localhost:8080
Running 30s test @ http://localhost:8080
2 threads and 100 connections
Thread calibration: mean lat.: 1.062ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 1.017ms, rate sampling interval: 10ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 0.96ms 412.63us 4.23ms 64.45%
Req/Sec 526.05 116.76 0.90k 64.25%
29922 requests in 30.00s, 44.00MB read
Requests/sec: 997.32
Transfer/sec: 1.47MB
About the same!