@markpapadakis
Last active August 29, 2015 14:11
- Responding with a file that contains 'Hello World'
- Using the latest releases of all benchmarked web servers
- Servers selected because of claims/benchmarks made regarding their performance/speed
- HTTPSrv uses the same configuration (number of threads; responds with a similar file)
- Tried different configs for lighttpd, but it was still really slow
- Using a 12-core node at 2GHz with 16GB of RAM (roughly 2x slower than the system used in the test on that page)
- Except for nginx, the other web servers were tested with their default configuration (nginx with default settings ran slower)
- Clearly, G-WAN's claims are valid: it is faster by a wide margin than all other HTTP servers in this simple test case
  (except our optimized HTTPSrv, which is built to support a minimal feature set;
  its only real use here is serving static files and, optionally, resizing images before
  responding to the client)
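The setup described above can be sketched as a small shell script; the docroot path and the port in the example URL are hypothetical placeholders, not the exact ones used in these runs:

```shell
#!/bin/sh
# Sketch of the benchmark setup (paths/ports are placeholders).

# The 'Hello World' payload each server responds with:
DOCROOT="${DOCROOT:-/tmp/bench-docroot}"
mkdir -p "$DOCROOT"
printf 'Hello World' > "$DOCROOT/hello.html"

# The wrk invocation used throughout: 10 client threads, 1000 open
# connections, a target rate of 10M requests/sec (-R), over a 10s run.
WRK_CMD="./wrk -t 10 -c 1000 -R 10m"
echo "$WRK_CMD http://127.0.0.1:8080/hello.html"
```

Each server below was then pointed at its own copy of the payload on its own port.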
# nginx
(with configuration used verbatim from: http://lowlatencyweb.wordpress.com/2012/03/20/500000-requestssec-modern-http-servers-are-fast )
./wrk -t 10 -c 1000 -R 10m http://localhost:82/index.html
Running 10s test @ http://localhost:82/index.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.85s 2.73s 9.81s 57.80%
Req/Sec -nan -nan 0.00 0.00%
1798634 requests in 10.00s, 437.32MB read
Requests/sec: 179895.37
# nginx (default configuration)
./wrk -t 10 -c 1000 -R 10m http://localhost:82/index.html
Running 10s test @ http://127.0.0.1:82/index.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.87s 2.86s 9.93s 55.86%
Req/Sec -nan -nan 0.00 0.00%
700801 requests in 10.00s, 573.40MB read
Socket errors: connect 0, read 0, write 0, timeout 94 # timeouts
Requests/sec: 70099.01
# LighTTPD
./wrk -t 10 -c 1000 -R 10m http://127.0.0.1:83/hello.html
Running 10s test @ http://127.0.0.1:83/hello.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.72s 2.85s 9.96s 57.02%
Req/Sec -nan -nan 0.00 0.00%
257772 requests in 10.00s, 43.29MB read
Requests/sec: 25780.39
# monkey
./wrk -t 10 -c 1000 -R 10m http://127.0.0.1:2001/hello.html
Running 10s test @ http://127.0.0.1:2001/hello.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.58s 2.79s 9.84s 57.95%
Req/Sec -nan -nan 0.00 0.00%
1609880 requests in 10.00s, 277.91MB read
Socket errors: connect 0, read 0, write 0, timeout 867 #timeouts
Requests/sec: 160908.69
# g-wan
./wrk -t 10 -c 1000 -R 10m http://localhost:8080/100.html
Running 10s test @ http://localhost:8080/100.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.47s 2.77s 9.68s 58.05%
Req/Sec -nan -nan 0.00 0.00%
3164990 requests in 10.00s, 793.83MB read
Requests/sec: 316541.64
# nxweb
./wrk -t 10 -c 1000 -R 10m http://127.0.0.1:8055/hello.html
Running 10s test @ http://127.0.0.1:8055/hello.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.58s 2.78s 9.72s 57.74%
Req/Sec -nan -nan 0.00 0.00%
2688534 requests in 10.00s, 571.77MB read
Socket errors: connect 0, read 0, write 0, timeout 48
Requests/sec: 268928.24
# thttpd
./wrk -t 10 -c 1000 -R 10m http://127.0.0.1:88/hello.html
Running 10s test @ http://127.0.0.1:88/hello.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.72s 2.89s 9.97s 57.89%
Req/Sec -nan -nan 0.00 0.00%
156669 requests in 10.00s, 36.76MB read
Requests/sec: 15670.66
# Apache
./wrk -t 10 -c 1000 -R 10m http://127.0.0.1/index.html
Running 10s test @ http://127.0.0.1/index.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.82s 2.82s 9.90s 58.10%
Req/Sec -nan -nan 0.00 0.00%
620793 requests in 10.00s, 140.42MB read
Socket errors: connect 0, read 16, write 0, timeout 1711 # timeouts and errors
Requests/sec: 62103.30
# YAWS
./wrk -t 10 -c 1000 -R 10m http://127.0.0.1:89/hello.html
Running 10s test @ http://127.0.0.1:89/hello.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.83s 2.82s 9.90s 58.07%
Req/Sec -nan -nan 0.00 0.00%
274839 requests in 10.00s, 52.16MB read
Socket errors: connect 0, read 0, write 0, timeout 2505 # timeouts?
Requests/sec: 27488.43
# lwan (https://github.com/lpereira/lwan)
./wrk -t 10 -c 1000 -R 10m http://127.0.0.1:86/hello
Running 10s test @ http://127.0.0.1:86/hello
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.65s 2.85s 9.94s 57.73%
Req/Sec -nan -nan 0.00 0.00%
430443 requests in 10.00s, 97.29MB read
Requests/sec: 43062.06
# h2o (https://github.com/h2o/h2o)
./wrk -t 10 -c 1000 -R 10m http://127.0.0.1:87/hello.html
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.68s 2.81s 9.78s 57.63%
Req/Sec -nan -nan 0.00 0.00%
2025161 requests in 10.00s, 424.90MB read # lots of read errors, fixed by further tinkering with the configuration
Requests/sec: 202540.34
Transfer/sec: 42.49MB
# Link (our software load balancer; new release sometime soon)
./wrk -t 10 -c 1000 -R 10m http://127.0.0.1:8090/hello.html
Running 10s test @ http://127.0.0.1:8090/hello.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.66s 2.79s 9.74s 57.66%
Req/Sec -nan -nan 0.00 0.00%
2587261 requests in 10.00s, 298.56MB read
Requests/sec: 258741.37
# HTTPSrv (our simple HTTP server)
./wrk -t 10 -c 1000 -R 10m http://127.0.0.1:1027/bpassets/15121_SX256.jpg
Running 10s test @ http://127.0.0.1:1027/bpassets/15121_SX256.jpg
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.55s 2.77s 9.67s 57.60%
Req/Sec -nan -nan 0.00 0.00%
3251797 requests in 10.00s, 365.94MB read
Requests/sec: 325250.47
@lpereira commented:

Lwan results are quite weird. Was it built in Debug mode (the default, passing no parameters to CMake) or Release mode (by passing -DCMAKE_BUILD_TYPE=Release to CMake)?

@edsiper commented Jan 2, 2015:

Note: G-WAN is caching every single response. You can test using a larger file.
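Following that suggestion, a larger payload can be generated with something like the sketch below; the path and the 1 MiB size are arbitrary choices, picked only to exceed any small-response cache:

```shell
# Create a 1 MiB file so the server under test cannot satisfy the whole
# benchmark from a single tiny cached response.
dd if=/dev/urandom of=/tmp/large.html bs=1024 count=1024 2>/dev/null
wc -c < /tmp/large.html   # should report 1048576 bytes
```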
