To test whether it makes a suitable HTTP front end for Dazzle, I hooked up libuv, the C async I/O library behind node.js, which uses IOCP on Windows and epoll/kqueue/event ports/etc. on Unix systems, to handle the HTTP traffic.
There is no V8 JavaScript engine involved, so this isn't node.js proper: just libuv's async I/O event loop plus some C code for the HTTP parsing.
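For reference, a front end of this shape fits in one small C file. The sketch below is mine, not Dazzle's actual code (handler names and the canned response are my assumptions); it shows the libuv pattern involved: an event loop, a listening TCP handle, and read/write callbacks that answer every read with a fixed "Hello world" response instead of doing real HTTP parsing.

```c
#include <stdlib.h>
#include <uv.h>

/* Canned response; Content-Length: 13 matches "Hello world\r\n". */
#define RESPONSE \
    "HTTP/1.1 200 OK\r\n" \
    "Content-Length: 13\r\n" \
    "Content-Type: text/plain\r\n" \
    "\r\n" \
    "Hello world\r\n"

static uv_loop_t *loop;

static void alloc_cb(uv_handle_t *h, size_t suggested, uv_buf_t *buf) {
    buf->base = malloc(suggested);
    buf->len  = (unsigned int)suggested;
}

static void close_cb(uv_handle_t *h) { free(h); }
static void write_cb(uv_write_t *req, int status) { free(req); }

static void read_cb(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf) {
    if (nread > 0) {
        /* A real front end would run an HTTP parser over buf here;
           this sketch just replies to every read. */
        uv_write_t *req = malloc(sizeof *req);
        uv_buf_t out = uv_buf_init((char *)RESPONSE, sizeof RESPONSE - 1);
        uv_write(req, client, &out, 1, write_cb);
    } else if (nread < 0) {
        uv_close((uv_handle_t *)client, close_cb);
    }
    free(buf->base);
}

static void connection_cb(uv_stream_t *server, int status) {
    if (status < 0) return;
    uv_tcp_t *client = malloc(sizeof *client);
    uv_tcp_init(loop, client);
    if (uv_accept(server, (uv_stream_t *)client) == 0)
        uv_read_start((uv_stream_t *)client, alloc_cb, read_cb);
    else
        uv_close((uv_handle_t *)client, close_cb);
}

int main(void) {
    loop = uv_default_loop();
    uv_tcp_t server;
    struct sockaddr_in addr;
    uv_tcp_init(loop, &server);
    uv_ip4_addr("0.0.0.0", 8000, &addr);
    uv_tcp_bind(&server, (const struct sockaddr *)&addr, 0);
    uv_listen((uv_stream_t *)&server, 1024, connection_cb);
    return uv_run(loop, UV_RUN_DEFAULT);  /* IOCP or epoll under the hood */
}
```

The same source compiles against libuv on both Windows and Unix; the loop picks IOCP or epoll/kqueue automatically, which is the whole point of the experiment.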
- libuv is using default settings, zero tuning has been done.
- Client requests coming from a single remote client machine (Linux) over the LAN.
- Dazzle + libuv running on Windows 7 Core i7 930, 16GB of RAM.
- Dazzle returns "Hello world" in the HTTP response.
During the tests: CPU ~10%, RAM ~1.5MB.
wrk is a multi-threaded HTTP benchmark tool that supports HTTP pipelining.
wrk -d10 -t8 -c1024 --pipeline 64 http://192.168.0.201:8000/
Result: 291,346.23 requests per second with ~0.033% lost requests (connect errors plus timeouts).
Running 10s test @ http://192.168.0.201:8000/
8 threads and 1024 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    73.66ms   76.62ms  516.61ms   91.31%
    Req/Sec    35.20k    12.70k    62.00k    57.53%
2915204 requests in 10.01s, 283.58MB read
Socket errors: connect 11, read 0, write 0, timeout 965
Requests/sec: 291,346.23
Transfer/sec: 28.34MB
wrk -d10 -t8 -c32 http://192.168.0.201:8000/
Result: 65,186.19 requests per second.
Running 10s test @ http://192.168.0.201:8000/
8 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   829.92us    1.65ms   14.94ms   94.91%
    Req/Sec     8.04k    343.23     9.00k    90.19%
651841 requests in 10.00s, 63.41MB read
Requests/sec: 65,186.19
Transfer/sec: 6.34MB
If we crank the concurrent connections way up (512) and double the threads, Dazzle gains roughly 10,000 more requests per second, but the extra load now shows up in the latency.
wrk -d10 -t16 -c512 http://192.168.0.201:8000/
Result: 75,045.61 requests per second.
Running 10s test @ http://192.168.0.201:8000/
16 threads and 512 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.38ms    8.61ms   61.37ms   86.81%
    Req/Sec     4.16k    457.19     8.00k    79.24%
749530 requests in 9.99s, 72.91MB read
Requests/sec: 75,045.61
Transfer/sec: 7.30MB
ab -n 100000 -c 64 http://192.168.0.201:8000/
Result: 12,085.06 requests per second.
Server Hostname: 192.168.0.201
Server Port: 8000
Document Path: /
Document Length: 13 bytes
Concurrency Level: 64
Time taken for tests: 8.275 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 7800000 bytes
HTML transferred: 1300000 bytes
Requests per second: 12,085.06 [#/sec] (mean)
Time per request: 5.296 [ms] (mean)
Time per request: 0.083 [ms] (mean, across all concurrent requests)
Transfer rate: 920.54 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   0.9      2      16
Processing:     1    3   0.8      3      12
Waiting:        1    2   0.7      2      11
Total:          3    5   1.4      6      26
Percentage of the requests served within a certain time (ms)
50% 6
66% 6
75% 6
80% 6
90% 6
95% 7
98% 7
99% 8
100% 26 (longest request)
ab -n 100000 -c 32 -k http://192.168.0.201:8000/
Result: 33,314.89 requests per second.
Server Hostname: 192.168.0.201
Server Port: 8000
Document Path: /
Document Length: 13 bytes
Concurrency Level: 32
Time taken for tests: 3.002 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 100000
Total transferred: 10200000 bytes
HTML transferred: 1300000 bytes
Requests per second: 33,314.89 [#/sec] (mean)
Time per request: 0.961 [ms] (mean)
Time per request: 0.030 [ms] (mean, across all concurrent requests)
Transfer rate: 3318.48 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       2
Processing:     0    1   0.3      1       5
Waiting:        0    1   0.3      1       5
Total:          0    1   0.3      1       5
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 1
95% 1
98% 2
99% 2
100% 5 (longest request)
Awesome!