reproduction

Fortio

taskset -c 0 fortio load -qps 1000 -c 100 -t 60s http://127.0.0.1 &&\
    taskset -c 0 fortio load -qps 1000 -c 100 -t 60s http://127.0.0.1:10000
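
The "Via HAProxy" result below presumably used the same flags against port 10003 (assumed invocation, not shown above):

taskset -c 0 fortio load -qps 1000 -c 100 -t 60s http://127.0.0.1:10003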

Direct

Fortio 1.3.1-pre running at 1000 queries per second, 1->1 procs, for 1m0s: http://127.0.0.1
14:23:36 I httprunner.go:82> Starting http test for http://127.0.0.1 with 100 threads at 1000.0 qps
...
Ended after 1m0.005159802s : 60000 calls. qps=999.91
Sleep times : count 59900 avg 0.097248199 +/- 0.001035 min 0.094435368 max 0.099896628 sum 5825.1671
Aggregated Function Time : count 60000 avg 0.001968588 +/- 0.0006135 min 9.6326e-05 max 0.004023557 sum 118.115282
# range, mid point, percentile, count
>= 9.6326e-05 <= 0.001 , 0.000548163 , 0.07, 44
> 0.001 <= 0.002 , 0.0015 , 59.71, 35781
> 0.002 <= 0.003 , 0.0025 , 91.98, 19364
> 0.003 <= 0.004 , 0.0035 , 100.00, 4809
> 0.004 <= 0.00402356 , 0.00401178 , 100.00, 2
# target 50% 0.0018372
# target 75% 0.00247382
# target 90% 0.0029386
# target 99% 0.00387565
# target 99.9% 0.00398794
Sockets used: 100 (for perfect keepalive, would be 100)
Code 200 : 60000 (100.0 %)
Response Header Sizes : count 60000 avg 248 +/- 0 min 248 max 248 sum 14880000
Response Body/Total Sizes : count 60000 avg 3697 +/- 0 min 3697 max 3697 sum 221820000
All done 60000 calls (plus 100 warmup) 1.969 ms avg, 999.9 qps

Via Envoy

Fortio 1.3.1-pre running at 1000 queries per second, 1->1 procs, for 1m0s: http://127.0.0.1:10000
14:24:36 I httprunner.go:82> Starting http test for http://127.0.0.1:10000 with 100 threads at 1000.0 qps
Starting at 1000 qps with 100 thread(s) [gomax 1] for 1m0s : 600 calls each (total 60000)
....
Ended after 1m0.010321697s : 60000 calls. qps=999.83
Sleep times : count 59900 avg 0.093223699 +/- 0.002265 min 0.085102269 max 0.099675955 sum 5584.09954
Aggregated Function Time : count 60000 avg 0.0059709658 +/- 0.002233 min 0.000198795 max 0.014066287 sum 358.257951
# range, mid point, percentile, count
>= 0.000198795 <= 0.001 , 0.000599398 , 2.20, 1321
> 0.001 <= 0.002 , 0.0015 , 3.95, 1049
> 0.002 <= 0.003 , 0.0025 , 7.83, 2331
> 0.003 <= 0.004 , 0.0035 , 16.32, 5094
> 0.004 <= 0.005 , 0.0045 , 34.27, 10768
> 0.005 <= 0.006 , 0.0055 , 54.01, 11840
> 0.006 <= 0.007 , 0.0065 , 71.09, 10250
> 0.007 <= 0.008 , 0.0075 , 78.65, 4539
> 0.008 <= 0.009 , 0.0085 , 89.23, 6345
> 0.009 <= 0.01 , 0.0095 , 96.70, 4484
> 0.01 <= 0.011 , 0.0105 , 99.09, 1432
> 0.011 <= 0.012 , 0.0115 , 99.79, 419
> 0.012 <= 0.014 , 0.013 , 100.00, 127
> 0.014 <= 0.0140663 , 0.0140331 , 100.00, 1
# target 50% 0.00579704
# target 75% 0.00751707
# target 90% 0.00910326
# target 99% 0.010963
# target 99.9% 0.0130709
Sockets used: 100 (for perfect keepalive, would be 100)
Code 200 : 60000 (100.0 %)
Response Header Sizes : count 60000 avg 242.00073 +/- 0.02707 min 242 max 243 sum 14520044
Response Body/Total Sizes : count 60000 avg 3691.0007 +/- 0.02707 min 3691 max 3692 sum 221460044
All done 60000 calls (plus 100 warmup) 5.971 ms avg, 999.8 qps

Via HAProxy

Fortio 1.3.1-pre running at 1000 queries per second, 1->1 procs, for 1m0s: http://127.0.0.1:10003
16:28:37 I httprunner.go:82> Starting http test for http://127.0.0.1:10003 with 100 threads at 1000.0 qps
Starting at 1000 qps with 100 thread(s) [gomax 1] for 1m0s : 600 calls each (total 60000)
...
Ended after 1m0.005035708s : 60000 calls. qps=999.92
Sleep times : count 59900 avg 0.09714769 +/- 0.0009857 min 0.094209112 max 0.099830811 sum 5819.14665
Aggregated Function Time : count 60000 avg 0.0020863107 +/- 0.0005482 min 0.000220283 max 0.003909951 sum 125.17864
# range, mid point, percentile, count
>= 0.000220283 <= 0.001 , 0.000610141 , 0.07, 39
> 0.001 <= 0.002 , 0.0015 , 59.54, 35683
> 0.002 <= 0.003 , 0.0025 , 90.41, 18525
> 0.003 <= 0.00390995 , 0.00345498 , 100.00, 5753
# target 50% 0.00183964
# target 75% 0.00250084
# target 90% 0.00298667
# target 99% 0.00381505
# target 99.9% 0.00390046
Sockets used: 100 (for perfect keepalive, would be 100)
Code 200 : 60000 (100.0 %)
Response Header Sizes : count 60000 avg 224 +/- 0 min 224 max 224 sum 13440000
Response Body/Total Sizes : count 60000 avg 3673 +/- 0 min 3673 max 3673 sum 220380000
All done 60000 calls (plus 100 warmup) 2.086 ms avg, 999.9 qps

haproxy config

global
    daemon
    maxconn 10000
    nbproc 1

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    http-reuse aggressive

frontend test
    bind *:10003
    acl test1 path_beg /
    use_backend test_backend if test1

backend test_backend
    server server1 127.0.0.1:80
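
A minimal sketch of launching HAProxy with this configuration, pinned like the other processes (the core number is an assumption; nbproc 1 above keeps HAProxy single-process):

# assumed invocation, not part of the original notes
sudo taskset -c 1 haproxy -f haproxy.cfg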

Summary

Test description

To reproduce, I tested wrk2, Nighthawk, and Fortio in similar command-line configurations, first directly against the test server (nginx serving static content) and then via Envoy and via HAProxy proxying to that same nginx server. A single thread/process is used for both client and server to simplify reasoning, and taskset is used to confine and isolate each process to a single CPU core for improved accuracy (and simplified reasoning). Wrk2 and Fortio seem to have trouble delivering the required sub-ms accuracy, so I feel it is worth giving Nighthawk a spin.
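
For illustration, the pinning could be applied like this (a sketch; the core numbers and pgrep patterns are assumptions, not taken from the actual test setup):

# the client runs under taskset -c 0 (see the commands below);
# pin each server-side process to its own core:
sudo taskset -cp 1 "$(pgrep -o -x nginx)"   # nginx on core 1 (assumed)
sudo taskset -cp 2 "$(pgrep -o -x envoy)"   # envoy on core 2 (assumed)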

Preliminary conclusion

It may be worth repeating the earlier tests performed in this issue while following benchmark best practices [1]. It is surely worth giving Nighthawk a spin, as its different processing model and its way of ensuring precise request-release timings help produce accurate sub-ms measurements. It may also be a good idea to use nginx or Envoy's direct-response capability to get rid of benchmark noise caused by garbage collection. For best interpretation, it would also be good to know precisely what gets measured: are we including or excluding test-connection setup timings in wrk2/fortio?

[1] https://github.com/envoyproxy/nighthawk/#accuracy-and-repeatability-considerations-when-using-the-nighthawk-client
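
As a sketch of the direct-response idea: a route can return a canned payload instead of proxying, taking the origin server (and any noise it contributes) out of the measured path. A hypothetical route entry, in the same format as the envoy config further down (not used in these tests):

{
  "match": { "prefix": "/" },
  "direct_response": {
    "status": 200,
    "body": { "inline_string": "lorem ipsum placeholder body" }
  }
}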

Simple baseline per benchmark client (mean)

nighthawk-direct:    0.000090076s
wrk2-direct:         0.000786250s
fortio-direct:       0.001969000s

nighthawk-via-envoy: 0.000198846s
wrk2-via-envoy:      0.003330000s
fortio-via-envoy:    0.005971000s

nighthawk-via-haproxy: 0.000141260s
wrk2-via-haproxy:      0.001390000s
fortio-via-haproxy:    0.002086000s
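
Taking the difference with each client's own direct baseline gives a rough estimate of the per-proxy overhead as seen by that client (plain arithmetic over the means above):

awk 'BEGIN {
  # overhead = via-proxy mean - direct mean, per client (seconds)
  printf "nighthawk: envoy +%.6fs  haproxy +%.6fs\n", 0.000198846-0.000090076, 0.000141260-0.000090076
  printf "wrk2:      envoy +%.6fs  haproxy +%.6fs\n", 0.003330000-0.000786250, 0.001390000-0.000786250
  printf "fortio:    envoy +%.6fs  haproxy +%.6fs\n", 0.005971000-0.001969000, 0.002086000-0.001969000
}'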

Thoughts

First, it may be a good idea to use wrk2 instead of wrk, as it claims enhanced accuracy. Furthermore, since these tests observe sub-millisecond response times, it is worth noting the following comment from wrk2's readme:

It is important to note that in wrk2's current constant-throughput implementation, measured latencies are [only] accurate to a +/- ~1 msec granularity, due to OS sleep time behavior.
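
That granularity is easy to demonstrate on a given machine; a quick bash sketch (note: the measurement also includes fork/exec of sleep(1), which is part of the point: sub-millisecond timing from userland is noisy):

for i in 1 2 3; do
  start=$(date +%s%N)   # nanoseconds since epoch (GNU date)
  sleep 0.001
  end=$(date +%s%N)
  echo "requested 1000 us, measured $(( (end - start) / 1000 )) us"
done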

It is also worth noting that running Nighthawk through HAProxy results in tens of occurrences of:

[16:03:40.663853][6747][I] [external/envoy/source/common/http/codec_client.cc:122] [C67] premature response

Fortio and wrk2 have been evaluated earlier; see https://docs.google.com/document/d/10xeQuEjUjdmfFq36kGrxI6v78GKHspqn_Nb-pohKfOo/edit?usp=sharing

Test origin is nginx serving lorem-ipsum static content.

➜ curl -v 127.0.0.1
* Rebuilt URL to: 127.0.0.1/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.58.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Server: nginx/1.14.0 (Ubuntu)
< Date: Tue, 16 Apr 2019 13:40:24 GMT
< Content-Type: text/html
< Content-Length: 3449
< Last-Modified: Mon, 01 Apr 2019 11:00:16 GMT
< Connection: keep-alive
< ETag: "5ca1ef40-d79"
< Accept-Ranges: bytes
< 
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nam mauris felis, egestas eget turpis nec, ullamcorper laoreet magna. Donec ac condimentum lacus, nec semper eros. Sed iaculis arcu vitae egestas viverra. Nulla tempor, neque tempus tincidunt fermentum, orci nunc sagittis nisl, sed dapibus nunc ex sit amet justo. Ut porta pellentesque mi quis lobortis. Integer luctus, diam et mattis rhoncus, lacus orci condimentum tortor, vitae venenatis ante odio non massa. Duis ut nulla consectetur, elementum enim eu, maximus lacus. Ut id consequat libero. Mauris eget lorem et lorem iaculis laoreet a nec augue. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Maecenas ac cursus eros, ut eleifend lacus. Nam sit amet mauris nec mi luctus posuere. Phasellus ullamcorper vulputate purus sit amet dapibus. Mauris sit amet magna risus.

Sed venenatis nulla non massa tempus consectetur. In eu suscipit mi, auctor faucibus augue. Phasellus blandit sagittis urna sed semper. Maecenas sem purus, laoreet gravida pretium non, malesuada vitae felis. Nam laoreet nisi non ipsum tincidunt facilisis. Donec ultrices a elit vel aliquam. Duis et diam eu urna ultrices dictum. Etiam non nulla eu velit feugiat ultrices ac vitae orci. In id posuere magna, vitae vulputate lectus. Vestibulum consectetur luctus neque ut cursus. Aliquam vel dapibus sem, vel rhoncus elit. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. In consequat ipsum arcu, eget ultricies tellus finibus id.

Nam scelerisque viverra fermentum. Vivamus vitae tincidunt mauris. Cras id pretium lectus. Nunc ut leo vitae ligula dictum pretium. Proin et laoreet massa, sed pharetra ex. Nam nec pellentesque magna. Quisque lectus metus, ultrices eget nunc ac, blandit malesuada nulla. Nullam justo elit, eleifend eget elementum nec, convallis eu massa. Curabitur rhoncus pretium lorem et commodo. Morbi tincidunt lectus ut sodales pellentesque. Ut varius purus eget nunc ultricies congue.

Aliquam posuere blandit mollis. Integer quis sollicitudin mi. Integer ac lobortis felis. Maecenas a molestie libero, vitae rhoncus lacus. Phasellus est nunc, faucibus facilisis velit in, lobortis faucibus neque. Sed varius faucibus tristique. Sed maximus libero justo, sit amet laoreet orci feugiat eget. Pellentesque aliquet enim ut facilisis vestibulum. In lacinia malesuada quam, vitae aliquet arcu pretium eu. Aliquam cursus facilisis feugiat. Fusce eu orci ornare, tempus purus ac, commodo leo. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Sed tempus elit eget pretium volutpat. Sed tincidunt dapibus tortor at blandit.

Suspendisse vitae cursus elit. Sed pretium leo diam, ac semper nunc faucibus id. Sed pharetra, magna facilisis iaculis efficitur, nisl tortor vestibulum metus, id ultricies turpis arcu et odio. Suspendisse fringilla semper tincidunt. Cras at justo congue orci sodales efficitur. Donec quis sem ut dui efficitur faucibus. Pellentesque dapibus lacinia elit, sit amet volutpat velit gravida lobortis. Sed at purus eros. Pellentesque sodales, nulla at tincidunt placerat, massa metus facilisis nibh, non posuere ipsum metus a nisl. Duis nibh urna, laoreet quis interdum sed, tempor vel risus. Fusce tincidunt felis quis tincidunt luctus. Mauris vehicula ipsum magna, sed placerat ligula feugiat in. Curabitur pretium arcu magna, nec iaculis massa fermentum a. 
* Connection #0 to host 127.0.0.1 left intact

Envoy config

{"static_resources": {
  "listeners": [
    {
      "address": { "socket_address": { "address": "0.0.0.0", "port_value": 10000 } },
      "filter_chains": {
        "filters": [
          {
            "name": "envoy.http_connection_manager",
            "config": {
              "stat_prefix": "ingress_http",
              "http_filters": [{
                "name": "envoy.router",
                "config": {}
              }],
              "route_config": {
                "virtual_hosts": [
                  {
                    "name": "fortio",
                    "domains": "*",
                    "routes": [
                      {
                        "route": {
                          "cluster": "fortio"
                        },
                        "match": {
                          "prefix": "/"
    }}]}]}}}]}}],
  "clusters":[
    {
      "name": "fortio",
      "type": "STRICT_DNS",
      "connect_timeout": "1s",
      "hosts": [
        { "socket_address": { "address": "127.0.0.1", "port_value": 80 } }
      ]
    }
  ]
}}
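
A matching launch could look like this (assumed invocation, not part of these notes; the core number is a guess, and --concurrency 1 keeps Envoy on a single worker thread):

taskset -c 1 envoy -c envoy.json --concurrency 1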

Nighthawk

taskset -c 0 bazel-bin/nighthawk_client --prefetch-connections --duration 60 --connections 100 -v info --rps 1000  http://127.0.0.1 && \
   taskset -c 0 bazel-bin/nighthawk_client --duration 60 --prefetch-connections --connections 100 -v info --rps 1000  http://127.0.0.1:10000
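
The "Via HAProxy" numbers below presumably came from the same flags pointed at port 10003 (assumed invocation, not shown above):

taskset -c 0 bazel-bin/nighthawk_client --prefetch-connections --duration 60 --connections 100 -v info --rps 1000 http://127.0.0.1:10003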

Direct

[14:02:56.731050][19704][I] [source/client/client.cc:74] Starting 1 threads / event loops. Test duration: 60 seconds.
[14:02:56.731092][19704][I] [source/client/client.cc:76] Global targets: 100 connections and 1000 calls per second.
Nighthawk - A layer 7 protocol benchmarking tool.

benchmark_http_client.queue_to_connect: 59999 samples, mean: 0.000002285s, pstdev: 0.000000109s
Percentile  Count       Latency        
0           2           0.000002021s   
0.5         30273       0.000002278s   
0.75        45153       0.000002334s   
0.8         48115       0.000002348s   
0.9         54080       0.000002386s   
0.95        57024       0.000002420s   
0.990625    59438       0.000002621s   
0.999023    59942       0.000003342s   
1           59999       0.000008526s   

benchmark_http_client.request_to_response: 59999 samples, mean: 0.000085799s, pstdev: 0.000007036s
Percentile  Count       Latency        
0           1           0.000081947s   
0.5         30003       0.000084915s   
0.75        45001       0.000087847s   
0.8         48021       0.000088291s   
0.9         54005       0.000088839s   
0.95        57004       0.000089239s   
0.990625    59439       0.000090619s   
0.999023    59941       0.000094751s   
1           59999       0.001277631s   

sequencer.callback: 59999 samples, mean: 0.000090076s, pstdev: 0.000007098s
Percentile  Count       Latency        
0           1           0.000086167s   
0.5         30023       0.000089151s   
0.75        45009       0.000092271s   
0.8         48000       0.000092715s   
0.9         54032       0.000093279s   
0.95        57004       0.000093691s   
0.990625    59437       0.000095235s   
0.999023    59941       0.000099451s   
1           59999       0.001282303s   

Counter                                 Value       Per second
client.benchmark.http_2xx               60000       1000.00
client.upstream_cx_http1_total          100         1.67
client.upstream_cx_overflow             1           0.02
client.upstream_cx_rx_bytes_total       221820000   3697000.00
client.upstream_cx_total                100         1.67
client.upstream_cx_tx_bytes_total       3420000     57000.00
client.upstream_rq_pending_total        1           0.02
client.upstream_rq_total                60000       1000.00

Via Envoy

[14:03:58.805360][19704][I] [source/client/client.cc:278] Done.
[14:03:58.821851][19719][I] [source/client/client.cc:74] Starting 1 threads / event loops. Test duration: 60 seconds.
[14:03:58.821893][19719][I] [source/client/client.cc:76] Global targets: 100 connections and 1000 calls per second.
Nighthawk - A layer 7 protocol benchmarking tool.

benchmark_http_client.queue_to_connect: 59998 samples, mean: 0.000002198s, pstdev: 0.000000107s
Percentile  Count       Latency        
0           1           0.000001925s   
0.5         30091       0.000002188s   
0.75        45112       0.000002246s   
0.8         48036       0.000002261s   
0.9         54111       0.000002302s   
0.95        57004       0.000002340s   
0.990625    59438       0.000002450s   
0.999023    59940       0.000003253s   
1           59998       0.000009693s   

benchmark_http_client.request_to_response: 59998 samples, mean: 0.000194668s, pstdev: 0.000008875s
Percentile  Count       Latency        
0           1           0.000189543s   
0.5         30102       0.000193167s   
0.75        45011       0.000197119s   
0.8         48027       0.000197567s   
0.9         54014       0.000198399s   
0.95        57011       0.000199143s   
0.990625    59437       0.000204623s   
0.999023    59940       0.000243175s   
1           59998       0.001125375s   

sequencer.callback: 59998 samples, mean: 0.000198846s, pstdev: 0.000009030s
Percentile  Count       Latency        
0           1           0.000193615s   
0.5         30041       0.000197303s   
0.75        45005       0.000201407s   
0.8         48007       0.000201855s   
0.9         54001       0.000202695s   
0.95        57017       0.000203463s   
0.990625    59436       0.000208975s   
0.999023    59940       0.000247343s   
1           59998       0.001153471s   

Counter                                 Value       Per second
client.benchmark.http_2xx               59999       999.98
client.upstream_cx_http1_total          100         1.67
client.upstream_cx_overflow             1           0.02
client.upstream_cx_rx_bytes_total       221456309   3690938.48
client.upstream_cx_total                100         1.67
client.upstream_cx_tx_bytes_total       3599940     59999.00
client.upstream_rq_pending_total        1           0.02
client.upstream_rq_total                59999       999.98

Via HAProxy

[16:02:50.633097][6746][I] [source/client/client.cc:74] Starting 1 threads / event loops. Test duration: 60 seconds.
[16:02:50.633133][6746][I] [source/client/client.cc:76] Global targets: 100 connections and 1000 calls per second.
[16:03:40.639401][6747][I] [external/envoy/source/common/http/codec_client.cc:122] [C1] premature response
[16:03:40.639516][6747][I] [external/envoy/source/common/http/codec_client.cc:122] [C18] premature response
[16:03:40.639584][6747][I] [external/envoy/source/common/http/codec_client.cc:122] [C17] premature response
... (similar "premature response" lines for the remaining connections, C2 through C95, elided)
[16:03:40.665928][6747][I] [external/envoy/source/common/http/codec_client.cc:122] [C96] premature response
Nighthawk - A layer 7 protocol benchmarking tool.

benchmark_http_client.queue_to_connect: 59997 samples, mean: 0.000002676s, pstdev: 0.000000154s
Percentile  Count       Latency        
0           1           0.000001633s   
0.5         30023       0.000002679s   
0.75        45045       0.000002738s   
0.8         48157       0.000002757s   
0.9         54057       0.000002816s   
0.95        57001       0.000002877s   
0.990625    59437       0.000003024s   
0.999023    59939       0.000003588s   
1           59997       0.000007880s   

benchmark_http_client.request_to_response: 59997 samples, mean: 0.000136163s, pstdev: 0.000011002s
Percentile  Count       Latency        
0           1           0.000121635s   
0.5         30058       0.000135655s   
0.75        45017       0.000137727s   
0.8         48020       0.000138279s   
0.9         54029       0.000139223s   
0.95        57011       0.000140111s   
0.990625    59435       0.000144743s   
0.999023    59939       0.000163143s   
1           59997       0.001483519s   

sequencer.callback: 59997 samples, mean: 0.000141260s, pstdev: 0.000011084s
Percentile  Count       Latency        
0           1           0.000125579s   
0.5         30023       0.000140727s   
0.75        45009       0.000142959s   
0.8         48015       0.000143527s   
0.9         53998       0.000144543s   
0.95        57000       0.000145487s   
0.990625    59435       0.000149999s   
0.999023    59939       0.000169023s   
1           59997       0.001488703s   

Counter                                 Value       Per second
client.benchmark.http_2xx               59998       999.97
client.upstream_cx_http1_total          100         1.67
client.upstream_cx_overflow             1           0.02
client.upstream_cx_rx_bytes_total       220393006   3673216.77
client.upstream_cx_total                100         1.67
client.upstream_cx_tx_bytes_total       3599880     59998.00
client.upstream_rq_pending_total        1           0.02
client.upstream_rq_total                59998       999.97

[16:03:52.707620][6746][I] [source/client/client.cc:278] Done.

Wrk2

taskset -c 0 wrk2 -t 1 -c 100 -d 60s --rate 1000 http://127.0.0.1/ && \
    taskset -c 0 wrk2 -t 1 -c 100 -d 60s --rate 1000 http://127.0.0.1:10000/
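
As with the other clients, the HAProxy run presumably used the same flags against port 10003 (assumed invocation):

taskset -c 0 wrk2 -t 1 -c 100 -d 60s --rate 1000 http://127.0.0.1:10003/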

Direct

Running 1m test @ http://127.0.0.1/
  1 threads and 100 connections
  Thread calibration: mean lat.: 0.816ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   786.25us  409.48us   5.64ms   63.60%
    Req/Sec     1.08k   131.83     1.80k    73.44%
  59801 requests in 1.00m, 210.84MB read
Requests/sec:    996.68
Transfer/sec:      3.51MB

Via Envoy

Running 1m test @ http://127.0.0.1:10000/
  1 threads and 100 connections
  Thread calibration: mean lat.: 3.519ms, rate sampling interval: 12ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.33ms    1.60ms   8.84ms   66.44%
    Req/Sec     1.04k     1.69k    5.55k    78.27%
  59001 requests in 1.00m, 207.68MB read
Requests/sec:    983.31
Transfer/sec:      3.46MB

Via HAProxy

Running 1m test @ http://127.0.0.1:10003/
  1 threads and 100 connections
  Thread calibration: mean lat.: 1.771ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.39ms  559.93us   3.99ms   66.78%
    Req/Sec     1.05k     2.03k    6.78k    80.68%
  59002 requests in 1.00m, 206.67MB read
Requests/sec:    983.34
Transfer/sec:      3.44MB