@membphis
Last active March 13, 2024 10:57
Apache APISIX benchmark script: https://github.com/iresty/apisix/blob/master/benchmark/run.sh

Kong benchmark script:
# create the upstream service
curl -i -X POST \
  --url http://localhost:8001/services/ \
  --data 'name=example-service' \
  --data 'host=127.0.0.1'

# add a route for the service
curl -i -X POST \
  --url http://localhost:8001/services/example-service/routes \
  --data 'paths[]=/hello'

# enable the rate-limiting plugin on the route (the id is the one returned by the route-creation call above)
curl -i -X POST http://localhost:8001/routes/efd9d857-39bf-4154-85ec-edb7c1f53856/plugins \
  --data "name=rate-limiting" \
  --data "config.hour=999999999999" \
  --data "config.policy=local"

# enable the prometheus plugin on the route
curl -i -X POST http://localhost:8001/routes/efd9d857-39bf-4154-85ec-edb7c1f53856/plugins \
  --data "name=prometheus"

# sanity check, then benchmark with wrk (5 s, 16 connections)
curl -i http://127.0.0.1:8000/hello/hello
wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
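
The two plugin calls above hardcode the route id returned by the route-creation call; on a fresh Kong instance that id will differ. A minimal sketch of capturing it automatically (this assumes jq is installed; the ROUTE_ID variable name is just illustrative):

# create the route and keep the id Kong returns in its JSON response
ROUTE_ID=$(curl -s -X POST \
  --url http://localhost:8001/services/example-service/routes \
  --data 'paths[]=/hello' | jq -r '.id')

# reuse the captured id for the plugin calls
curl -i -X POST "http://localhost:8001/routes/${ROUTE_ID}/plugins" \
  --data "name=rate-limiting" \
  --data "config.hour=999999999999" \
  --data "config.policy=local"
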
@cboitel commented Jun 24, 2020

Tomorrow I will post similar results for APISIX 1.3.0 installed from RPM as well (I will adapt my scripts/...).

Hope it helps.

@membphis (Author)

Here are my results:

platform: aliyun cloud, 8 vCPU 32 GiB ecs.hfg5.2xlarge
apisix version: apache/apisix@492fa71

# 1 worker

apisix: 1 worker + 1 upstream + no plugin
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":6,"key":"\/apisix\/routes\/1","modifiedIndex":6},"prevNode":{"value":"{\"priority\":0,\"plugins\":{\"limit-count\":{\"time_window\":60,\"count\":2000000000000,\"rejected_code\":503,\"key\":\"remote_addr\",\"policy\":\"local\"},\"prometheus\":{}},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":5,"key":"\/apisix\/routes\/1","modifiedIndex":5},"action":"set"}
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   692.12us  117.12us   4.72ms   89.93%
    Req/Sec    11.60k   350.91    12.10k    85.29%
  117717 requests in 5.10s, 470.15MB read
Requests/sec:  23082.99
Transfer/sec:     92.19MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   681.95us   96.32us   2.60ms   89.34%
    Req/Sec    11.76k   138.81    12.07k    73.53%
  119368 requests in 5.10s, 476.75MB read
Requests/sec:  23407.17
Transfer/sec:     93.49MB
+ sleep 1
+ echo -e '\n\napisix: 1 worker + 1 upstream + 2 plugins (limit-count + prometheus)'


apisix: 1 worker + 1 upstream + 2 plugins (limit-count + prometheus)
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{"limit-count":{"time_window":60,"count":2000000000000,"rejected_code":503,"key":"remote_addr","policy":"local"},"prometheus":{}},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":7,"key":"\/apisix\/routes\/1","modifiedIndex":7},"prevNode":{"value":"{\"priority\":0,\"plugins\":{},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":6,"key":"\/apisix\/routes\/1","modifiedIndex":6},"action":"set"}
+ sleep 3
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.86ms  162.46us   7.24ms   88.76%
    Req/Sec     9.33k     1.17k   19.40k    93.07%
  93769 requests in 5.10s, 380.95MB read
Requests/sec:  18389.21
Transfer/sec:     74.71MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   845.31us  144.46us   4.37ms   91.35%
    Req/Sec     9.50k   281.09     9.81k    90.20%
  96473 requests in 5.10s, 391.94MB read
Requests/sec:  18916.99
Transfer/sec:     76.85MB
+ sleep 1
+ make stop
/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf -s stop
+ echo -e '\n\nfake empty apisix server: 1 worker'


fake empty apisix server: 1 worker
+ sleep 1
+ sed -i 's/worker_processes [0-9]*/worker_processes 1/g' benchmark/fake-apisix/conf/nginx.conf
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   498.02us   57.61us   2.61ms   89.51%
    Req/Sec    16.09k   619.64    16.87k    85.29%
  163367 requests in 5.10s, 650.14MB read
Requests/sec:  32033.32
Transfer/sec:    127.48MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   492.73us   57.29us   2.35ms   88.62%
    Req/Sec    16.26k   385.65    17.01k    87.25%
  165027 requests in 5.10s, 656.75MB read
Requests/sec:  32360.46
Transfer/sec:    128.78MB
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix -s stop
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/server -s stop
# 4 workers

apisix: 4 worker + 1 upstream + no plugin
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":8,"key":"\/apisix\/routes\/1","modifiedIndex":8},"prevNode":{"value":"{\"priority\":0,\"plugins\":{\"limit-count\":{\"time_window\":60,\"count\":2000000000000,\"rejected_code\":503,\"key\":\"remote_addr\",\"policy\":\"local\"},\"prometheus\":{}},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":7,"key":"\/apisix\/routes\/1","modifiedIndex":7},"action":"set"}
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   286.87us  174.65us   3.00ms   72.00%
    Req/Sec    28.68k     2.94k   35.21k    67.65%
  290908 requests in 5.10s, 1.13GB read
Requests/sec:  57042.36
Transfer/sec:    227.82MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   284.10us  177.81us   4.64ms   76.41%
    Req/Sec    28.94k     2.67k   33.68k    67.65%
  293746 requests in 5.10s, 1.15GB read
Requests/sec:  57598.31
Transfer/sec:    230.04MB
+ sleep 1
+ echo -e '\n\napisix: 4 worker + 1 upstream + 2 plugins (limit-count + prometheus)'


apisix: 4 worker + 1 upstream + 2 plugins (limit-count + prometheus)
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{"limit-count":{"time_window":60,"count":2000000000000,"rejected_code":503,"key":"remote_addr","policy":"local"},"prometheus":{}},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":9,"key":"\/apisix\/routes\/1","modifiedIndex":9},"prevNode":{"value":"{\"priority\":0,\"plugins\":{},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":8,"key":"\/apisix\/routes\/1","modifiedIndex":8},"action":"set"}
+ sleep 3
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   342.74us  220.15us   5.38ms   75.84%
    Req/Sec    24.15k     2.44k   28.39k    72.55%
  245033 requests in 5.10s, 0.97GB read
Requests/sec:  48046.94
Transfer/sec:    195.20MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   352.29us  223.55us   3.37ms   70.11%
    Req/Sec    23.51k     2.47k   28.16k    69.61%
  238538 requests in 5.10s, 0.95GB read
Requests/sec:  46777.18
Transfer/sec:    190.04MB
+ sleep 1
+ make stop
/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf -s stop
+ echo -e '\n\nfake empty apisix server: 4 worker'


fake empty apisix server: 4 worker
+ sleep 1
+ sed -i 's/worker_processes [0-9]*/worker_processes 4/g' benchmark/fake-apisix/conf/nginx.conf
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   154.62us  104.55us   4.60ms   98.92%
    Req/Sec    51.77k     1.05k   53.74k    71.57%
  525323 requests in 5.10s, 2.04GB read
Requests/sec: 103004.49
Transfer/sec:    409.92MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   152.16us   63.87us   3.56ms   93.89%
    Req/Sec    51.84k   795.29    53.94k    69.61%
  525822 requests in 5.10s, 2.04GB read
Requests/sec: 103113.01
Transfer/sec:    410.35MB
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix -s stop
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/server -s stop
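
Rough arithmetic on the runs above (averaging the two wrk passes per case):

  1 worker:  no plugins ~23.2k req/s vs 2 plugins ~18.7k req/s -> the limit-count + prometheus pair costs about (23.2 - 18.7) / 23.2 ≈ 20%
  4 workers: no plugins ~57.3k req/s vs 2 plugins ~47.4k req/s -> about (57.3 - 47.4) / 57.3 ≈ 17%
  fake empty server: ~32.2k req/s (1 worker) and ~103k req/s (4 workers), so full APISIX routing keeps roughly 72% and 56% of the bare OpenResty proxy's throughput respectively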

@membphis (Author)

@cboitel I think we can copy these over to an APISIX issue: https://github.com/apache/incubator-apisix/issues

Then I will submit a new PR about the optimization.

@hellolittlewei

y

@Dhruv-Garg79

@cboitel have you used either of these since your benchmarks? What is your recommendation for someone looking to adopt one of them?
I am interested in a simple nginx use case plus auth, with low latency and throughput up to 3000 qps in the long run.
