@membphis
Last active August 5, 2024 11:20
Apache APISIX benchmark script: https://github.com/iresty/apisix/blob/master/benchmark/run.sh
Kong benchmark script:
# create a service pointing at the local upstream
curl -i -X POST \
--url http://localhost:8001/services/ \
--data 'name=example-service' \
--data 'host=127.0.0.1'

# add a route matching /hello to the service
curl -i -X POST \
--url http://localhost:8001/services/example-service/routes \
--data 'paths[]=/hello'

# enable the rate-limiting plugin on the route (use the route id returned by the previous call)
curl -i -X POST http://localhost:8001/routes/efd9d857-39bf-4154-85ec-edb7c1f53856/plugins \
--data "name=rate-limiting" \
--data "config.hour=999999999999" \
--data "config.policy=local"

# enable the prometheus plugin on the same route
curl -i -X POST http://localhost:8001/routes/efd9d857-39bf-4154-85ec-edb7c1f53856/plugins \
--data "name=prometheus"

# sanity check, then benchmark
curl -i http://127.0.0.1:8000/hello/hello
wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
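
The route id above (efd9d857-…) is specific to one setup. A minimal sketch, assuming jq is installed and the admin API listens on port 8001, could look the id up instead of hardcoding it (ROUTE_ID is just an illustrative variable name):

# look up the id of the first route attached to example-service
ROUTE_ID=$(curl -s http://localhost:8001/services/example-service/routes | jq -r '.data[0].id')
# then reuse it for the plugin calls, e.g.:
curl -i -X POST http://localhost:8001/routes/${ROUTE_ID}/plugins \
--data "name=rate-limiting" \
--data "config.hour=999999999999" \
--data "config.policy=local"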
@cboitel

cboitel commented Jun 23, 2020

Kong with 2 plugins

1 worker, 2 plugins

Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.80ms    1.51ms  21.36ms   93.28%
    Req/Sec     1.03k   158.26     2.03k    91.09%
  10373 requests in 5.10s, 43.80MB read
Requests/sec:   2033.86
Transfer/sec:      8.59MB
[centos@cboitel1 ~]$ wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello; wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.10ms    1.74ms  20.82ms   94.07%
    Req/Sec     0.99k   126.75     1.11k    84.00%
  9891 requests in 5.00s, 41.76MB read
Requests/sec:   1977.53
Transfer/sec:      8.35MB
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.76ms    1.81ms  21.04ms   94.48%
    Req/Sec     1.04k   160.35     1.15k    87.00%
  10352 requests in 5.00s, 43.70MB read
Requests/sec:   2068.92
Transfer/sec:      8.73MB

2 workers, 2 plugins

Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.33ms    1.83ms  18.91ms   74.33%
    Req/Sec     1.88k   261.92     2.58k    84.31%
  19104 requests in 5.10s, 80.66MB read
Requests/sec:   3744.91
Transfer/sec:     15.81MB
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.21ms    1.57ms  16.08ms   76.31%
    Req/Sec     1.92k   320.04     2.91k    75.25%
  19332 requests in 5.10s, 81.62MB read
Requests/sec:   3790.61
Transfer/sec:     16.00MB

Again, a huge performance drop in the 2-plugin scenarios: APISIX remains way ahead, but now shows an 8x factor instead of 10x.

@membphis
Author

Tests performed on a 4-core machine with 8 GB RAM on CentOS 7

What is the APISIX version?

@membphis
Author

and I think you can use a table to collect all of the results.

@cboitel

cboitel commented Jun 24, 2020

Tests performed on a 4-core machine with 8 GB RAM on CentOS 7

What is the APISIX version?

Latest git clone from yesterday. Will perform tests with the latest official release.

@cboitel

cboitel commented Jun 24, 2020

and I think you can use a table to collect all of the results.

Will do, as I was able to upgrade my test machine to 8 CPU / 16 GB RAM.

@membphis
Author

@cboitel we'd better use the latest official releases (both APISIX and Kong).

We should run them in a cloud environment, e.g. AWS, Google Cloud or Alibaba Cloud.
Then other users can easily confirm whether our benchmark results are correct, which is very important.

@cboitel

cboitel commented Jun 24, 2020

Improved tests for Kong

Global setup

  • Machine: VM with 8 CPUs, 16 GB RAM, 100 GB disk
  • OS:
    • CentOS 7 with latest patches installed (sudo yum update)
    • killall command installed (sudo yum install -y psmisc)
    • login with sudo privileges (required since software is installed via RPM)

Kong setup

  • installed from Kong's RPM: see https://docs.konghq.com/install/centos/
  • db-less mode enabled and nginx_worker_processes defined (commented out by default; the default value is auto)
    • simpler to set up
    • no performance issue expected, since an in-memory cache is used
      # save original kong configuration file
      sudo cp -p /etc/kong/kong.conf /etc/kong/kong.conf.original
      
      # enable db-less mode
      cat << __EOF__ | sudo tee -a /etc/kong/kong.conf 
      # enable db-less mode
      database = off
      declarative_config = /etc/kong/kong.yml
      # default worker setting
      nginx_worker_processes = auto
      __EOF__
      
  • install script used to run the Kong tests:
    • runs similarly to run.sh from apisix, with a few changes
    • tunes the reference server (provided in incubator-apisix/benchmark/server) to use half of the available CPUs
    • adds performance tests of the reference server to measure its raw performance
    • kong is restarted between runs (due to db-less mode, and to start from a clean setup)
    • adds uptime after each wrk to check the load average over the last minute
    • wrk duration set to 60s (so we can check that the load average never exceeded the number of available CPUs over the last minute)
tee kong-run.sh << __EOS__ && chmod +x kong-run.sh
#! /bin/bash -x 

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

let maxprocs=$(nproc --all)
let maxworkers=\$maxprocs/2
let worker_cnt=\$maxworkers
[ -n "\$1" ] && worker_cnt=\$1
echo -e "> worker_cnt=\$worker_cnt"
[ \$worker_cnt -gt \$maxworkers ] && echo -e "WARNING: worker count should not exceed \$maxworkers (half of CPUs)"

echo -e "> applying configuration to /etc/kong/kong.conf"
[ ! -e /etc/kong/kong.conf ] && echo -e "FATAL: /etc/kong/kong.conf file missing !" && exit 1
if [[ "\$(uname)" == "Darwin" ]]; then
    sudo sed  -i "" "s/^nginx_worker_processes\s*=.*/nginx_worker_processes=\$worker_cnt/g" /etc/kong/kong.conf || exit 1
else
    sudo sed  -i "s/^nginx_worker_processes\s*=.*/nginx_worker_processes=\$worker_cnt/g" /etc/kong/kong.conf || exit 1
fi

function cleanUp () {
    echo -e "> cleanup any wrk process: "
    pgrep -a wrk && sudo killall wrk
    echo -e "> cleanup any openresty process: " 
    pgrep -a openresty && sudo killall openresty
    echo -e "> cleanup any nginx process: " 
    pgrep -a nginx && sudo killall nginx
}
trap 'cleanUp' INT

function doWrk () {
        wrk -d 60 -c 16 \$1 && uptime
}

echo -e "\n\n#############################################\n> cleanup things just in case"
cleanUp

echo -e "> ensure a clone of incubator-apisix is available"
[ ! -d incubator-apisix ] && (git clone https://github.com/apache/incubator-apisix || exit 1)

echo -e "\n\n#############################################\n> starting benchmark server" 
echo -e "> adjusting server workers to \$worker_cnt"
if [[ "\$(uname)" == "Darwin" ]]; then
    sudo sed  -i "" "s/^worker_processes\s.*/worker_processes \$worker_cnt;/g" \$PWD/incubator-apisix/benchmark/server/conf/nginx.conf || exit 1
else
    sudo sed  -i "s/^worker_processes .*/worker_processes \$worker_cnt;/g" \$PWD/incubator-apisix/benchmark/server/conf/nginx.conf || exit 1
fi
mkdir -p incubator-apisix/benchmark/server/logs && sudo /usr/local/openresty/bin/openresty -p \$PWD/incubator-apisix/benchmark/server || exit 1
sleep 3
echo -e "> openresty processes:" && pgrep -a openresty &&
echo -e "> curling server:" && curl -i http://127.0.0.1:80/hello && 
echo -e "> running performance tests:" && doWrk http://127.0.0.1:80/hello

echo -e "\n\n#############################################\nkong: \$worker_cnt worker + 1 upstream + no plugin"

# setup kong configuration
cat << __EOF__ | sudo tee /etc/kong/kong.yml || exit 1
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
__EOF__
# restart kong to ensure clean setup
sudo systemctl stop kong && sleep 2 && sudo systemctl start kong && sleep 2 && 
echo -e "> nginx processes:" && pgrep -a nginx &&
echo -e "> curling service:" && curl -i http://127.0.0.1:8000/hello/hello && 
echo -e "> running performance tests:" && doWrk http://127.0.0.1:8000/hello/hello 

echo -e "\n\n#############################################\nkong: \$worker_cnt worker + 1 upstream + 2 plugins (limit-count + prometheus)"

# setup kong configuration
cat << __EOF__ | sudo tee /etc/kong/kong.yml || exit 1
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
  plugins:
  - name: rate-limiting
    config:
      hour: 999999999999
      policy: local
  - name: prometheus
__EOF__

# restart kong to ensure clean setup
sudo systemctl stop kong && sleep 2 && sudo systemctl start kong && sleep 2 && 
echo -e "> nginx processes:" && pgrep -a nginx &&
echo -e "> curling service:" && curl -i http://127.0.0.1:8000/hello/hello && 
echo -e "> running performance tests:" && doWrk http://127.0.0.1:8000/hello/hello 

echo -e "\n\n#############################################\n> stopping benchmark server & kong"
sudo /usr/local/openresty/bin/openresty -p \$PWD/incubator-apisix/benchmark/server -s stop || exit 1
sudo systemctl stop kong || exit 1
echo -e "> left openresty processes:" && pgrep -a openresty
echo -e "> left wrk processes:" && pgrep -a wrk
__EOS__
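
Not part of the original runs, but the declarative config the script writes could be sanity-checked before each restart; assuming the kong CLI is on the PATH and supports the config parse subcommand (available in recent db-less capable releases), a sketch would be:

# validate the generated declarative config before restarting kong
sudo kong config parse /etc/kong/kong.yml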

Notes:

  • in test runs, I had modified my script to:
    • set the number of workers for the benchmark server to a fixed value (half the number of CPUs) instead of the same number of workers as kong
    • it didn't change the outcome

Running kong benchmarks

Running with a worker count starting at 1 and doubling, never exceeding half the number of CPUs on the machine:

let maxworkers=$(nproc --all)/2; let i=1; while [ $i -le $maxworkers ]; do ./kong-run.sh $i > kong-run-${i}-worker.out; let i=$i*2; done
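
Note that kong-run.sh runs with bash -x, so its trace output goes to stderr and is not captured by the redirection above; a variant of the same loop that keeps everything in the result files would be:

let maxworkers=$(nproc --all)/2; let i=1; while [ $i -le $maxworkers ]; do ./kong-run.sh $i > kong-run-${i}-worker.out 2>&1; let i=$i*2; done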

See attached result files
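
The summary below was assembled by hand from those files; a quick way to pull the throughput lines out of them (assuming the file names used above) is:

# print the Requests/sec line of every run, prefixed with its file name
grep -H '^Requests/sec' kong-run-*-worker.out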

Summary:

| Scenario | Workers | Req/s | Avg latency | Stdev | Max |
|---|---|---|---|---|---|
| benchmark server | 1 | 44192.10 | 363.55us | 133.65us | 8.16ms |
| benchmark server | 2 | 86072.74 | 181.45us | 30.60us | 1.80ms |
| benchmark server | 4 | 138865.40 | 85.16us | 57.52us | 7.46ms |
| kong no plugin | 1 | 14104.31 | 1.16ms | 446.01us | 16.20ms |
| kong no plugin | 2 | 25611.06 | 664.92us | 476.94us | 10.43ms |
| kong no plugin | 4 | 44990.11 | 395.17us | 346.85us | 10.87ms |
| kong 2 plugins | 1 | 1947.98 | 8.23ms | 1.85ms | 27.94ms |
| kong 2 plugins | 2 | 3546.25 | 4.63ms | 2.74ms | 35.79ms |
| kong 2 plugins | 4 | 6153.83 | 2.71ms | 1.80ms | 23.27ms |
  1. benchmark server (a simple openresty app):
    • shows excellent performance and scalability
    • as such, it is not a bottleneck in our tests
  2. kong shows similar scalability but reduces throughput significantly and adds latency
    • throughput is about 3 times lower with no plugins enabled (e.g. 138865 vs 44990 req/s with 4 workers)
    • throughput is more than 20 times lower with the rate-limiting + prometheus plugins enabled (138865 vs 6154 req/s with 4 workers)
    • it also adds latency and increases response-time variation by similar factors

Detailed output results

kong-run-1-worker.out

> worker_cnt=1
> applying configuration to /etc/kong/kong.conf


#############################################
> cleanup things just in case
> cleanup any wrk process: 
> cleanup any openresty process: 
> cleanup any nginx process: 
> ensure a clone of incubator-apisix is available


#############################################
> starting benchmark server
> adjusting server workers to 4
> openresty processes:
2162 nginx: master process /usr/local/openresty/bin/openresty -p /home/centos/incubator-apisix/benchmark/server
2163 nginx: worker process                                                               
2164 nginx: worker process                                                               
2165 nginx: worker process                                                               
2166 nginx: worker process                                                               
> curling server:
HTTP/1.1 200 OK
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:11:22 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive

123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
6789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890> running performance tests:
Running 1m test @ http://127.0.0.1:80/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    84.51us   35.08us   2.96ms   69.75%
    Req/Sec    69.76k     4.49k   84.13k    65.72%
  8345719 requests in 1.00m, 32.44GB read
Requests/sec: 138862.33
Transfer/sec:    552.75MB
 15:12:22 up 37 min,  1 user,  load average: 2.97, 0.87, 0.51


#############################################
kong: 1 worker + 1 upstream + no plugin
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
> nginx processes:
2209 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2219 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:12:26 GMT
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/2.0.4

123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
6789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.15ms  405.87us  10.98ms   93.89%
    Req/Sec     7.15k     1.19k   14.10k    69.94%
  854936 requests in 1.00m, 3.39GB read
Requests/sec:  14225.26
Transfer/sec:     57.78MB
 15:13:26 up 39 min,  1 user,  load average: 1.96, 0.98, 0.57


#############################################
kong: 1 worker + 1 upstream + 2 plugins (limit-count + prometheus)
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
  plugins:
  - name: rate-limiting
    config:
      hour: 999999999999
      policy: local
  - name: prometheus
> nginx processes:
2267 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2276 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:13:31 GMT
X-RateLimit-Limit-Hour: 999999999999
RateLimit-Limit: 999999999999
X-RateLimit-Remaining-Hour: 999999999998
RateLimit-Remaining: 999999999998
RateLimit-Reset: 2789
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 3
Via: kong/2.0.4

123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
6789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.05ms    1.81ms  27.75ms   93.76%
    Req/Sec     1.00k   141.86     1.13k    88.08%
  119578 requests in 1.00m, 504.96MB read
Requests/sec:   1992.20
Transfer/sec:      8.41MB
 15:14:31 up 40 min,  2 users,  load average: 1.43, 1.00, 0.61


#############################################
> stopping benchmark server & kong
> left openresty processes:
> left wrk processes:

kong-run-2-worker.out

> worker_cnt=2
> applying configuration to /etc/kong/kong.conf


#############################################
> cleanup things just in case
> cleanup any wrk process: 
> cleanup any openresty process: 
> cleanup any nginx process: 
> ensure a clone of incubator-apisix is available


#############################################
> starting benchmark server
> adjusting server workers to 4
> openresty processes:
2339 nginx: master process /usr/local/openresty/bin/openresty -p /home/centos/incubator-apisix/benchmark/server
2340 nginx: worker process                                                               
2341 nginx: worker process                                                               
2342 nginx: worker process                                                               
2343 nginx: worker process                                                               
> curling server:
HTTP/1.1 200 OK
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:14:34 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive

123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
6789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890> running performance tests:
Running 1m test @ http://127.0.0.1:80/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    84.86us   49.21us   8.41ms   91.63%
    Req/Sec    69.74k     6.01k   84.83k    65.97%
  8339220 requests in 1.00m, 32.42GB read
Requests/sec: 138755.04
Transfer/sec:    552.33MB
 15:15:34 up 41 min,  2 users,  load average: 3.50, 1.65, 0.85


#############################################
kong: 2 worker + 1 upstream + no plugin
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
> nginx processes:
2386 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2396 nginx: worker process                                                 
2397 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:15:39 GMT
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/2.0.4

123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
6789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   648.25us  462.78us   9.90ms   79.43%
    Req/Sec    13.15k     2.61k   20.57k    69.75%
  1569932 requests in 1.00m, 6.23GB read
Requests/sec:  26165.35
Transfer/sec:    106.27MB
 15:16:39 up 42 min,  2 users,  load average: 3.03, 1.84, 0.97


#############################################
kong: 2 worker + 1 upstream + 2 plugins (limit-count + prometheus)
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
  plugins:
  - name: rate-limiting
    config:
      hour: 999999999999
      policy: local
  - name: prometheus
> nginx processes:
2444 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2453 nginx: worker process                                                 
2454 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:16:43 GMT
X-RateLimit-Limit-Hour: 999999999999
RateLimit-Limit: 999999999999
X-RateLimit-Remaining-Hour: 999999999998
RateLimit-Remaining: 999999999998
RateLimit-Reset: 2597
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/2.0.4

123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
6789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.75ms    2.85ms  26.57ms   68.83%
    Req/Sec     1.73k   485.52     2.84k    65.00%
  207064 requests in 1.00m, 0.85GB read
Requests/sec:   3450.30
Transfer/sec:     14.57MB
 15:17:43 up 43 min,  2 users,  load average: 2.18, 1.82, 1.02


#############################################
> stopping benchmark server & kong
> left openresty processes:
> left wrk processes:

kong-run-4-worker.out

> worker_cnt=4
> applying configuration to /etc/kong/kong.conf


#############################################
> cleanup things just in case
> cleanup any wrk process: 
> cleanup any openresty process: 
> cleanup any nginx process: 
> ensure a clone of incubator-apisix is available


#############################################
> starting benchmark server
> adjusting server workers to 4
> openresty processes:
2496 nginx: master process /usr/local/openresty/bin/openresty -p /home/centos/incubator-apisix/benchmark/server
2497 nginx: worker process                                                               
2498 nginx: worker process                                                               
2499 nginx: worker process                                                               
2500 nginx: worker process                                                               
> curling server:
HTTP/1.1 200 OK
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:17:46 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive

123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
6789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890> running performance tests:
Running 1m test @ http://127.0.0.1:80/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    81.63us   34.61us   2.46ms   69.62%
    Req/Sec    72.40k     6.37k   85.67k    64.75%
  8645846 requests in 1.00m, 33.61GB read
Requests/sec: 144095.81
Transfer/sec:    573.59MB
 15:18:46 up 44 min,  2 users,  load average: 4.21, 2.48, 1.30


#############################################
kong: 4 worker + 1 upstream + no plugin
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
> nginx processes:
2542 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2552 nginx: worker process                                                 
2553 nginx: worker process                                                 
2554 nginx: worker process                                                 
2555 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:18:51 GMT
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 0
Via: kong/2.0.4

123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
6789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   400.60us  358.62us  11.00ms   90.56%
    Req/Sec    22.36k     4.06k   32.62k    69.75%
  2669115 requests in 1.00m, 10.59GB read
Requests/sec:  44483.59
Transfer/sec:    180.68MB
 15:19:51 up 45 min,  2 users,  load average: 5.08, 3.03, 1.57


#############################################
kong: 4 worker + 1 upstream + 2 plugins (limit-count + prometheus)
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
  plugins:
  - name: rate-limiting
    config:
      hour: 999999999999
      policy: local
  - name: prometheus
> nginx processes:
2604 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2613 nginx: worker process                                                 
2614 nginx: worker process                                                 
2615 nginx: worker process                                                 
2616 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:19:55 GMT
X-RateLimit-Limit-Hour: 999999999999
RateLimit-Limit: 999999999999
X-RateLimit-Remaining-Hour: 999999999998
RateLimit-Remaining: 999999999998
RateLimit-Reset: 2405
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 2
Via: kong/2.0.4

> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.65ms    1.79ms  20.89ms   74.32%
    Req/Sec     3.19k   786.40     5.93k    68.11%
  381803 requests in 1.00m, 1.57GB read
Requests/sec:   6352.66
Transfer/sec:     26.83MB
 15:20:55 up 46 min,  2 users,  load average: 4.41, 3.20, 1.72


#############################################
> stopping benchmark server & kong
> left openresty processes:
> left wrk processes:

@cboitel

cboitel commented Jun 24, 2020

Tomorrow I will post similar results for APISIX 1.3.0 installed from the RPM as well (adapting my scripts/... accordingly).
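
For reference, roughly the shape of the wrapper I plan to adapt for that run. The route payload, the default admin key, and the wrk parameters are copied from the runs already posted in this thread; the RPM file name and the install step are assumptions to adjust for the actual 1.3.0 package.

```bash
#!/usr/bin/env bash
# Sketch of the benchmark wrapper to adapt for the RPM-installed APISIX 1.3.0 run.
# Assumptions: the RPM file name below is a placeholder, and the same benchmark
# upstream used above is already listening on 127.0.0.1:80.
set -euo pipefail

sudo yum install -y ./apisix-1.3.0*.rpm   # placeholder package name
sudo apisix start
sleep 1

# Same route + 2 plugins (limit-count + prometheus) as in the runs above,
# using the default admin key.
curl http://127.0.0.1:9080/apisix/admin/routes/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": { "127.0.0.1:80": 1 }
    }
}'
sleep 1

# Two short runs with the same wrk parameters as above.
wrk -d 5 -c 16 http://127.0.0.1:9080/hello
sleep 1
wrk -d 5 -c 16 http://127.0.0.1:9080/hello

sudo apisix stop
```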

Hope it helps.

@membphis
Author

Here are my results:

platform: aliyun cloud, 8 vCPU 32 GiB ecs.hfg5.2xlarge
apisix version: apache/apisix@492fa71

# 1 worker

apisix: 1 worker + 1 upstream + no plugin
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":6,"key":"\/apisix\/routes\/1","modifiedIndex":6},"prevNode":{"value":"{\"priority\":0,\"plugins\":{\"limit-count\":{\"time_window\":60,\"count\":2000000000000,\"rejected_code\":503,\"key\":\"remote_addr\",\"policy\":\"local\"},\"prometheus\":{}},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":5,"key":"\/apisix\/routes\/1","modifiedIndex":5},"action":"set"}
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   692.12us  117.12us   4.72ms   89.93%
    Req/Sec    11.60k   350.91    12.10k    85.29%
  117717 requests in 5.10s, 470.15MB read
Requests/sec:  23082.99
Transfer/sec:     92.19MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   681.95us   96.32us   2.60ms   89.34%
    Req/Sec    11.76k   138.81    12.07k    73.53%
  119368 requests in 5.10s, 476.75MB read
Requests/sec:  23407.17
Transfer/sec:     93.49MB
+ sleep 1
+ echo -e '\n\napisix: 1 worker + 1 upstream + 2 plugins (limit-count + prometheus)'


apisix: 1 worker + 1 upstream + 2 plugins (limit-count + prometheus)
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{"limit-count":{"time_window":60,"count":2000000000000,"rejected_code":503,"key":"remote_addr","policy":"local"},"prometheus":{}},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":7,"key":"\/apisix\/routes\/1","modifiedIndex":7},"prevNode":{"value":"{\"priority\":0,\"plugins\":{},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":6,"key":"\/apisix\/routes\/1","modifiedIndex":6},"action":"set"}
+ sleep 3
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.86ms  162.46us   7.24ms   88.76%
    Req/Sec     9.33k     1.17k   19.40k    93.07%
  93769 requests in 5.10s, 380.95MB read
Requests/sec:  18389.21
Transfer/sec:     74.71MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   845.31us  144.46us   4.37ms   91.35%
    Req/Sec     9.50k   281.09     9.81k    90.20%
  96473 requests in 5.10s, 391.94MB read
Requests/sec:  18916.99
Transfer/sec:     76.85MB
+ sleep 1
+ make stop
/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf -s stop
+ echo -e '\n\nfake empty apisix server: 1 worker'


fake empty apisix server: 1 worker
+ sleep 1
+ sed -i 's/worker_processes [0-9]*/worker_processes 1/g' benchmark/fake-apisix/conf/nginx.conf
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   498.02us   57.61us   2.61ms   89.51%
    Req/Sec    16.09k   619.64    16.87k    85.29%
  163367 requests in 5.10s, 650.14MB read
Requests/sec:  32033.32
Transfer/sec:    127.48MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   492.73us   57.29us   2.35ms   88.62%
    Req/Sec    16.26k   385.65    17.01k    87.25%
  165027 requests in 5.10s, 656.75MB read
Requests/sec:  32360.46
Transfer/sec:    128.78MB
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix -s stop
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/server -s stop
# 4 workers

apisix: 4 worker + 1 upstream + no plugin
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":8,"key":"\/apisix\/routes\/1","modifiedIndex":8},"prevNode":{"value":"{\"priority\":0,\"plugins\":{\"limit-count\":{\"time_window\":60,\"count\":2000000000000,\"rejected_code\":503,\"key\":\"remote_addr\",\"policy\":\"local\"},\"prometheus\":{}},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":7,"key":"\/apisix\/routes\/1","modifiedIndex":7},"action":"set"}
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   286.87us  174.65us   3.00ms   72.00%
    Req/Sec    28.68k     2.94k   35.21k    67.65%
  290908 requests in 5.10s, 1.13GB read
Requests/sec:  57042.36
Transfer/sec:    227.82MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   284.10us  177.81us   4.64ms   76.41%
    Req/Sec    28.94k     2.67k   33.68k    67.65%
  293746 requests in 5.10s, 1.15GB read
Requests/sec:  57598.31
Transfer/sec:    230.04MB
+ sleep 1
+ echo -e '\n\napisix: 4 worker + 1 upstream + 2 plugins (limit-count + prometheus)'


apisix: 4 worker + 1 upstream + 2 plugins (limit-count + prometheus)
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{"limit-count":{"time_window":60,"count":2000000000000,"rejected_code":503,"key":"remote_addr","policy":"local"},"prometheus":{}},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":9,"key":"\/apisix\/routes\/1","modifiedIndex":9},"prevNode":{"value":"{\"priority\":0,\"plugins\":{},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":8,"key":"\/apisix\/routes\/1","modifiedIndex":8},"action":"set"}
+ sleep 3
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   342.74us  220.15us   5.38ms   75.84%
    Req/Sec    24.15k     2.44k   28.39k    72.55%
  245033 requests in 5.10s, 0.97GB read
Requests/sec:  48046.94
Transfer/sec:    195.20MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   352.29us  223.55us   3.37ms   70.11%
    Req/Sec    23.51k     2.47k   28.16k    69.61%
  238538 requests in 5.10s, 0.95GB read
Requests/sec:  46777.18
Transfer/sec:    190.04MB
+ sleep 1
+ make stop
/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf -s stop
+ echo -e '\n\nfake empty apisix server: 4 worker'


fake empty apisix server: 4 worker
+ sleep 1
+ sed -i 's/worker_processes [0-9]*/worker_processes 4/g' benchmark/fake-apisix/conf/nginx.conf
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   154.62us  104.55us   4.60ms   98.92%
    Req/Sec    51.77k     1.05k   53.74k    71.57%
  525323 requests in 5.10s, 2.04GB read
Requests/sec: 103004.49
Transfer/sec:    409.92MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   152.16us   63.87us   3.56ms   93.89%
    Req/Sec    51.84k   795.29    53.94k    69.61%
  525822 requests in 5.10s, 2.04GB read
Requests/sec: 103113.01
Transfer/sec:    410.35MB
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix -s stop
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/server -s stop
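
For easier comparison, here are the same numbers collected into a table (taken directly from the wrk output above; each scenario was run twice):

| Scenario | Workers | Requests/sec (run 1 / run 2) | Avg latency (run 1 / run 2) |
| --- | --- | --- | --- |
| APISIX, 1 upstream, no plugin | 1 | 23082.99 / 23407.17 | 692.12us / 681.95us |
| APISIX, 1 upstream, limit-count + prometheus | 1 | 18389.21 / 18916.99 | 0.86ms / 845.31us |
| fake empty APISIX server | 1 | 32033.32 / 32360.46 | 498.02us / 492.73us |
| APISIX, 1 upstream, no plugin | 4 | 57042.36 / 57598.31 | 286.87us / 284.10us |
| APISIX, 1 upstream, limit-count + prometheus | 4 | 48046.94 / 46777.18 | 342.74us / 352.29us |
| fake empty APISIX server | 4 | 103004.49 / 103113.01 | 154.62us / 152.16us |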

@membphis
Author

@cboitel I think we can copy these results to an APISIX issue: https://github.com/apache/incubator-apisix/issues

Then I will submit a new PR with optimizations.

@leeonfu

leeonfu commented Mar 2, 2022

y

@Dhruv-Garg79

@cboitel Have you used any of these since your benchmarks? What would you recommend for someone looking to adopt one of them?
I am interested in a simple nginx use case plus auth, with low latency and throughput of up to 3000 qps in the long run.
