@kingluo
Last active April 9, 2023 15:12
APISIX limit-req plugin test

rate limit policy

  • [0, rate]: pass
  • (rate, burst]: delay (sleep)
  • (burst, ∞): reject
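The policy above can be sketched as a tiny decision function. This is a toy model only: the real plugin delegates to a leaky-bucket implementation, and `rate`/`burst` are treated here as flat requests-per-second thresholds exactly as in the list above.

```javascript
// Toy model of the limit-req decision for an observed requests-per-second
// value; the names and the flat-threshold simplification are assumptions.
function decide(observedRate, rate, burst) {
  if (observedRate <= rate) return "pass";   // [0, rate]: forward immediately
  if (observedRate <= burst) return "delay"; // (rate, burst]: sleep the request
  return "reject";                           // (burst, ∞): answer rejected_code
}
```

With the values used in the route below (`rate=3000`, `burst=5000`), an observed rate of 4000 req/s is delayed rather than rejected.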

Test

Configure a fake upstream that points back to APISIX itself, and compute the actual success rate in a serverless plugin.

Compare that measured rate with the benchmark metrics reported by client tools, e.g. ab and wrk.

curl http://127.0.0.1:9180/apisix/admin/routes/test_upstream \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/test_upstream",
    "plugins": {
        "serverless-pre-function": {
            "phase": "rewrite",
            "functions" : ["
            local info = {ts=ngx.time(),cnt=0};
            return function()
                local cur = ngx.time();
                if cur - info.ts >= 1 then
                    ngx.log(ngx.WARN, \"rate=\", info.cnt/(cur-info.ts))
                    info.ts = cur
                    info.cnt = 0
                else
                    info.cnt = info.cnt + 1
                end
                ngx.say(\"ok\");
                return ngx.exit(200);
            end"]
        }
    }
}'
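The rate-logging logic inside the serverless function can be modelled outside nginx like this. This is a sketch: `ngx.time()` is replaced by an injected clock so the counter can be exercised directly; note that, as in the original Lua, the request that closes a one-second window is not counted in either window.

```javascript
// Stand-alone model of the per-second rate counter from the serverless
// function above; `now` stands in for ngx.time() and `log` for ngx.log().
function makeRateCounter(now) {
  const info = { ts: now(), cnt: 0 };
  return function onRequest(log) {
    const cur = now();
    if (cur - info.ts >= 1) {
      log(info.cnt / (cur - info.ts)); // requests per second over the window
      info.ts = cur;
      info.cnt = 0;
    } else {
      info.cnt += 1;
    }
  };
}
```

Five requests at t=0 followed by one at t=1 log a rate of 5, mirroring the `rate=...` lines in error.log below.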

curl http://127.0.0.1:9180/apisix/admin/routes/test_limit_req \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/test_limit_req",
    "plugins": {
        "proxy-rewrite": {
            "uri": "/test_upstream"
        },
        "limit-req": {
            "rate": 3000,
            "burst": 5000,
            "rejected_code": 503,
            "key_type": "var",
            "key": "remote_addr"
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:9080": 1
        }
    }
}'

ab -n 30000 -c 30 http://localhost:9080/test_limit_req
...
Server Software:        APISIX/3.1.0
Server Hostname:        localhost
Server Port:            9080

Document Path:          /test_limit_req
Document Length:        3 bytes

Concurrency Level:      30
Time taken for tests:   10.141 seconds
Complete requests:      30000
Failed requests:        0
Total transferred:      4230000 bytes
HTML transferred:       90000 bytes
Requests per second:    2958.34 [#/sec] (mean)
Time per request:       10.141 [ms] (mean)
Time per request:       0.338 [ms] (mean, across all concurrent requests)
Transfer rate:          407.35 [Kbytes/sec] received


./wrk -t10 -c100 -d30s http://localhost:9080/test_limit_req
Running 30s test @ http://localhost:9080/test_limit_req
  10 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    33.30ms    2.16ms  61.44ms   94.88%
    Req/Sec   301.50     10.05   460.00     84.97%
  90089 requests in 30.03s, 15.81MB read
Requests/sec:   2999.99
Transfer/sec:    539.06KB
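The wrk numbers are internally consistent: with 100 connections kept in flight against a ~3000 req/s ceiling, Little's law predicts the observed average latency. This is a sanity check on the figures above, not part of the original test.

```javascript
// Little's law: in-flight requests = throughput × latency, so the
// delay-based throttling forces mean latency to about connections / rate.
const connections = 100;   // wrk -c100
const reqPerSec = 2999.99; // wrk "Requests/sec"
const predictedLatencyMs = (connections / reqPerSec) * 1000;
// ≈ 33.33 ms, matching the measured 33.30 ms average latency
```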

error.log

2023/02/09 18:48:35 [warn] 2308626#2308626: *148088 [lua] [string "..."]:6: func(): rate=3002, client: ::1, server: _, request: "GET /test_upstream HTTP/1.1", host: "localhost:9080"
2023/02/09 18:48:36 [warn] 2308626#2308626: *148500 [lua] [string "..."]:6: func(): rate=3002, client: ::1, server: _, request: "GET /test_upstream HTTP/1.1", host: "localhost:9080"
2023/02/09 18:48:37 [warn] 2308626#2308626: *148489 [lua] [string "..."]:6: func(): rate=2996, client: ::1, server: _, request: "GET /test_upstream HTTP/1.1", host: "localhost:9080"
2023/02/09 18:48:38 [warn] 2308626#2308626: *148120 [lua] [string "..."]:6: func(): rate=2999, client: ::1, server: _, request: "GET /test_upstream HTTP/1.1", host: "localhost:9080"
2023/02/09 18:48:39 [warn] 2308626#2308626: *148119 [lua] [string "..."]:6: func(): rate=2998, client: ::1, server: _, request: "GET /test_upstream HTTP/1.1", host: "localhost:9080"
2023/02/09 18:48:40 [warn] 2308626#2308626: *148113 [lua] [string "..."]:6: func(): rate=3000, client: ::1, server: _, request: "GET /test_upstream HTTP/1.1", host: "localhost:9080"

k6

It's recommended to use k6.

An extensible load testing tool built for developer happiness

It's a JavaScript benchmark framework, which makes it easy to define custom metrics that reflect only the success rate.

Note that I suggest installing earlyoom to avoid freezing the system, since k6 allocates a lot of memory when run with a big number of virtual users.

/opt/k6_httpbin_simple.js

import http from 'k6/http';
import { sleep } from 'k6';
import { Counter } from 'k6/metrics';

const successRate = new Counter('success_rate');

export default function () {
  //http.get('https://server.apisix.dev:9443/get');
  const resp = http.get('http://server.apisix.dev:9080/get');
  if (resp.status === 200) {
      successRate.add(1);
  }
  //sleep(0.03);
}
# "c" is presumably a shell alias for the authenticated admin-API curl used above
c put /routes/1 -d '{
    "uri":"/*",
    "plugins": {
        "serverless-pre-function": {
            "phase": "before_proxy",
            "functions": [
                "return function(conf,ctx)
                    ngx.say(\"bypass\")
                    return ngx.exit(ngx.HTTP_OK)
                end"
            ]
        },
        "limit-req": {
            "rate": 100,
            "burst": 300,
            "rejected_code": 503,
            "key_type": "var",
            "key": "remote_addr"
        }
    }
}'

k6 run -u 3000 -d 20s /opt/k6_httpbin_simple.js

From the output, the success_rate counter is 99.348238/s, close to the configured rate of 100, while the total request rate, including the failed (503) requests, is 10451.173784/s. The output matches the rate limit.
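Those two numbers can be checked against the route config directly (arithmetic only, using the figures reported above):

```javascript
// The success_rate counter should sit near the configured rate of 100/s,
// while the total offered load (503s included) is far beyond burst.
const configuredRate = 100;       // "rate" in the route above
const successPerSec = 99.348238;  // k6 custom success_rate counter
const totalPerSec = 10451.173784; // total request rate, 503s included
const rejectedShare = 1 - successPerSec / totalPerSec; // ≈ 0.99 rejected
```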


kingluo commented Feb 16, 2023

APISIX's per-second rate is roughly the sum of what all workers report each second. burst is set to 5000, and the wrk parameters keep the load under 5000, so no 503 errors are received and the rate stays within 5000. How limit-req works: for each key, e.g. remote_addr here, it keeps a record in shared memory tracking the actual rate. If the current rate exceeds rate but is below burst, it sleeps the current request so it returns more slowly, achieving the throttling effect; if the current rate exceeds burst, it returns 503.

APISIX limit-req limits the rate, not the number of requests, so the outcome depends on the traffic the client sends: heavy traffic will exceed rate, and what APISIX guarantees is that requests in the (rate, burst] interval are slept, keeping the rate below burst as much as possible. If 503s are received, the figures reported by ab/wrk are inaccurate and will exceed both rate and burst, because they include the quickly returned 503 responses. You therefore have to exclude the 503s on the client side to see the real rate. That is why the rate measured at the upstream is the true one: 503s are returned directly by APISIX and never reach the upstream, so the upstream always sees the post-limit traffic.
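The point in the comment above reduces to one line of arithmetic (names are mine; the figures are illustrative):

```javascript
// ab/wrk measure ok + rejected responses, so their throughput overshoots
// the limit; the upstream only ever sees the ok responses, because the
// 503s are returned by APISIX itself.
function measuredRates(okPerSec, rejectedPerSec) {
  return {
    client: okPerSec + rejectedPerSec, // what ab/wrk report
    upstream: okPerSec,                // post-limit traffic only
  };
}
```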
