Rate limiting with HAProxy

Introduction

HAProxy is primarily a load balancer and proxy for TCP and HTTP, but it can also act as a traffic regulator. By maintaining a wide variety of statistics (per IP, URL, or cookie), it can be used as a protection against DDoS attacks and service abuse: when abuse is detected, actions such as denying the request or redirecting it to another backend can be taken.

The various HAProxy configuration files are in this repository: https://github.com/procrastinatio/haproxy-docker/tree/more_examples

Simple application

Simple haproxy.cfg proxying everything to a single backend, mf-chsdi3.int.bgdi.ch, only setting a custom header:

defaults
  mode http
  timeout connect 4000ms
  timeout client 50000ms
  timeout server 50000ms

  stats enable
  stats refresh 5s
  stats show-node
  stats uri  /stats/haproxy

global
  log 127.0.0.1:1514 local0 debug
  user haproxy
  group haproxy

frontend fe_app
  log global
  log-format "%ci:%cp [%t] %ft %b/%s %Tq/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs {%[ssl_c_verify],%{+Q}[ssl_c_s_dn],%{+Q}[ssl_c_i_dn]} %{+Q}r"

  reqadd Referer:\ http://zorba.geo.admin.ch

  bind 0.0.0.0:8080 name http
 
  default_backend be_app_stable

backend be_app_stable
  log global
  http-request replace-value Host .* mf-chsdi3.int.bgdi.ch
  server upstream_server mf-chsdi3.int.bgdi.ch:80 maxconn 512

Source IP based limit

In this example, we modify the config to track overenthusiastic users by their source IP.

https://blog.serverfault.com/2010/08/26/1016491873/

  • New backend be_429_slow_down, which responds after 2s with HTTP status code 429
  • If a source IP makes more than 3 requests within a period of 10 seconds, it is labeled as abusive
  • The user has to stay quiet for 30 seconds to be unbanned

See also better rate limiting


   frontend fe_app
       [...]
   
       # table used to store behaviour of source IPs (type is ip)
       stick-table type ip size 200k expire 30s store  gpc0,conn_rate(10s),http_req_rate(10s)
   
       # IPs that have gpc0 > 0 are blocked until they go away for at least 30 seconds
       acl source_is_abuser src_get_gpc0 gt 0
       # Instead of redirecting to slowing down backend, we may also reject any request
       #tcp-request connection reject if source_is_abuser
   
       # connection rate abuses get blocked  (3 requests in 10s, then blocked for
       # 30s)
       acl conn_rate_abuse  sc1_conn_rate gt 3
       acl mark_as_abuser   sc1_inc_gpc0  ge 0
       tcp-request connection track-sc1 src
       # Same as above: we are nice and do not reject the requests,
       # but they still count as accesses, so the counter is not reset
       #tcp-request connection reject if conn_rate_abuse mark_as_abuser
   
   
       reqadd Referer:\ http://zorba.geo.admin.ch
   
       
       use_backend be_429_slow_down if conn_rate_abuse mark_as_abuser source_is_abuser

   
       bind 0.0.0.0:8080 name http
    
       default_backend be_app_stable
            
   backend be_429_slow_down
       
       timeout tarpit 2s
       errorfile 500 /var/local/429.http
       http-request tarpit
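
Note that errorfile expects a complete, raw HTTP response stored on disk, which HAProxy sends verbatim. A minimal sketch of what /var/local/429.http might contain (the status line in the file, not HAProxy's internal 500 status, is what the client actually sees):

```
HTTP/1.0 429 Too Many Requests
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>429 Too Many Requests</h1></body></html>
```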

The important line is:

stick-table type ip size 200k expire 30s store  gpc0,conn_rate(10s),http_req_rate(10s)

It declares a table of type ip holding up to 200'000 entries. Each entry consists of a source IP (about 50 bytes) plus the connection rate and the HTTP request rate (about 12 bytes each), so the table will use at most 200'000 ⋅ 74 bytes, roughly 14.8 MB of memory. An entry is deleted after 30 seconds unless it is updated again in the meantime. The argument to conn_rate and http_req_rate is the period over which the average is computed (here 10 seconds); as a rule of thumb, expire should be at least twice the longest rate period, for a smooth average.
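The arithmetic can be checked quickly in the shell, using the approximate per-entry sizes given above:

```shell
# Stick-table memory footprint, back of the envelope:
# ~50 bytes for the ip entry itself, ~12 bytes for each of the
# two stored rates (conn_rate and http_req_rate).
entries=200000
entry_bytes=$((50 + 12 + 12))
total=$((entries * entry_bytes))
echo "${total} bytes"   # 14800000 bytes, roughly 14.8 MB
```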

Testing with one request every 4 seconds:

for i in {0..10}; do sleep 4; curl  -s -o /dev/null  -w "%{http_code}\n" "http://localhost:8080/rest/services/height?easting=600000&northing=200000" | ts ; done

Oct 15 19:00:23 200
Oct 15 19:00:27 200
Oct 15 19:00:31 200
Oct 15 19:00:35 200
Oct 15 19:00:39 200
Oct 15 19:00:43 200
Oct 15 19:00:47 200
Oct 15 19:00:51 200
Oct 15 19:00:55 200
Oct 15 19:00:59 200
Oct 15 19:01:03 200

With one request per second in two parallel threads, the fourth and all subsequent requests are rejected with 429:

seq 30 | parallel  -n0 -j2 "sleep 1; curl -s -o \/dev\/null -w \"%{http_code}\n\"  'http://localhost:8080/rest/services/height?easting=600000&northing=200000' | ts"
 
Oct 15 21:28:54 200
Oct 15 21:28:54 200
Oct 15 21:28:55 200
Oct 15 21:28:57 429
Oct 15 21:28:58 429
Oct 15 21:29:00 429
Oct 15 21:29:01 429
Oct 15 21:29:03 429
Oct 15 21:29:04 429
Oct 15 21:29:06 429
Oct 15 21:29:07 429
Oct 15 21:29:09 429
Oct 15 21:29:10 429
Oct 15 21:29:12 429

Log entries:

2017/10/11 12:21:01 <local0,info> 172.17.0.1:51974 [11/Oct/2017:12:21:01.485] fe_app blocked-proxy-ua/<NOSRV> 0/-1/-1/-1/0 503 444 - - SC-- 0/0/0/0/0 0/0 {-,"",""} "HEAD / HTTP/1.1"
2017/10/11 12:24:53 <local0,info> 172.17.0.1:51978 [11/Oct/2017:12:24:53.497] fe_app be_app_stable/upstream_server 0/0/1/3/4 200 511 - - ---- 1/1/0/1/0 0/0 {-,"",""} "HEAD / HTTP/1.1"



seq 100 | parallel  -n0 -j4 "curl -s -o \/dev\/null -w \"%{http_code}\n\" -X POST --data @profile.json 'http://localhost:8080/rest/services/profile.json'"

The same goes for all requests:


    seq 500 | parallel  -n0 -j4 "curl -s -o \/dev\/null -w \"%{http_code}\n\" -X POST --data @profile.json 'http://localhost:8080/rest/services/profile.json'"
    seq 500 | parallel  -n0 -j12 "curl -s -o \/dev\/null -w \"%{http_code}\n\" -X POST --data @profile.json 'http://localhost:8080/rest/services/profile.json'"

Backend rate limit

Another way to limit traffic is to define, per backend, the session rate and number of connections it may accept.


backend be_app_stable
  acl too_fast be_sess_rate gt 10
  acl too_many be_conn gt 10
  tcp-request inspect-delay 3s
  tcp-request content accept if ! too_fast or ! too_many
  tcp-request content accept if WAIT_END

With this setting, 10 concurrent connections are still allowed (response time is about 7-8 ms):

ab -n 500  -c 10  'http://localhost:8080/rest/services/height?easting=600000&northing=200000'


Server Software:        Apache/2.2.22
Server Hostname:        localhost
Server Port:            8080

Document Path:          /rest/services/height?easting=600000&northing=200000
Document Length:        18 bytes

Concurrency Level:      10
Time taken for tests:   0.550 seconds
Complete requests:      500
Failed requests:        0
Total transferred:      346280 bytes
HTML transferred:       9000 bytes
Requests per second:    908.99 [#/sec] (mean)
Time per request:       11.001 [ms] (mean)
Time per request:       1.100 [ms] (mean, across all concurrent requests)
Transfer rate:          614.78 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0       2
Processing:     5   10  11.0      8     116
Waiting:        5    9  11.0      7     116
Total:          5   10  11.0      8     116

Percentage of the requests served within a certain time (ms)
  50%      8
  66%      9
  75%     10
  80%     11
  90%     13
  95%     15
  98%     18
  99%    104
 100%    116 (longest request)

Going up to 12 concurrent connections, requests over the limit are held back by the 3 second inspect-delay, and throughput collapses:

ab -n 500  -c 12  'http://localhost:8080/rest/services/height?easting=600000&northing=200000'
 
 
Server Software:        Apache/2.2.22
Server Hostname:        localhost
Server Port:            8080

Document Path:          /rest/services/height?easting=600000&northing=200000
Document Length:        18 bytes

Concurrency Level:      12
Time taken for tests:   54.715 seconds
Complete requests:      500
Failed requests:        0
Total transferred:      345500 bytes
HTML transferred:       9000 bytes
Requests per second:    9.14 [#/sec] (mean)
Time per request:       1313.171 [ms] (mean)
Time per request:       109.431 [ms] (mean, across all concurrent requests)
Transfer rate:          6.17 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0       2
Processing:     4 1313 1489.5     27    3186
Waiting:        4 1312 1489.6     27    3186
Total:          4 1313 1489.5     27    3186

Percentage of the requests served within a certain time (ms)
  50%     27
  66%   3006
  75%   3007
  80%   3008
  90%   3017
  95%   3042
  98%   3082
  99%   3118
 100%   3186 (longest request)

Using Headers

Sometimes HAProxy does not have access to the source IP of requests, because it sits behind a CDN or an AWS ELB. If these elements correctly set an X-Forwarded-For header, we can use that instead. Let's see.

Configuration file haproxy_x_forwarded_for.cfg

frontend fe_app
  [...]

  # Check restricted network
  acl restricted_network src 90.80.70.60 # Home network
  acl restricted_network hdr_ip(X-Forwarded-For) -f /etc/haproxy/acl_restricted_network

  use_backend blocked-proxy-ua if !restricted_network

In this case we restrict access to one IP address (90.80.70.60) or to requests originating from '1.2.3.4/32' or '5.6.7.8/32' (listed in acl_restricted_network).
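For completeness, the /etc/haproxy/acl_restricted_network file used above is just a plain text list with one address or CIDR network per line:

```
1.2.3.4/32
5.6.7.8/32
```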

Requesting from this machine is not allowed:

[root@ip-10-220-5-68] curl -I  http://localhost:8080
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

But faking the X-Forwarded-For header makes it a valid request:

[root@ip-10-220-5-68] curl -I -H "X-Forwarded-For: 1.2.3.4" http://localhost:8080
HTTP/1.1 200 OK
Access-Control-Allow-Headers: x-requested-with, Content-Type, origin, authorization, accept, client-security-token
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Origin: *
Age: 0

Rate limit based on header values

See https://github.com/dschneller/haproxy-http-based-rate-limiting/blob/master/haproxy.cfg

Configuration file haproxy_x_forwarded_for.cfg


  stick-table type string size 100k expire 30s store gpc0_rate(3s)
  acl document_request path_beg -i /rest/services/height
  acl too_many_uploads_by_user sc0_gpc0_rate() gt 2
  acl mark_seen sc0_inc_gpc0 gt 0
 

  tcp-request content track-sc0 hdr(X-Forwarded-For) if METH_GET document_request
  reqadd Referer:\ http://zorba.geo.admin.ch
 
  use_backend be_429_slow_down if mark_seen too_many_uploads_by_user 
 

In this case, we must use a stick-table of type string to store the X-Forwarded-For header (or is there a way to convert it to ip?). We also restrict the rate limiting to GET requests on paths beginning with /rest/services/height.
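Regarding the question above: HAProxy's hdr_ip() fetch (already used in the restricted-network example) extracts an IP address from a header, so an untested sketch with a table of type ip could look like:

```
  stick-table type ip size 100k expire 30s store gpc0_rate(3s)
  tcp-request content track-sc0 hdr_ip(X-Forwarded-For) if METH_GET document_request
```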

ab -n 500  -c 12  'http://localhost:8080/rest/services/height?easting=600000&northing=200000'



ab -n 500  -c 12  -H "X-Forwarded-For: 6.6.6.6"  'http://localhost:8080/rest/services/height?easting=600000&northing=200000'


for i in {0..10}; do sleep 2; curl  -s -o /dev/null -H "X-Forwarded-For: 1.2.3.4" -w "%{http_code}\n" "http://localhost:8080/rest/services/height?easting=600000&northing=200000"; done
200
200
200
200
200
200
200
200
200
200
200


for i in {0..10}; do curl  -s -o /dev/null -H "X-Forwarded-For: 1.2.3.4" -w "%{http_code}\n" "http://localhost:8080/rest/services/height?easting=600000&northing=200000"; done
200
200
429
429
429
200
200
429
429
429
200

ab -n 100  -c 4  -H "X-Forwarded-For: 6.6.6.6"  'http://localhost:8080/rest/services/height?easting=600000&northing=200000'

 
 
Server Hostname:        localhost
Server Port:            8080

Document Path:          /rest/services/height?easting=600000&northing=200000
Document Length:        18 bytes

Concurrency Level:      4
Time taken for tests:   50.104 seconds
Complete requests:      100
Failed requests:        98
   (Connect: 0, Receive: 0, Length: 98, Exceptions: 0)
Total transferred:      15500 bytes
HTML transferred:       36 bytes
Requests per second:    2.00 [#/sec] (mean)
Time per request:       2004.173 [ms] (mean)
Time per request:       501.043 [ms] (mean, across all concurrent requests)
Transfer rate:          0.30 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       1
Processing:     6 1964 281.0   2004    2006
Waiting:        6 1963 281.0   2004    2006
Total:          7 1964 280.9   2004    2006

Percentage of the requests served within a certain time (ms)
  50%   2004
  66%   2005
  75%   2005
  80%   2005
  90%   2005
  95%   2005
  98%   2005
  99%   2006
 100%   2006 (longest request)
 
 2017/10/11 17:29:22 <local0,info> 172.17.0.1:55866 [11/Oct/2017:17:29:22.777] fe_app be_app_stable/upstream_server 0/0/1/4/5 200 694 - - ---- 0/0/0/0/0 0/0 {-,"",""} "GET /rest/services/height?easting=600000&northing=200000 HTTP/1.0"
2017/10/11 17:29:22 <local0,info> 172.17.0.1:55872 [11/Oct/2017:17:29:22.783] fe_app be_app_stable/upstream_server 0/0/1/4/5 200 694 - - ---- 0/0/0/0/0 0/0 {-,"",""} "GET /rest/services/height?easting=600000&northing=200000 HTTP/1.0"
2017/10/11 17:29:24 <local0,info> 172.17.0.1:55878 [11/Oct/2017:17:29:22.789] fe_app be_429_slow_down/<NOSRV> -1/2003/-1/-1/2002 500 144 - - PT-- 0/0/0/0/3 0/0 {-,"",""} "GET /rest/services/height?easting=600000&northing=200000 HTTP/1.0"
2017/10/11 17:29:26 <local0,info> 172.17.0.1:55882 [11/Oct/2017:17:29:24.792] fe_app be_429_slow_down/<NOSRV> -1/2003/-1/-1/2002 500 144 - - PT-- 0/0/0/0/3 0/0 {-,"",""} "GET /rest/services/height?easting=600000&northing=200000 HTTP/1.0"
2017/10/11 17:29:28 <local0,info> 172.17.0.1:55886 [11/Oct/2017:17:29:26.795] fe_app be_429_slow_down/<NOSRV> -1/2002/-1/-1/2001 500 144 - - PT-- 0/0/0/0/3 0/0 {-,"",""} "GET /rest/services/height?easting=600000&northing=200000 HTTP/1.0"
2017/10/11 17:29:28 <local0,info> 172.17.0.1:55890 [11/Oct/2017:17:29:28.797] fe_app be_app_stable/upstream_server 0/0/1/3/4 200 694 - - ---- 0/0/0/0/0 0/0 {-,"",""} "GET /rest/services/height?easting=600000&northing=200000 HTTP/1.0"
2017/10/11 17:29:28 <local0,info> 172.17.0.1:55896 [11/Oct/2017:17:29:28.802] fe_app be_app_stable/upstream_server 0/0/1/4/5 200 694 - - ---- 0/0/0/0/0 0/0 {-,"",""} "GET /rest/services/height?easting=600000&northing=200000 HTTP/1.0"
2017/10/11 17:29:30 <local0,info> 172.17.0.1:55902 [11/Oct/2017:17:29:28.808] fe_app be_429_slow_down/<NOSRV> -1/2003/-1/-1/2002 500 144 - - PT-- 0/0/0/0/3 0/0 {-,"",""} "GET /rest/services/height?easting=600000&northing=200000 HTTP/1.0"
2017/10/11 17:29:32 <local0,info> 172.17.0.1:55906 [11/Oct/2017:17:29:30.811] fe_app be_429_slow_down/<NOSRV> -1/2003/-1/-1/2002 500 144 - - PT-- 0/0/0/0/3 0/0 {-,"",""} "GET /rest/services/height?easting=600000&northing=200000 HTTP/1.0"
2017/10/11 17:29:33 <local0,info> 172.17.0.1:55910 [11/Oct/2017:17:29:32.814] fe_app be_429_slow_down/<NOSRV> -1/891/-1/-1/891 500 144 - - PT-- 0/0/0/0/3 0/0 {-,"",""} "GET /rest/services/height?easting=600000&northing=200000 HTTP/1.0"

GeoIP with source IP/Header

We may try to get the MaxMind GeoIP database (legacy format) and use it for geolocation with HAProxy.

Using whitelist/blacklist

We define a hypothetical whitelist with a single network (a Bluewin IP):

$ cat CH.txt 
85.5.53.0/21

and in the haproxy.cfg

    acl acl_CH src -f /usr/local/etc/haproxy/CH.txt
    http-request deny if acl_CH

Let's test it.

From home, checking that my IP really is 85.5.53.104:

marco at ultrabook in ~
$ curl ifconfig.co
85.5.53.104

it will be denied:

marco at ultrabook in ~
$ curl -I "http://haproxy.dubious.cloud/rest/services/height?easting=600000&northing=200000"
HTTP/1.0 403 Forbidden                                                                                                                                                                       
Cache-Control: no-cache                                                                                                                                                                      
Connection: close                                                                                                                                                                            
Content-Type: text/html      

And from docker0, which is an AWS instance in Ireland, everything is fine:

2d [ltmom@ip-10-220-4-246:~] $ curl ifconfig.co
54.194.151.117



2d [ltmom@ip-10-220-4-246:~] $ curl -I "http://haproxy.dubious.cloud/rest/services/height?easting=600000&northing=200000"
HTTP/1.1 200 OK
Access-Control-Allow-Headers: x-requested-with, Content-Type, origin, authorization, accept, client-security-token
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Origin: *
Age: 0

Checking an entry in the GeoIP database:

   # /usr/bin/mmdblookup --file GeoLite2-Country.mmdb --ip 52.48.106.118 registered_country names en
   "Ireland" <utf8_string>

Blocking whole countries

Considering a geoip.txt file such as:

$ head geoip.txt 
1.0.0.0/24 AU
1.0.1.0/24 CN
1.0.2.0/23 CN
1.0.4.0/22 AU
1.0.8.0/21 CN
1.0.16.0/20 JP
1.0.32.0/19 CN
1.0.64.0/18 JP
1.0.128.0/17 TH
1.1.0.0/24 CN

We may define the following ACL, allowing only Iran, sorry, Ireland and Switzerland:

# For source ip 
acl acl_geoloc_ch_ie src,map_ip(/usr/local/etc/haproxy/geoip.txt) -m reg -i (IE|CH)
# For x-forwarded-for
acl acl_geoloc_ch_ie hdr(X-Forwarded-For),map_ip(/usr/local/etc/haproxy/geoip.txt) -m reg -i (IE|CH)

http-request deny if !acl_geoloc_ch_ie

GeoIP set by CloudFlare

Not tested, but it looks like some CDNs such as CloudFlare set GeoIP headers, which can then be used by HAProxy.

CloudFlare, for instance, adds these custom headers:

_SERVER["HTTP_CF_IPCOUNTRY"]      CN
_SERVER["HTTP_CF_RAY"]            17da8155355b0520-SEA
_SERVER["HTTP_CF_VISITOR"]        {"scheme":"http"}
_SERVER["HTTP_CF_CONNECTING_IP"]  XX.YY.ZZ.00
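Untested sketch of an ACL built on the CF-IPCountry header (assuming CloudFlare's header reaches HAProxy unaltered and cannot be spoofed by clients):

```
acl acl_cf_ch_ie hdr(CF-IPCountry) -i IE CH
http-request deny if !acl_cf_ch_ie
```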

Accessing the stick-table
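
A sketch of how to inspect the table at runtime: assuming a stats socket has been enabled in the global section (e.g. `stats socket /var/run/haproxy.sock mode 600 level admin`) and socat is installed, the table contents can be dumped with:

```
$ echo "show table fe_app" | socat stdio /var/run/haproxy.sock
```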

Colophon

This document was created with pandoc, using the following command:

/usr/local/bin/pandoc --smart --standalone  -f markdown --latex-engine=xelatex   --number-sections --variable mainfont="Latin Modern Roman"   \
  -V fontsize=9pt  -V papersize:a4  --toc  -F pandoc-citeproc --metadata link-citations -o "haproxy_rate_limiting.pdf" "haproxy_rate_limiting.md"


Testing the X-Forwarded-For based GeoIP ACL, first with an Irish IP (allowed):

marco at ultrabook in ~
$ curl -I -H "X-Forwarded-For: 52.48.106.118" "http://haproxy.dubious.cloud/rest/services/height?easting=600000&northing=200000"
HTTP/1.1 200 OK
Access-Control-Allow-Headers: x-requested-with, Content-Type, origin, authorization, accept, client-security-token
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Origin: *
Age: 0
Cache-Control: max-age=0, must-revalidate, no-cache, no-store
Content-Type: application/json; charset=UTF-8
Date: Fri, 13 Oct 2017 19:08:18 GMT
Expires: Fri, 13 Oct 2017 19:08:18 GMT
Last-Modified: Fri, 13 Oct 2017 19:08:18 GMT
Pragma: no-cache
Server: Apache/2.2.22 (Debian)
Vary: Accept-Encoding
Via: 1.1 varnish-v4
X-Cache: MISS
X-UA-Compatible: IE=Edge
X-Varnish: 827075497

With an American IP (8.8.8.8), the request is denied:

marco at ultrabook in ~
$ curl -I -H "X-Forwarded-For: 8.8.8.8" "http://haproxy.dubious.cloud/rest/services/height?easting=600000&northing=200000"
HTTP/1.0 403 Forbidden
Cache-Control: no-cache
Connection: close
Content-Type: text/html

And without any X-Forwarded-For header:

marco at ultrabook in ~
$ curl -I "http://haproxy.dubious.cloud/rest/services/height?easting=600000&northing=200000"
HTTP/1.0 403 Forbidden
Cache-Control: no-cache
Connection: close
Content-Type: text/html