Overview

This is an investigation into our susceptibility to Slow HTTP Attacks. These attacks exploit the way typical web servers process requests and use several strategies to the same end: by keeping connections open far longer than normal, they exhaust the pool of available connections, so new connections cannot be opened and the server either hangs or returns 503.

The question is not whether we are vulnerable, but to what extent. Any server can be DoS'd; it's just a matter of how easy the attack is to execute.

Tools

  • slowhttptest: mimics a variety of slow HTTP attacks
  • goloris: mimics a slow HTTP attack against Nginx

Types of Attacks

Below are the types of Slow HTTP attacks examined in this investigation. The example commands were generated with the slowhttptest tool, which can simulate each of these attacks.

Slow Headers (Slowloris)

Send the request headers slowly, which keeps the connection open until all of the headers have been received.

slowhttptest -u https://staging.trakstar.com/login -c 10000 -H -i 50 -r 1000 -t GET -x 24 -p 10 -g -o ~/Desktop/slow_headers

As you can see in these results, even with a large number of hanging connections, we never enter a state where Nginx is unable to respond to new connections/requests.

Slow Body

Send the request body slowly, which keeps the connection open until the entire body has been received.

slowhttptest -u https://staging.trakstar.com/user_sessions -c 10000 -B -i 50 -r 1000 -s 80000000 -t POST -x 10 -p 10 -g -o ~/Desktop/slow_body

As you can see in these results, even with a large number of hanging connections, we never enter a state where Nginx is unable to respond to new connections/requests.

Slow Read

Request content from the server and read the response very slowly on the client side, which keeps the connection open until all of the content has been delivered.

slowhttptest -c 10000 -X -g -o ~/Desktop/slow_read_2 -r 500 -w 512 -y 1024 -n 5 -z 32 -k 3 -u https://staging.trakstar.com/login -p 3 

As you can see in these results, it is possible to cause some service issues. However, the impact has more to do with the number of concurrent requests than with how long each request takes to complete: the requests stay open for quite a while, but the service becomes available again before they start to drop off.

Large Range

Send a Range header requesting a very large number of byte ranges, which can cause excessive memory consumption on vulnerable servers.

slowhttptest -u https://staging.trakstar.com/login -c 1000 -R -i 110 -r 200 -s 8192 -t POST -x 10 -p 10 -g -o ~/Desktop/range

Summary

Based on my research, we are not at high risk of having our service blocked by Slow HTTP attacks, for the following reasons (a rough sketch of the relevant Nginx directives follows the list):

  • We use Nginx, which is generally less vulnerable due to its event-driven, non-blocking IO model
  • We have a high number of allowed connections: 8192 per app host, which makes it more difficult to execute an attack
  • We discard requests that don’t send updates to the body or headers within 60 seconds (could be lower, but limit exists)
  • We discard requests that take more than 120 seconds to complete
  • We whitelist which HTTP verbs are available for any given application URL and reject requests that don’t fit
  • We limit the request rate per IP and session
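To make the Nginx-level protections above concrete, here is a minimal illustrative sketch of the kind of configuration they correspond to. This is not our actual config: the zone names, rate values, and layout are assumptions for illustration; only the 8192-connection and 60-second figures come from the list above, and the verb whitelisting, 120-second cutoff, and per-session limits live in the application layer rather than in Nginx.

# Illustrative sketch only -- not our production configuration.
# The 8192 and 60s values mirror the protections listed above;
# the zone name and request rate are hypothetical.

events {
    worker_connections 8192;        # connection limit cited above
}

http {
    # Close connections whose headers/body stop arriving (Nginx defaults are 60s)
    client_header_timeout 60s;
    client_body_timeout   60s;

    # Per-IP request rate limiting (rate chosen arbitrarily for the example)
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    server {
        location / {
            limit_req zone=per_ip burst=20;
        }
    }
}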

Recommendations

Here are some recommendations if we want to further limit our exposure to Slow HTTP Attacks (an illustrative Nginx sketch follows the list):

  • Lower client_body_timeout and client_header_timeout from the default 60s to something more in line with expected traffic. client_body_timeout limits the time between successive reads of the request body, and client_header_timeout limits how long the client may take to send the full set of headers.
  • Set a limit on the number of connections and/or number of requests per client. We already rate-limit requests in Rails with Rack::Attack, but that does not prevent a client from opening a high number of connections.
  • Lower the client_max_body_size from 100m to something much smaller for any routes that don’t need file upload (default is 1m).
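Below is a minimal sketch of how these recommendations might look in the Nginx configuration. The specific timeout values, zone name, connection cap, and the /uploads location are illustrative assumptions, not tested recommendations; they would need tuning against real traffic.

# Illustrative sketch of the recommendations above -- values are assumptions
# and should be tuned against expected traffic before rollout.

http {
    # Tighter limits on how long a client may dribble out headers/body
    client_header_timeout 10s;
    client_body_timeout   10s;

    # Cap concurrent connections per client IP; Rack::Attack limits request
    # rate but not the number of open connections
    limit_conn_zone $binary_remote_addr zone=per_ip_conn:10m;

    server {
        limit_conn per_ip_conn 20;

        # Small default body size; raise it only on routes that accept uploads
        client_max_body_size 1m;

        location /uploads {         # hypothetical upload route
            client_max_body_size 100m;
        }
    }
}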