@pein0119 pein0119/README.md forked from magnetikonline/README.md
Last active Aug 29, 2015

Nginx FastCGI response buffer sizes

By default, when Nginx starts receiving a response from a FastCGI backend (such as PHP-FPM) it will buffer the response in memory before delivering it to the client. Any response larger than the configured buffer size is saved to a temporary file on disk. This process is also explained on the Nginx ngx_http_fastcgi_module documentation page.

Since disk is slow and memory is fast, the aim is to have as many FastCGI responses as possible pass through memory only. On the flip side, we don't want to set excessively large buffers, as they are created and sized on a per-request basis (buffer space is not shared between requests).

The related Nginx options are:

  • fastcgi_buffering appeared in Nginx 1.5.6 (1.6.0 stable) and can be used to turn buffering completely on or off. It's on by default.
  • fastcgi_buffer_size is a special buffer used to hold the very first part of the FastCGI response, which will be the HTTP response headers. Typically you shouldn't need to adjust this - even at Nginx's default of one memory page (4k or 8k, depending on platform) it should fit a typical set of HTTP headers. The one exception I have seen is frameworks that push very large amounts of cookie data back to the client, usually during a user verification/login process - blowing out the buffer and causing HTTP 500 errors. In those instances you will need to increase this buffer to 8k/16k/32k as needed.
  • fastcgi_buffers controls the number and memory size of the buffer segments used for the payload of the FastCGI response. Most, if not all, of our tweaking will be around this setting, and it forms the remainder of this page.
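Put together, the three directives live in the same `http`, `server` or `location` context. A minimal sketch, spelling out what the built-in defaults look like on a 4k-page platform (values here are illustrative, not recommendations):

```nginx
# Illustrative only: these match the built-in defaults on a 4k-page platform.
fastcgi_buffering on;       # on by default since 1.5.6
fastcgi_buffer_size 4k;     # first buffer, holds the response headers
fastcgi_buffers 8 4k;       # 8 payload buffers of one page each = 32KB total
```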

Determine FastCGI response sizes

By grepping our access logs we can easily find our maximum and average response sizes. These awk recipes have been lifted from here.

# maximum response size (bytes)
$ awk '($9 == 200) {print $10}' access.log | sort -nr | head -n 1

# average response size (bytes)
$ awk '($9 == 200) {sum += $10; count++} END {print int(sum / count)}' access.log
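These recipes assume the default combined log format, where field $9 is the HTTP status code and $10 is the response body size in bytes. A quick sanity check against a fabricated log line (the request and sizes here are made up for illustration):

```shell
# Sanity check of the field positions ($9 = status, $10 = body bytes)
# against a fabricated combined-format access log line:
line='127.0.0.1 - - [01/Jan/2020:00:00:00 +0000] "GET /index.php HTTP/1.1" 200 5120 "-" "curl/7.0"'
echo "$line" | awk '($9 == 200) {print $10}'
# prints: 5120
```

If you use a custom log_format, check which fields hold the status and body size before trusting the numbers.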

Note these report on all requests with a 200 return code; you might want to split FastCGI requests out into a separate Nginx access log for reporting, like so (PHP-FPM here):

location ~ "\.php$" {
	fastcgi_index index.php;
	if (!-f $realpath_root$fastcgi_script_name) {
		return 404;
	}

	include /etc/nginx/fastcgi_params;
	fastcgi_pass unix:/run/php5/php-fpm.sock;
	access_log /var/log/nginx/phpfpmonly-access.log;
}

With these values in hand we are now better equipped to set fastcgi_buffers.

Setting buffer size

The fastcgi_buffers setting takes two values, buffer segment count and memory size; by default it will be:

fastcgi_buffers 8 4k|8k;

So a total of 8 buffer segments of either 4k or 8k, determined by the platform's memory page size. On Ubuntu Linux that turns out to be 4096 bytes (4k) - so a default total buffer size of 32KB.

Based on the maximum/average response sizes determined above, we can now raise or lower these values to suit. I usually keep the buffer size at the default (one memory page) and adjust only the buffer segment count to a sane value that keeps the bulk of responses handled fully in buffer RAM. The memory page size (in bytes) can be determined with the following command:

$ getconf PAGESIZE
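Multiplying this page size by the default segment count of 8 gives the default total payload buffer space per request, a quick sketch:

```shell
# Default total payload buffer space = 8 segments x one memory page.
# Result depends on your platform's page size (4096 gives 32768, i.e. 32KB).
PAGE=$(getconf PAGESIZE)
echo $(( PAGE * 8 ))
```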

If your average response size tips toward the higher side, you might instead lower the buffer segment count and raise the memory size in page-size multiples (8k/16k/32k).
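One rough way to turn a measured maximum response size into a segment count is to round it up to whole pages. A sketch, where MAX_RESPONSE is a hypothetical placeholder for the maximum you measured from the logs earlier:

```shell
# Hypothetical sizing: round the measured maximum response up to whole pages.
PAGE=$(getconf PAGESIZE)       # typically 4096 on Linux
MAX_RESPONSE=120000            # placeholder: maximum size measured from the logs
COUNT=$(( (MAX_RESPONSE + PAGE - 1) / PAGE ))   # round up to whole buffers
echo "fastcgi_buffers $COUNT ${PAGE};"
```

Treat the output as a starting point only - sizing for the absolute maximum response may be wasteful if it is a rare outlier; sizing for a high percentile is usually saner.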

Checking results

We can see how often FastCGI responses are being saved to disk by grepping our Nginx error log(s):

$ grep "\[warn\]" error.log | grep "buffered"

# will return lines like:
YYYY/MM/DD HH:MM:SS [warn] 1234#0: *123456 an upstream response is buffered to a temporary file...
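Counting these warnings gives a rough feel for how often responses spill to disk. A demonstration against a fabricated error log (substitute your real error log path; the log lines here are made up to mimic the format above):

```shell
# Demonstration with a fabricated error log; in practice point grep at your
# real error log (e.g. /var/log/nginx/error.log).
printf '%s\n' \
  '2020/01/01 00:00:00 [warn] 1234#0: *1 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/0000000001' \
  '2020/01/01 00:00:01 [error] 1234#0: *2 connect() failed' \
  > /tmp/nginx-error-sample.log
grep -c "buffered to a temporary file" /tmp/nginx-error-sample.log
# prints: 1
```

Comparing that count against the total request count in your FastCGI-only access log gives the ratio of responses hitting disk.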

Remember, it's not a bad thing to have some responses buffered to disk - the goal is finding the balance where only a small portion of your largest responses are handled this way.

The alternative of ramping up fastcgi_buffers to silly amounts is something I would strongly recommend against.
