Nginx FastCGI response buffer sizes

By default, when Nginx starts receiving a response from a FastCGI backend (such as PHP-FPM), it will buffer the response in memory before delivering it to the client. Any response larger than the set buffer size is saved to a temporary file on disk.

This process is outlined on the Nginx ngx_http_fastcgi_module manual page.

Introduction

Since disk is slow and memory is fast, the aim is to have as many FastCGI responses as possible pass through memory only. On the flip side we don't want to set an excessively large buffer, as buffers are created and sized on a per-request basis - they are not shared between requests.

The related Nginx options are:

  • fastcgi_buffering first appeared in Nginx 1.5.6 (1.6.0 stable) and can be used to turn buffering completely on/off. It's on by default.

  • fastcgi_buffer_size is a special buffer space used to hold the first chunk of the FastCGI response, which is going to be the HTTP response headers.

    You shouldn't need to adjust this from the default - even though Nginx defaults to the smallest page size of 4KB (your platform determines whether this is a 4k or 8k buffer) it should fit a typical HTTP header.

    The one exception I have seen is frameworks that push large amounts of cookie data via the Set-Cookie HTTP header during their user verification/login phase - blowing out the buffer and resulting in an HTTP 500 error. In those instances you will need to increase this buffer to 8k/16k/32k to fully accommodate your largest upstream HTTP header.

  • fastcgi_buffers controls the number and memory size of the buffer segments used for the payload of the FastCGI response. Most, if not all, of our tweaking will be around this setting, which forms the remainder of this page.
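Put together, a typical starting point could look like the following. These values are illustrative assumptions only, not recommendations - tune them against your own measured response sizes:

```nginx
# Illustrative starting point - values are assumptions, tune to your workload.
# Valid in http, server and location contexts.
fastcgi_buffering on;     # the default since 1.5.6
fastcgi_buffer_size 4k;   # first chunk - holds the HTTP response headers
fastcgi_buffers 8 4k;     # 8 segments x 4k = 32KB for the response payload
```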

Determine actual FastCGI response sizes

By grepping our Nginx access logs we can determine both maximum and average response sizes. The basis of this awk recipe was lifted from here:

$ awk '($9 ~ /200/) { i++;sum+=$10;max=$10>max?$10:max; } END { printf("Maximum: %d\nAverage: %d\n",max,i?sum/i:0); }' access.log

# Maximum: 76716
# Average: 10358

Note: these recipes report on all access requests returning an HTTP 200 code; you might want to split out just the FastCGI requests into their own Nginx access log for reporting, like so (PHP-FPM here):

location ~ "\.php$" {
	fastcgi_index index.php;
	if (!-f $realpath_root$fastcgi_script_name) {
		return 404;
	}

	include /etc/nginx/fastcgi_params;
	fastcgi_pass unix:/run/php5/php-fpm.sock;

	# output just FastCGI requests to its own Nginx log file
	access_log /var/log/nginx/phpfpmonly-access.log;
}

With these values in hand we are now much better equipped to set fastcgi_buffers.

Setting the buffer size

The fastcgi_buffers setting takes two values, buffer segment count and memory size. By default it is:

fastcgi_buffers 8 4k|8k;

So a total of 8 buffer segments of either 4k or 8k, determined by the platform memory page size. For Debian/Ubuntu Linux that turns out to be 4096 bytes (4KB) - so a default total buffer size of 32KB.

Based on the maximum/average response sizes determined above we can now raise/lower these values to suit. I typically keep buffer size at the default (memory page size) and adjust only the buffer segment count to a value that keeps the bulk (or all) of responses handled fully in buffer RAM.

The default memory page size (in bytes) for an operating system can be determined by the following command:

$ getconf PAGESIZE

If your average response size tips on the higher side, you might alternatively lower the buffer segment count and raise the memory size in page-size multiples (8k/16k/32k).
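As a rough sketch of the sizing arithmetic: the segment count needed to hold your largest response entirely in RAM is a ceiling division of that size by the page size. The MAX value here is a placeholder - substitute the maximum reported by the awk recipe above:

```shell
# Rough sizing sketch - MAX is a placeholder for your own measured maximum.
PAGE=$(getconf PAGESIZE)              # typically 4096 bytes on Linux
MAX=76716                             # example maximum response size in bytes
COUNT=$(( (MAX + PAGE - 1) / PAGE ))  # ceiling division
echo "fastcgi_buffers ${COUNT} $((PAGE / 1024))k;"
```

On a 4KB-page system this suggests a 19-segment buffer for the example maximum - likely more than you want for every request, which is where the balance discussion below comes in.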

Verifying results

We can see how often FastCGI responses are being saved to disk by grepping our Nginx error log(s):

$ grep --extended-regexp "\[warn\].+buffered" error.log

# will return lines like:
YYYY/MM/DD HH:MM:SS [warn] 1234#0: *123456 an upstream response is buffered to a temporary file...

Remember it's not necessarily a bad situation to have some larger responses buffered to disk - aim for a balance where only a small portion of your largest responses is handled this way.
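To keep an eye on that balance over time, one option is a per-day tally of buffered responses pulled from the same error log - a rough sketch, assuming the stock Nginx error log timestamp format where the date is the first field:

```shell
# Count "buffered to a temporary file" warnings per day.
# Assumes the stock Nginx error log format: "YYYY/MM/DD HH:MM:SS [warn] ..."
grep -E "\[warn\].+buffered" error.log \
	| awk '{ count[$1]++ } END { for (d in count) print d, count[d] }'
```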

I would strongly recommend against the alternative of ramping up fastcgi_buffers to excessive number and/or size values to fit all FastCGI responses purely in RAM - unless your Nginx server only receives a few concurrent requests at any one time, you risk exhausting your available system memory.

JohnMaguire commented May 28, 2015

FYI for anyone finding this Gist via a Google search: we recently ran into an issue where we were streaming large amounts of data over a long time period, and were seeing nginx's process ballooning in memory (like, 1.5GB of RAM after 5-10 minutes). The client was receiving about a gig of data before the server OOM'd the process and everything fell apart.

Output buffering was off in PHP, our buffers were set to a total of ~4MB. No idea what was going on.

We upgraded from nginx 1.4.7 to 1.7.6 in order to attempt fastcgi_buffering off;. While this did work, we removed the fastcgi_buffering off; flag and our issue still hadn't returned.

In other words: There may be a memory leak in nginx 1.4.7 when sending large amounts of data from PHP-FPM, through nginx, to a client. If the memory hog is nginx and not your php-fpm process, try upgrading. If you figure out the real cause, tag me, I'm interested. :)


magnetikonline (Owner) commented Jun 15, 2015

Thanks for the update. Sure that information will be helpful for some!


CMCDragonkai commented Jun 17, 2015

Do you have any information on the busy_buffer_size?


jeveloper commented Mar 11, 2016

By the way, I thought I'd share this with you: Nginx 1.9.12 with PHP7 FPM, Ubuntu 14 LTS, running an ecommerce site, does produce this warning.

Anyone have a reasonable number (based on e.g. total ram of 1,5gb) per node that they use for buffering?

thanks


GreenReaper commented Mar 18, 2016

As magnetikonline says, it depends on how big your output is - gzipped, if you're using gzip (and you should be, for everything compressible; check the types it's applied to). Note that gzip has separate buffers; the ones mentioned here are for the output after gzip.

Use your browser's developer tools to see how big your various fastcgi output pages are likely to be, divide by page size (usually 4k), and round up to a power of two. Then apply settings and test.


schkovich commented Sep 2, 2016

there is an extra quote here:

$ cat error.log | grep -E "\[warn\].+buffered""

it should read:

$ cat error.log | grep -E "\[warn\].+buffered"

magnetikonline commented Sep 5, 2016

Awesome @schkovich - have fixed.


miken32 commented Oct 12, 2016

LOL those awk commands are somebody playing a bad joke on you. Try this for maximum and average response sizes for PHP requests:

awk '($9 ~ /200/ && $7 ~ /\.php/) {i++; sum+=$10; max=$10>max?$10:max;} END {printf("MAX: %d\nAVG: %d\n", max, i?sum/i:0);}' /var/log/nginx/access.log

Also, there's an extra cat here:

cat error.log | grep -E "\[warn\].+buffered"

it should read:

grep -E "\[warn\].+buffered" /var/log/nginx/error.log

larssn commented Dec 20, 2016

Good stuff @miken32


magnetikonline commented Dec 23, 2016

Thanks @miken32 - was not aware you could do ternary operators with awk. Have included your improvements.


fliespl commented Mar 21, 2017

@miken32 thanks for your alternative awk command. Unfortunately it won't work with frameworks like symfony2, which handle all requests via a single PHP file through an internal Nginx redirect (the file extension is not saved in access.log).

