failed (104: Connection reset by peer) while reading response header from upstream, client:
If you are getting the above error in the logs of an nginx server running in front of upstream servers, consider the following, which worked for me:
Check the ulimit on the machines and ensure it is high enough to handle the incoming load. On Linux, `ulimit -n` shows the maximum number of open file descriptors a single process may hold, while `fs.file-max` is the kernel-wide limit on open files.
The way I did that:
Modify the limit for open files: add or change this line in /etc/sysctl.conf:
fs.file-max = <limit-number>
Set soft and hard limits for the relevant users in /etc/security/limits.conf:
<user-name or group-name or *> soft nofile <limit-number>
<user-name or group-name or *> hard nofile <limit-number>
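For example, with concrete values (the numbers below are illustrative assumptions, not a recommendation; pick values that match your expected load):

```
# /etc/security/limits.conf — illustrative values
*       soft    nofile  65535
*       hard    nofile  65535
nginx   soft    nofile  65535
nginx   hard    nofile  65535
```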
Then reload the system settings without restarting:
sysctl -p
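To confirm the new limits took effect, you can check them from a shell (a minimal sketch; `/proc/sys/fs/file-max` assumes Linux with procfs, and limits.conf changes only apply to new login sessions):

```shell
# kernel-wide maximum number of open file descriptors
cat /proc/sys/fs/file-max
# per-process soft and hard limits for the current shell
ulimit -Sn
ulimit -Hn
```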
nginx settings:
In nginx.conf, add or set:
worker_rlimit_nofile <limit-number>;
Whether you are proxying to upstream servers or not, it is a good idea to keep some connections alive when running on HTTP/1.1. Consider adding this to the upstream block:
keepalive <some-number>;
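A sketch of what that can look like (server addresses and numbers are placeholders; note that, per the nginx docs, keepalive to upstreams also requires HTTP/1.1 and clearing the `Connection` header in the proxied location):

```
upstream backend {
    server 10.0.0.1:8080;    # placeholder upstream address
    keepalive 32;            # keep up to 32 idle connections per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;         # keepalive needs HTTP/1.1
        proxy_set_header Connection ""; # clear "close" so connections stay open
    }
}
```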
GwynethLlewelyn commented Jun 5, 2016

If the same message appears only occasionally — say, 20% of the time, which is annoying enough but hard to reproduce — then there are three further possibilities:

  1. A programming error is segfaulting php-fpm, which in turn means that the connection with nginx will be severed. This will usually leave at least some logs around and/or core dumps, which can be analysed further.
  2. For some reason, PHP is unable to write a session file (usually: session.save_path = "/var/lib/php/sessions"). This can be bad permissions, bad ownership, bad user/group, or more esoteric/obscure issues like running out of inodes on that directory (or even a full disk!). This will usually not leave many core dumps around and possibly not even anything in the PHP error logs.
  3. Even more tricky to debug: an extension is misbehaving (occasionally hitting some kind of inner limit, or a bug which is not triggered all the time), segfaulting, and bringing the php-fpm process down with it — thus closing the connection with nginx. The usual culprits are APC, memcache/d, etc. (in my case it was the New Relic extension), so the idea here is to turn each extension off until the error disappears.
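Some quick checks for the three cases above (a sketch; the log and session-directory paths are common defaults and may differ on your system — check session.save_path in your php.ini):

```shell
# 1) look for recent segfaults (php-fpm crashes show up here on Linux)
dmesg 2>/dev/null | grep -i segfault | tail -n 5

# 2) check the session directory: ownership/permissions, free space, free inodes
SESSDIR=/var/lib/php/sessions   # common default; yours may differ
ls -ld "$SESSDIR" 2>/dev/null || echo "no such directory: $SESSDIR"
df -h "$SESSDIR" 2>/dev/null || df -h /   # free space
df -i "$SESSDIR" 2>/dev/null || df -i /   # free inodes

# 3) list loaded PHP extensions, to disable them one at a time
command -v php >/dev/null && php -m || echo "php not on PATH"
```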
