Recovery from nginx "Too many open files" error on Amazon AWS Linux

On Tue Oct 27, 2015, history.state.gov began buckling under load, intermittently issuing 500 errors. Nginx's error log was sprinkled with the following errors:

2015/10/27 21:48:36 [crit] 2475#0: accept4() failed (24: Too many open files)
2015/10/27 21:48:36 [alert] 2475#0: *7163915 socket() failed (24: Too many open files) while connecting to upstream...

An article at http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/ provided directions that mostly worked. Below are the steps we followed. The steps that diverged from the article's directions are marked with an *.

  1. * Instead of using su to run ulimit on the nginx account, use ps aux | grep nginx to locate nginx's process IDs, then query each process's file handle limits with cat /proc/<pid>/limits, where <pid> is a process ID from the ps output (sudo may be needed for the cat, depending on your system). A command sketch follows this list.
  2. Added fs.file-max = 70000 to /etc/sysctl.conf.
  3. Added nginx soft nofile 10000 and nginx hard nofile 30000 to /etc/security/limits.conf.
  4. Ran sysctl -p.
  5. Added worker_rlimit_nofile 30000; to /etc/nginx/nginx.conf.
  6. * While the directions suggested that nginx -s reload was enough for nginx to pick up the new settings, not all of nginx's processes received them. On closer inspection of /proc/<pid>/limits (see step 1), the first worker process still had the original soft 1024 / hard 4096 limit on file handles. Even nginx -s quit didn't shut nginx down; the fix was to kill the remaining processes with kill <pid>. After restarting nginx, all of the nginx-user-owned processes had the new limits of soft 10000 / hard 30000 handles.
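For reference, here is a minimal sketch of the commands and config fragments behind the steps above. The <pid> placeholders stand in for the PIDs reported by ps, and the final service nginx start assumes the stock Amazon Linux init script; adjust for your own system.

    # Step 1: find nginx's PIDs and inspect each process's current limits
    ps aux | grep [n]ginx
    sudo cat /proc/<pid>/limits | grep 'Max open files'   # repeat for each PID

    # Step 2: /etc/sysctl.conf -- raise the system-wide file handle ceiling
    fs.file-max = 70000

    # Step 3: /etc/security/limits.conf -- per-user nofile limits for the nginx account
    nginx soft nofile 10000
    nginx hard nofile 30000

    # Step 4: apply the sysctl change
    sudo sysctl -p

    # Step 5: /etc/nginx/nginx.conf (main context) -- let each worker use the higher limit
    worker_rlimit_nofile 30000;

    # Step 6: a reload may not be enough -- kill the old master, then start nginx again
    sudo kill <master-pid>
    sudo service nginx start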
@mef

commented Aug 6, 2016

Very handy, thanks

@ivan24

commented Oct 14, 2016

+1

@vpxavier

commented Nov 8, 2016

Solved my issue, thanks!

@ranjeetranjan

commented Dec 2, 2016

Solved my issue, thanks!

@nickjwebb

commented Dec 5, 2016

Nice. One quick update, though: on my 2016.09 ALAMI (m3.medium), fs.file-max is set to 382547 out of the box, so I skipped step 2.
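For anyone who wants to check their own default before deciding whether step 2 is needed, the current ceiling can be read directly with standard Linux commands (read-only, nothing is changed):

    sysctl fs.file-max           # e.g. prints: fs.file-max = 382547
    cat /proc/sys/fs/file-max    # same value, read straight from /proc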

@marcoceccarellispotsoftware

commented Feb 23, 2017

+1, thanks so much

@jlapier

commented Mar 30, 2017

After making these changes, I was getting "1024 worker_connections are not enough" errors, so I also increased worker_connections (which is limited by worker_rlimit_nofile). See: http://nginx.org/en/docs/ngx_core_module.html#worker_connections
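For illustration, a minimal nginx.conf fragment showing how the two settings relate; the 20000 value is only an example, not taken from the gist, and should stay below worker_rlimit_nofile (proxied requests can use two descriptors per connection):

    worker_rlimit_nofile 30000;    # per-worker cap on open file descriptors

    events {
        worker_connections 20000;  # must stay below worker_rlimit_nofile
    }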

@christrotter

commented Apr 7, 2017

Also saved my bottom. Much thanks.


@jsmrcaga

commented Nov 1, 2017

Awesome! Thanks 💥

@srimaln91

commented Dec 6, 2017

Thanks a lot.

@maderluc

commented Jan 9, 2018

thanks man

@ramezanpour

commented Mar 12, 2018

Thank you. My problem seems to be solved.

@omarelsayed1992

commented May 7, 2018

Great ! Thanks 👍

@nixorn

commented Jun 26, 2018

Thanks!

@ray-moncada

commented Jul 6, 2018

Thank you, will implement. This adds load to the server. Has anyone experienced the CPU or Memory working harder?

@ray-moncada

commented Jul 6, 2018

I followed the instructions but I am getting 5 NGINX processes now. Prior to the changes I only had 4 nginx processes. Why is there a 5th process labeled "master", which was not there before?

One of the processes is labeled as "master", something that was not there before. The other 4 processes are worker processes. The worker processes have the right soft and hard max-open-files configuration, but the master does not.

@atmosx

commented Jul 10, 2018

I followed the instructions but I am getting 5 NGINX processes now. Prior to the changes I only had 4 nginx processes. Why is there a 5th process labeled "master", which was not there before?

I'm guessing you have 4 CPUs and workers are set to auto, which means one per CPU. NGINX uses a master/worker model to achieve high-speed throughput (async model). What you're seeing is perfectly normal.

From their website:

NGINX uses a predictable process model that is tuned to the available hardware resources:

  • The master process performs the privileged operations such as reading configuration and binding to ports, and then creates a small number of child processes (the next three types).
  • The cache loader process runs at startup to load the disk‑based cache into memory, and then exits. It is scheduled conservatively, so its resource demands are low.
  • The cache manager process runs periodically and prunes entries from the disk caches to keep them within the configured sizes.
  • The worker processes do all of the work! They handle network connections, read and write content to disk, and communicate with upstream servers.

read more at https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/
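For context, a minimal main-context fragment showing the setting being described; worker_processes auto (one worker per CPU core) is an assumption about the commenter's config, since a packaged default may pin a fixed number instead:

    # /etc/nginx/nginx.conf (main context)
    worker_processes auto;        # one worker per CPU core, supervised by a single master process
    worker_rlimit_nofile 30000;   # per-worker descriptor limit from the steps above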

@eojones

commented Jul 14, 2018

Worked for me, thank you so very much!!!!! You're awesome <3

@ryancheung

commented Aug 5, 2018

tks!

@mskian

commented Dec 19, 2018

Thanks a lot, it works 💯
Awesome...!
