Kelvin Macharia Ngunyi (ngunyimacharia)

  • Kirschbaum Development Group, LLC
  • Nairobi, Kenya
  • Twitter @ngunyimacharia
ngunyimacharia / cloudSettings
Created October 5, 2019 07:38
Visual Studio Code Settings Sync Gist
{"lastUpload":"2019-10-05T07:38:08.240Z","extensionVersion":"v3.4.3"}
ngunyimacharia / post-mortem.md
Created September 9, 2019 19:30 — forked from joewiz/post-mortem.md
Recovery from nginx "Too many open files" error on Amazon AWS Linux

On Tue Oct 27, 2015, history.state.gov began buckling under load, intermittently issuing 500 errors. Nginx's error log was sprinkled with the following errors:

    2015/10/27 21:48:36 [crit] 2475#0: accept4() failed (24: Too many open files)
    2015/10/27 21:48:36 [alert] 2475#0: *7163915 socket() failed (24: Too many open files) while connecting to upstream...
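A quick way to confirm that a process really is exhausting its descriptors is to compare its open-fd count against its soft limit in procfs. A minimal sketch, using the current shell's PID (`$$`) in place of an actual nginx worker PID:

```shell
# Count the file descriptors the process currently has open.
ls /proc/$$/fd | wc -l

# Show the process's soft/hard "Max open files" limits for comparison.
grep "Max open files" /proc/$$/limits
```

When the first number sits at or near the soft limit, new `accept4()`/`socket()` calls fail with error 24 as in the log above.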

An article at http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/ provided directions that mostly worked. Below are the steps we followed. The steps that diverged from the article's directions are marked with an *.

  1. *Instead of using su to run ulimit on the nginx account, use ps aux | grep nginx to locate nginx's process IDs, then query each process's file-handle limits with cat /proc/pid/limits (where pid is a process ID retrieved from ps). (Note: depending on your system, sudo may be necessary for the cat command.)
  2. Added fs.file-max = 70000 to /etc/sysctl.conf
  3. Added `nginx soft nofile 1
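The inspection step above can be sketched as a small script, assuming a Linux host with procfs (pgrep stands in for the ps aux | grep nginx pipeline):

```shell
# Print each nginx worker's per-process file-handle limits.
# If no nginx process is running, the loop simply does nothing.
for pid in $(pgrep nginx); do
  echo "PID $pid:"
  grep "Max open files" "/proc/$pid/limits"
done

# System-wide file-handle ceiling (the value set via fs.file-max
# in /etc/sysctl.conf); reading procfs avoids needing sysctl in PATH.
cat /proc/sys/fs/file-max
```

Run it as root (or with sudo) if reading another user's /proc/pid/limits is denied.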