Sample Data

[
  { id: 110140,
    last_visit: '2016-07-22T07:25:36Z',
    first_visit: '2016-06-23T04:01:54Z' },
  { id: 110427,
    last_visit: '2016-07-25T12:35:26Z',
    first_visit: '2016-06-23T04:02:02Z' }
]

import * as models from "models";
import Sequelize from "sequelize";
import fs from "fs";

delete models.default;

const sequelize = new Sequelize(
  '',
  '',
  '', {
On Tue Oct 27, 2015, history.state.gov began buckling under load, intermittently issuing 500 errors. Nginx's error log was sprinkled with the following errors:
2015/10/27 21:48:36 [crit] 2475#0: accept4() failed (24: Too many open files)
2015/10/27 21:48:36 [alert] 2475#0: *7163915 socket() failed (24: Too many open files) while connecting to upstream...
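Before changing any limits, it helps to confirm that a process really is near its descriptor ceiling. The sketch below (not from the original article) compares the number of entries in a process's /proc/<pid>/fd directory against the soft limit recorded in /proc/<pid>/limits. It is demonstrated against the current shell's own PID ($$) so it runs anywhere with /proc; on the affected server you would substitute an nginx worker PID.

```shell
# Compare a process's open file descriptors against its soft limit.
# $$ (the current shell) stands in for an nginx worker PID here.
pid=$$

# Number of file descriptors the process currently has open.
open=$(ls /proc/$pid/fd | wc -l)

# The soft "Max open files" limit (4th field of that row in /proc/<pid>/limits).
limit=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)

echo "process $pid: $open open files, soft limit $limit"
```

When the open count approaches the soft limit, accept4() and socket() begin failing with EMFILE (24), which is exactly what the log lines above show.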
An article at http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/ provided directions that mostly worked. Below are the steps we followed. The steps that diverged from the article's directions are marked with an *.
We used su to run ulimit on the nginx account, and ps aux | grep nginx to locate nginx's process IDs. We then queried each process's file handle limits using cat /proc/pid/limits, where pid is a process ID retrieved from ps (sudo may be necessary for the cat command, depending on your system). We also added fs.file-max = 70000 to /etc/sysctl.conf.
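The ulimit step above can be sketched as follows. This is illustrative only: an unprivileged shell may lower its own soft limit freely, but raising it past the hard limit requires root, which is why su to the service account (or editing the system-wide limits) is needed in practice.

```shell
# Lowering the soft open-file limit only affects the subshell it runs in;
# the parent shell's limit is untouched.
( ulimit -n 256; echo "soft limit in subshell: $(ulimit -Sn)" )

# Back in the parent shell, the original limit still applies.
echo "soft limit here is unchanged: $(ulimit -Sn)"
```

The kernel-wide ceiling (fs.file-max) is separate from these per-process limits: sysctl governs the total number of file handles the kernel will allocate, while ulimit governs each process's share.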