
@v0lkan
Last active April 2, 2024 18:25
Configuring NGINX for Maximum Throughput Under High Concurrency
user web;
# One worker process per CPU core.
worker_processes 8;
# Also set
# /etc/security/limits.conf
# web soft nofile 65535
# web hard nofile 65535
# /etc/default/nginx
# ULIMIT="-n 65535"
worker_rlimit_nofile 65535;
pid /run/nginx.pid;
events {
#
# Determines how many clients will be served by each worker process.
# (Max clients = worker_connections * worker_processes)
# Should be equal to `ulimit -n / worker_processes`
#
worker_connections 65535;
#
# Let each process accept multiple connections.
# Accept as many connections as possible after nginx gets notification
# of a new connection.
# May flood worker_connections if that option is set too low.
#
multi_accept on;
#
# Preferred connection method for newer linux versions.
# Essential for linux, optimized to serve many clients with each thread.
#
use epoll;
}
http {
##
# Basic Settings
##
#
# Override some buffer size limits; this also helps mitigate DoS attacks.
#
client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
#
# Timeouts
# The client_body_timeout and client_header_timeout directives set
# how long the server waits for a client body or client header to be
# sent after a request. If neither a body nor a header is sent, the
# server issues a 408 (Request Time-out) error.
#
# The keepalive_timeout assigns the timeout for keep-alive connections
# with the client. Simply put, Nginx will close connections with the
# client after this period of time.
#
# Finally, the send_timeout is a timeout for transmitting a response
# to the client. If the client does not receive anything within this
# time, then the connection will be closed.
#
#
# Send the client a 408 (Request Time-out) if the body is not
# received within this time. Default: 60.
#
client_body_timeout 32;
client_header_timeout 32;
#
# The server sends Sync packets every 60 seconds, so 90 is
# a conservative upper bound.
#
keepalive_timeout 90; # default 65
send_timeout 120; # default 60
#
# Allow the server to close the connection after a client stops
# responding.
# Frees up socket-associated memory.
#
reset_timedout_connection on;
#
# Open file descriptors.
# Caches information about open FDs and frequently accessed files.
#
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
#
# sendfile() copies data between file descriptors from within the kernel.
# It is more efficient than read() + write(), which require transferring
# data to and from user space.
#
sendfile on;
# tcp_nopush causes nginx to attempt to send its HTTP response headers
# in one packet, instead of using partial frames. This is useful for
# prepending headers before calling sendfile, or for throughput optimization.
tcp_nopush on;
#
# Don't buffer data sends (disables Nagle's algorithm). Good for sending
# frequent small bursts of data in real time.
#
tcp_nodelay on;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
#
# Use an analytics service to track traffic instead of spending precious
# file I/O resources on access logs. Disabling logging speeds up I/O.
#
access_log off;
error_log /root/PROJECTS/logs/error.log crit;
##
# Gzip Settings
##
gzip on;
gzip_disable "MSIE [1-6]\.";
# Only compress responses to proxied requests that carry these headers.
gzip_proxied expired no-cache no-store private auth;
# The compression level ranges 1-9 (nginx's default is 1); 2 -- even
# 1 -- is enough for most cases. The higher the level, the more CPU
# cycles are spent for diminishing returns.
gzip_comp_level 2;
gzip_min_length 500; # Default 20
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
}
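A quick way to validate and apply a config like this without dropping connections (paths and the service name assume a typical systemd-based install; adjust to your setup):

    # Check the configuration syntax before touching the running server.
    sudo nginx -t

    # If the test passes, reload the workers gracefully.
    sudo systemctl reload nginx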
@zyf0330 commented Mar 1, 2018

My three servers use nginx to serve HTTP/1.0 connections, and I set worker_connections to 32768. My app process was killed because its memory usage was too high. Then I changed it to 10240, and the problem went away. My servers have 2 CPU cores and 4 GB of memory.
Does anyone know why this happens? For now I just know that worker_connections should not always be set too high.

@ashishdungdung

Thanks for this; please keep it updated over time.

By the way, do you have any suggestions for a server with 16 cores and 32 GB RAM?

@gdewey commented Jun 6, 2018

Thanks for sharing.

@Spholt commented Jul 2, 2018

Thanks for sharing this, great comments

@v0lkan (Author) commented Dec 1, 2018

Here is a variant of it that I used to reverse proxy an HTTPS service

# One worker process per vCPU.
worker_processes  4;

events {
    #
    # Determines how many clients will be served by each worker process.
    # (Max clients = worker_connections * worker_processes)
    # Should be equal to `ulimit -n`
    #
    worker_connections 1024;

    #
    # Let each process accept multiple connections.
    # Accept as many connections as possible after nginx gets notification
    # of a new connection.
    # May flood worker_connections if that option is set too low.
    #
    multi_accept on;

    #
    # Preferred connection method for newer linux versions.
    # Essential for linux, optimized to serve many clients with each thread.
    #
    # Didn't work on Mac; try on prod to see if it works.
    # use epoll;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    # error_log /Users/volkan/Desktop/error.log;
    # access_log /Users/volkan/Desktop/access.log;

    error_log off;
    access_log off;

    #
    # Override some buffer limitations, will prevent DDOS too.
    #
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;

    # M$IE closes keepalive connections in 60secs anyway.
    # 90sec is a conservative upper bound.
    # This number should ideally be the MAX number of
    # seconds it takes for everything to be downloaded.
    # So if "time to interactive" on a web app is 20secs,
    # choosing 40secs (to be conservative) is a good ballpark
    # guesstimate.
    keepalive_timeout  90;

    # To be conservative: Default is 60.
    send_timeout 120;

    #gzip  on;


    #
    # Allow the server to close the connection after a client stops
    # responding.
    # Frees up socket-associated memory.
    #
    reset_timedout_connection on;

    #
    # don't buffer data-sends (disable Nagle algorithm). Good for sending
    # frequent small bursts of data in real time.
    #
    tcp_nodelay on;

    types_hash_max_size 2048;

    server {
        listen       8080;
        server_name  localhost;

        location / {
            # this also works:
            # proxy_pass https://--REDACTED-API-SERVER--$request_uri;
            # this works:
            proxy_pass https://--REDACTED-API-SERVER--;
            proxy_http_version 1.1;

            # these are optional:
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;

        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

    include servers/*;
}
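A quick smoke test for this proxy, assuming it is running locally on port 8080 (the root path is just an example):

    # Expect the upstream's response to come back through the proxy.
    curl -v http://localhost:8080/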

@v0lkan (Author) commented Dec 4, 2018

@scrnjakovic A couple of years (!) late to respond.

All my GitHub notifications are filtered, so I tend to miss them unless someone mentions that something needs to be taken care of :)

Certain VPS providers will prevent you from modifying the ulimit settings (whether you are root or not) for security reasons.

If that's the case, one solution is to use clustering and load-balance your traffic across more than one server/container/box.

@jstowey commented Dec 13, 2018

@v0lkan Thanks for this. Is it safe to set the open file limit to the same as the output of ulimit -Hn? Or is it recommended to set it a bit lower to avoid potential issues?

My ulimit -n also always returns 1024.

@v0lkan (Author) commented Jan 2, 2019

@jstowey For some reason I don't get gist notifications in my inbox, or they get filtered out as "social", I dunno :)

Anyway, you can configure your system (depending on what your system is) to change the outcome of ulimit -n. You can google "how to increase ulimit".

Also, if you have 1.5 hours, I have a talk about it (Scaling Your Node.js App Like a Boss):
https://www.youtube.com/watch?v=Ogjb60Fg10A (part one)
https://www.youtube.com/watch?v=f5phsX4VUOU (part two)

If, after updating your environment, ulimit still stays the same, then it's probably managed and fixed by your hosting provider (AWS typically does that).

In that case I'd suggest scaling horizontally (instead of one big fat machine with a big fat ulimit, have smaller minions with ulimit fixed at 1024).

It should be safe "in general" to set it to ulimit -n: that is the maximum number of handles NGINX will try to open, and it does not mean it will necessarily exhaust all your file handles. But to be safe, feel free to keep it a few hundred below the max if you want.

Also, my assumption is that nothing else on the box consumes excessive file handles (like a database, Redis, or some CDN file service).

That said, your system might behave differently, so make sure to test and monitor first.

Hope that helps.
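For reference, a minimal way to inspect the limits before and after changing them:

    # Soft limit: what new processes get by default.
    ulimit -Sn

    # Hard limit: the ceiling a non-root process may raise its soft limit to.
    ulimit -Hn

    # What a running nginx master process actually received.
    cat /proc/$(pgrep -o nginx)/limits | grep 'open files'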

@electropolis commented Feb 8, 2019

You made a small mistake on line 20: "max clients" relates to worker_rlimit_nofile, not worker_connections. So ulimit -n should equal worker_rlimit_nofile, and worker_connections should be one of the factors in the multiplication. Consider also changing the line 23 value from 65535 to 65535/8 = 8192.

@rkt2spc commented Mar 1, 2019

http://nginx.org/en/docs/ngx_core_module.html#worker_connections
worker_connections is per worker, so you should set it to ulimit -n / worker_processes.
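Following that rule of thumb, the arithmetic for the numbers in this gist (65535 FDs, 8 workers) would look roughly like this; a sketch, not a drop-in config:

    # With ulimit -n = 65535 and worker_processes = 8,
    # each worker gets about 65535 / 8 ≈ 8191 connections.
    worker_processes     8;
    worker_rlimit_nofile 65535;

    events {
        worker_connections 8191;
    }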

@v0lkan (Author) commented Oct 12, 2019

rocketspacer Yup, you're right. Updating the doc.

@v0lkan (Author) commented Oct 12, 2019

As I've said in many of my public talks:
Do not copy and paste this configuration as-is; it might cause more harm than good.

NGINX's default settings are good enough for many scenarios.

If you are updating a value, make sure you read and understand the official documentation.
Also, make sure you profile and monitor your system before and after you make the change, to understand what kind of impact it had on your infrastructure.
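For example, one simple way to get a before/after baseline is a short, repeatable load test (this assumes the wrk load generator is installed and the server answers on port 80; any equivalent tool works):

    # Run the exact same benchmark before and after each config change:
    # 4 threads, 256 open connections, 30 seconds.
    wrk -t4 -c256 -d30s http://localhost/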

@metaphor

Hi @v0lkan,

Could you please share a concrete example of "disable access log and use analytics for saving IO"? What analytics tool, for instance? Thanks in advance.

@andretefras commented Mar 23, 2020

(Quotes, in full, the reverse-proxy config from the Dec 1, 2018 comment above.)

@v0lkan Greets! Thanks for sharing.

Does it make sense for an LB config, or is the first option more suitable?

@arnav13081994

Hi @v0lkan,

Thank you for such good documentation.

Your comments regarding timeouts are in seconds, but your configuration is setting them in ms.
I'm not sure what you were going for. I personally have used the following timeouts (in ms) and they seem to work so far. I think Nginx keeping a connection alive for 60 seconds is a bit much, which is why I'm only keeping it alive for 15ms. As I understand it, that's the time taken to first receive any data anyway. This could perhaps be increased to maybe 1-5 seconds, but anything more than that seems unnecessary. Am I right, or have I missed something very important?

Thank you for your help.

# Buffer size for POST submissions
 client_body_buffer_size 10K;
 client_max_body_size 16m;

 # Buffer size for Headers
 client_header_buffer_size 1k;

 # Max time to receive client headers/body
 client_body_timeout 12;
 client_header_timeout 12;

 # Max time to keep a connection open for
 keepalive_timeout 15;

 # Max time for the client to accept/receive a response
 send_timeout 10;

@rajibdas909

Hi @v0lkan & team, can you please help me come up with a configuration for servers we run in a Kubernetes cluster, with requests and limits? Most importantly, should we set worker_processes to auto?

@v0lkan (Author) commented Dec 27, 2020

@rajibdas909

Let me explain why I asked the above question:

In Kubernetes, you can pass a CPU request to the container; e.g., we decided to pass a request of 1000m, which is 1 CPU, whereas the nodes backing the cluster have 8 cores and 16 GB of memory. In such a case, what should we set worker_processes to? Has anyone done any configuration to optimize resource utilization like this? Thanks, Rajib
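One caveat that may be relevant here (a sketch, not an authoritative answer): worker_processes auto; counts the CPUs nginx can see, and inside a container that is typically the node's core count (8 in this case), not the pod's CPU request. With a 1-CPU request, pinning the value explicitly avoids eight workers contending for one CPU's worth of quota:

    # Pod has a 1000m (1 CPU) request on an 8-core node:
    # 'auto' would spawn 8 workers; match the cgroup quota instead.
    worker_processes 1;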

@terrylinooo commented Jan 20, 2021

@arnav13081994

keepalive_timeout 15;
send_timeout 10;

You are right. I lowered keepalive_timeout to 15 and the CPU usage dropped. It was almost 100% when I had set it to 150 seconds.

@Altroo commented Apr 14, 2021

Tried this on my nginx for Django, but I keep getting a weird error: when I access the admin panel, it returns a 500 with the error "too many open files".

@v0lkan (Author) commented Sep 3, 2022

@Altroo Some cloud vendors might not allow you to modify file handle limits; that might be the issue.

@jjxtra commented Feb 11, 2023

Nobody talks about this, but you MUST increase the max file descriptors per process; for very historical reasons, Linux limits it to 1024 even in modern distros...

sudo nano /etc/security/limits.conf
# add:
* - nofile 65536
root - nofile 65536

sudo nano /etc/systemd/system.conf
# add:
DefaultLimitNOFILE=65536

sudo nano /etc/sysctl.conf
# add:
fs.file-max = 65536

nano /etc/default/nginx
# add:
ULIMIT="-n 65536"

Then reboot.
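It may also be worth verifying, after the reboot, that each layer actually took effect, since these settings can silently override one another:

    # Per-shell soft limit.
    ulimit -n

    # Kernel-wide ceiling.
    cat /proc/sys/fs/file-max

    # What systemd will hand to the nginx service.
    systemctl show nginx | grep LimitNOFILE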

@fapdash commented Apr 3, 2023

@v0lkan @jjxtra Why are you setting the soft limit on open files equal to the hard limit? Won't the worker processes raise their soft limit by themselves until they hit the hard limit?

Edit:

https://www.freedesktop.org/software/systemd/man/systemd.exec.html says regarding LimitNOFILE:

Nowadays, the hard limit defaults to 524288, a very high value compared to historical defaults. Typically applications should increase their soft limit to the hard limit on their own, if they are OK with working with file descriptors above 1023, i.e. do not use select(2).

But I'm still not sure whether that means nginx should do that on its own, or whether we have to set it explicitly via worker_rlimit_nofile. From my tests, it seems that worker_rlimit_nofile sets both the soft limit and the hard limit to that value.


For what it's worth, on an Ubuntu Server 22.04 machine the values I get are:

$ cat /proc/sys/fs/file-max
9223372036854775807
$ ulimit -Hn
1048576

@fapdash commented Apr 3, 2023

And I've read online a couple of times now that worker_rlimit_nofile should be double worker_connections if you are using nginx as a reverse proxy, because a reverse proxy connection needs 2 file handles. Is that not true?
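If that rule of thumb holds (one descriptor for the client socket, one for the upstream socket), the sizing would look like this; a sketch, not official guidance:

    # Each proxied connection holds a client-side FD plus an upstream FD,
    # so budget roughly twice worker_connections per worker process.
    worker_rlimit_nofile 20480;

    events {
        worker_connections 10240;
    }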

@jjxtra commented Apr 3, 2023

I ended up using 1000000 for most of these values and it was fine. For nginx you can then set the worker limit to 2000000.

@v0lkan (Author) commented Apr 18, 2023

@v0lkan @jjxtra Why are you setting the soft limit on open files equal to the hard limit on open files?

That is an arbitrary choice.

You can make the hard limit larger than the soft limit.

As long as they are large enough (and you have enough resources) things should work fine.

--

@ everyone else

I am typically terribly busy, and only have a fraction of my time to look at and maintain this gist.

And it has been a long time since I last updated it; please cross-check this one with the latest NGINX documentation.

I don't think much has changed since then, since these are pretty low-level configs; but still, the only time I will likely maintain this document is when I need to scale NGINX to its limits again :) which may be five or ten years from now, or maybe never :).

@v0lkan (Author) commented Apr 18, 2023

Nobody talks about this, but you MUST increase the max file descriptors per process; for very historical reasons, Linux limits it to 1024 even in modern distros...

Yes, I forgot to mention that. I assumed the reader would know about it.

In addition, certain cloud vendors do not allow fiddling with the limits, so always verify that the limits are what you expect after you set them.

@hummingbird-12 commented Jul 21, 2023

Hi @v0lkan,

Unlike with the access_log directive, error_log off; does not disable the error logs.
error_log off; would create a file named "off" for logging errors, inside the default NGINX config directory.

This is also mentioned in this NGINX blog post, which recommends specifying error_log /dev/null emerg; for cases where "disabling" error logging is preferred :)

Thanks! :)
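In config form, the distinction described above looks like this:

    # Valid: access_log accepts 'off'.
    access_log off;

    # 'error_log off;' would log to a file literally named 'off'.
    # Route errors to /dev/null at the most severe level instead.
    error_log /dev/null emerg;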

@RaneemAlRushud

Good day @arnav13081994,

Yes, you have.

"client_body_buffer_size 10K;"
This directive sets the buffer size for storing the client request body; in this case, it is set to 10 kilobytes (10k). It determines the maximum amount of data that Nginx will hold in memory for a request body while the request is being processed.

Since the client_max_body_size directive is set to 16 megabytes (16m), you should choose a client_body_buffer_size value that accommodates this maximum size.

Considering that Nginx uses memory for other purposes as well, you can allocate a buffer size slightly larger than the expected maximum request body size.
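In config form, the comment's suggestion (a buffer sized to hold the whole body in memory) next to a more conservative pairing; values are illustrative, and each concurrent request may allocate up to the buffer size in RAM:

    # As suggested above: hold the full body in memory, never spill to disk.
    # Costly under high concurrency, since each request can take up to 16m.
    client_max_body_size    16m;
    client_body_buffer_size 16m;

    # A leaner alternative: buffer 16k in RAM and let larger bodies
    # spill to a temporary file on disk.
    # client_body_buffer_size 16k;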

@v0lkan (Author) commented Dec 20, 2023

Folks;

I haven’t been updating this for a while, and this file should be considered a starting point, not the end-all-be-all.

Every production setup and server is different; you'll need to benchmark your performance, tweak things as you go, and read the official docs.

Nowadays, I'm dealing with pure NGINX less and cloud load balancers more;
however, if I see anything worthwhile to add here, or any change that can improve things in a system-agnostic way, I'll update the file.

One other remark: this file's particular use was for a very high-throughput system with relatively small payloads. If you have larger payloads, your memory utilization will increase, and having too high a file limit can indeed crash the process due to memory pressure.

So, yep, lots of moving parts; still, I've tried to document the file as best I can.

Cheers.

@Pvpsmash

Hi, checking in. I run a pterodactyl.io hosting company (MCST.IO), and I've run into some issues regarding gzip for Minecraft plugin modules. I'm unsure whether the issue is gzip or php-fpm, but in general, should gzip just be disabled in nginx.conf? I'm unsure because the default conf does not match my other node servers' default configs.
