
@denji
Last active March 19, 2024 05:22

Moved to git repository: https://github.com/denji/nginx-tuning

NGINX Tuning For Best Performance

You can apply this approach to any web server you like; I chose nginx because it is the one I work with most.

Generally, a properly configured nginx can handle up to 400K–500K requests per second (clustered). The most I have seen myself is 50K–80K requests per second (non-clustered) at about 30% CPU load; that was on 2 × Intel Xeon with Hyper-Threading enabled, but it should work without problems on slower machines too.

You must understand that this config is used in a testing environment, not in production, so you will need to find the best way to adapt most of these settings to your own servers.

First, you will need to install nginx:

yum install nginx    # RHEL/CentOS
apt install nginx    # Debian/Ubuntu

Back up your original configs, then start reconfiguring. Open /etc/nginx/nginx.conf with your favorite editor.

# you must set worker processes based on your CPU cores, nginx does not benefit from setting more than that
worker_processes auto; # recent versions calculate this automatically

# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set this, the OS limit will be used (often 1024 by default)
worker_rlimit_nofile 100000;

# only log critical errors
error_log /var/log/nginx/error.log crit;

# provides the configuration file context in which the directives that affect connection processing are specified.
events {
    # determines how many clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimized to serve many clients with each thread, essential for linux -- for testing environment
    use epoll;

    # accept as many connections as possible, may flood worker connections if set too low -- for testing environment
    multi_accept on;
}

http {
    # cache information about FDs, frequently accessed files
    # can boost performance, but you need to test those values
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # to boost I/O on HDD we can disable access logs
    access_log off;

    # copies data between one FD and other from within the kernel
    # faster than read() + write()
    sendfile on;

    # send headers in one piece, it is better than sending them one by one
    tcp_nopush on;

    # don't buffer data sent, good for small data bursts in real time
    tcp_nodelay on;

    # reduce the data that needs to be sent over network -- for testing environment
    gzip on;
    # gzip_static on;
    gzip_min_length 10240;
    gzip_comp_level 1;
    gzip_vary on;
    gzip_disable "msie6";
    gzip_proxied expired no-cache no-store private auth;
    gzip_types
        # text/html is always compressed by HttpGzipModule
        text/css
        text/javascript
        text/xml
        text/plain
        text/x-component
        application/javascript
        application/x-javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;

    # allow the server to close connections with non-responding clients, freeing up memory
    reset_timedout_connection on;

    # request timed out -- default 60
    client_body_timeout 10;

    # if the client stops responding, free up memory -- default 60
    send_timeout 2;

    # server will close connection after this time -- default 75
    keepalive_timeout 30;

    # number of requests client can make over keep-alive -- for testing environment
    keepalive_requests 100000;
}
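As a sanity check on the numbers above: max clients is worker_connections × worker_processes, and with `worker_processes auto` the CPU core count sets the multiplier. A minimal sketch of the arithmetic (`nproc` stands in for the worker count; halve the result if nginx also keeps upstream connections open):

```shell
# theoretical ceiling: worker_connections * workers (one worker per core with 'auto')
echo $(( 4000 * $(nproc) ))
```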

Now you can save the config and run one of the commands below:

nginx -s reload
/etc/init.d/nginx start|restart

If you wish to test the config first, you can run:

nginx -t
/etc/init.d/nginx configtest
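On success, nginx -t reports both a syntax check and an overall result, e.g.:

```
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```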

Just For Security Reasons

server_tokens off; # hide the nginx version in the Server header and on error pages

NGINX Simple DDoS Defense

This is far from real DDoS protection, but it can slow down some small attacks. These values are also tuned for a test environment; you should work out your own.

# limit the number of connections per single IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

# limit the number of requests for a given session
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

# apply the zones defined above; here we want to limit the whole server
server {
    limit_conn conn_limit_per_ip 10;
    limit_req zone=req_limit_per_ip burst=10 nodelay;
}
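By default nginx answers requests rejected by these limits with a 503; if you prefer an explicit throttling signal, both modules have allowed overriding the status since nginx 1.3.15. A small addition (the values are illustrative):

```nginx
# tell throttled clients they are rate-limited rather than that the server failed
limit_req_status  429;
limit_conn_status 429;
```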

# if the request body size is more than the buffer size, then the entire (or partial)
# request body is written into a temporary file
client_body_buffer_size  128k;

# buffer size for reading client request header -- for testing environment
client_header_buffer_size 3m;

# maximum number and size of buffers for large headers to read from client request
large_client_header_buffers 4 256k;

# read timeout for the request body from client -- for testing environment
client_body_timeout   3m;

# how long to wait for the client to send a request header -- for testing environment
client_header_timeout 3m;

Now you can test the config again:

nginx -t # /etc/init.d/nginx configtest

And then reload or restart your nginx:

nginx -s reload
/etc/init.d/nginx reload|restart

You can test this configuration with tsung, and when you are satisfied with the result you can hit Ctrl+C, because it can run for hours.

Increase The Maximum Number Of Open Files (nofile limit) – Linux

Here are two ways to raise the nofile limit (max open files / file descriptors / file handles) for NGINX on RHEL/CentOS 7+. With NGINX running, check the current limit on the master process:

$ cat /proc/$(cat /var/run/nginx.pid)/limits | grep open.files
Max open files            1024                 4096                 files

And on the worker processes:

ps --ppid $(cat /var/run/nginx.pid) -o %p|sed '1d'|xargs -I{} cat /proc/{}/limits|grep open.files

Max open files            1024                 4096                 files
Max open files            1024                 4096                 files

Setting the worker_rlimit_nofile directive in {,/usr/local}/etc/nginx/nginx.conf fails when the SELinux policy doesn't allow setrlimit. This is shown in /var/log/nginx/error.log:

2015/07/24 12:46:40 [alert] 12066#0: setrlimit(RLIMIT_NOFILE, 2342) failed (13: Permission denied)

And in /var/log/audit/audit.log

type=AVC msg=audit(1437731200.211:366): avc:  denied  { setrlimit } for  pid=12066 comm="nginx" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:httpd_t:s0 tclass=process

nofile limit without systemd

# /etc/security/limits.conf
# /etc/default/nginx (ULIMIT)
$ nano /etc/security/limits.d/nginx.conf
nginx   soft    nofile  65536
nginx   hard    nofile  65536
# takes effect at the next login/session (applied by pam_limits; sysctl -p does not reload these)

nofile limit with systemd

$ mkdir -p /etc/systemd/system/nginx.service.d
$ nano /etc/systemd/system/nginx.service.d/nginx.conf
[Service]
LimitNOFILE=30000
$ systemctl daemon-reload
$ systemctl restart nginx.service

Set the SELinux boolean httpd_setrlimit to true (1)

This will set FD limits for the worker processes. Leave the worker_rlimit_nofile directive in {,/usr/local}/etc/nginx/nginx.conf and run the following as root:

setsebool -P httpd_setrlimit 1

By default max_ranges is not limited. DoS attacks can issue many Range requests, which impacts stability through heavy I/O.
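A hedged example of capping it (the value is illustrative; a value of 0 disables byte-range support entirely):

```nginx
# allow at most one byte-range per request to blunt multipart-range abuse
max_ranges 1;
```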

Socket Sharding in NGINX 1.9.1+ (DragonFly BSD and Linux 3.9+)

Socket type        Latency (ms)   Latency stdev (ms)   CPU Load
Default            15.65          26.59                0.3
accept_mutex off   15.59          26.48                10
reuseport          12.35          3.15                 0.3
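To reproduce the reuseport row above, the option is set per listen socket (nginx 1.9.1+); a minimal sketch:

```nginx
http {
    server {
        # each worker gets its own listening socket; the kernel load-balances accepts
        listen 80 reuseport;
    }
}
```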

Thread Pools in NGINX Boost Performance 9x! (Linux)

Multi-threaded sending of files is currently supported only on Linux. Without a sendfile_max_chunk limit, one fast connection may seize the worker process entirely.
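A hedged sketch of enabling thread pools alongside the chunk limit (pool name and sizes are illustrative, not taken from the article):

```nginx
# main context: a named pool of threads for blocking file I/O
thread_pool default threads=32 max_queue=65536;

http {
    # offload file reads to the pool instead of blocking the event loop
    aio                threads=default;
    sendfile           on;
    # cap how much one connection can send per sendfile() call
    sendfile_max_chunk 512k;
}
```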

Selecting an upstream based on SSL protocol version

map $ssl_preread_protocol $upstream {
    ""        ssh.example.com:22;
    "TLSv1.2" new.example.com:443;
    default   tls.example.com:443;
}

# ssh and https on the same port; this lives in the stream {} context (requires ngx_stream_ssl_preread_module)
server {
    listen      192.168.0.1:443;
    proxy_pass  $upstream;
    ssl_preread on;
}

Happy Hacking!

Reference links

Static analyzers

Syntax highlighting

NGINX config formatter

NGINX configuration tools

BBR (Linux 4.9+)

modprobe tcp_bbr && echo 'tcp_bbr' >> /etc/modules-load.d/bbr.conf
echo 'net.ipv4.tcp_congestion_control=bbr' >> /etc/sysctl.d/99-bbr.conf
# fq is recommended for production; since Linux v4.13-rc1 BBR no longer strictly requires the fq qdisc
echo 'net.core.default_qdisc=fq' >> /etc/sysctl.d/99-bbr.conf
sysctl --system
sysctl net.ipv4.tcp_congestion_control  # verify it now reports bbr
@Sarfroz

Sarfroz commented Aug 7, 2017

I am using the server for remote downloading files. So, is there any best configuration for such. Since a lot provider uses for this purpose and they are using it without any issue.

@fabriziosalmi

keepalive_requests 100000; is not a good setting for production environments.
Nginx's default value (100) is far better.
In addition, if your website is behind Cloudflare (free plan), you can safely use timeouts <= 100s, since Cloudflare waits 100 seconds for an HTTP response from your server and triggers a 524 timeout error on longer response times.

@dimpiax

dimpiax commented Nov 1, 2017

Thanks for information!

@denji

denji commented Nov 3, 2017

keepalive_requests 100000; is not a good setting for production environments.

Comment: -- for testing environment

@SBNBON005

Nice 👍

@dertin

dertin commented Apr 18, 2018

What do you think of this configuration for a t2.micro instance ?
https://github.com/dertin/lemp-stack-debian/tree/develop/files

I would like to have something that configures these files automatically, simply by entering the type of server resources and the concurrency level of the requests, low, normal, high.

@amitkDev

amitkDev commented May 20, 2018

Nice article. One question on DDoS defence: "This is far away from secure DDoS defense but can slow down some small DDoS". Does it mean that nginx rate limiting may not be able to stop a DDoS attack with a very high input load, but is decent enough to handle sudden spikes and load slightly higher than the configured rate limit? In my test I see that it works decently with a certain input load, but with higher load more requests than expected get processed. The same test on a more powerful machine works fine.

@aslijiasheng

Nice

@hasmukhrathod

Helpful. Thank you for sharing the information.

@hkanizawa

Nice one!

@francoism90

For systemd:
nano /etc/security/limits.d/nginx.conf -> nano /etc/systemd/system/nginx.service.d/nofile.conf

@andreasvirkus

Could update to include SSL and HTTP/2 optimisations:
https://haydenjames.io/nginx-tuning-tips-tls-ssl-https-ttfb-latency/

@denji

denji commented Jan 7, 2019

@andreasvirkus Already included in the list for a long time https://github.com/denji/nginx-tuning#reference-links


ghost commented Apr 13, 2019

Please add mime type in http directive.

include mime.types;
default_type application/octet-stream;

@jeremy-gao

nice,very useful!

@juanrdlo

Thanks, this helped a lot to improve performance. It doesn't mean these values must be used as-is, but they help the server perform better.

Happy friends code

@mgutz

mgutz commented May 22, 2019

@Cyraus Was about to suggest the same.

Why aren't there thumbs up in gists?

@jessuppi

Most of the initial settings are fantastic recommendations, but I'm not sure what "not for production" means on this document. In any case many of the buffers and timeout settings here are a very bad idea for production servers...

@juanrdlo

Most of the initial settings are fantastic recommendations, but I'm not sure what "not for production" means on this document. In any case many of the buffers and timeout settings here are a very bad idea for production servers...

Many people take all of this literally, but if you are a good DevOps you should understand that it's test after test, depending on your project, to get to a better solution. Many people don't understand that, but oh well. It's my humble opinion.

@jaz660

jaz660 commented Jun 11, 2019

Nice tutorial... thank u so much....

@slidenerd

Starred. Just a quick pointer: the simple DDoS defense uses $binary_remote_addr, which works if you are not behind X-Forwarded-For; check this answer out https://serverfault.com/questions/487463/nginx-rate-limiting-with-x-forwarded-for-header and kindly add a version of the DDoS defense for when X-Forwarded-For is enabled.

@guptarohit

I compiled a few optimization hacks to increase requests/second.
Optimizations: Tuning Nginx for better RPS of an HTTP API
☮️ 🍰 ✨

@sINusBob

Nice tutorial, but please add:
include mime.types;
default_type application/octet-stream;

The absence of these instructions generates several errors like:
The stylesheet http://domain.local/assets/css/style.css was not loaded because its MIME type, “text/plain”, is not “text/css”

@nuryadwi

thanks dude, nice article

@ebuildy

ebuildy commented Feb 15, 2021

gzip will switch sendfile off! Because gzip is a content filter, nginx will do the compression for every request. Not good at all; never use a content filter if you care about performance.

@davidhcefx

davidhcefx commented Mar 2, 2021

# max clients is also limited by the number of socket connections available on the system (~64k)

What does that mean? I thought there can be as many sockets as possible, as long as each TCP tuple is unique?

@tcpdump-examples

We can get more info about tcp socket from here. Understanding TCP Socket With Examples

@sandikodev

nice

@tetthys

tetthys commented Dec 1, 2022

This is a great starting point
