@prasetiyohadi
Created December 23, 2015 08:17
Tuning Nginx for performance
# Source https://www.nginx.com/blog/tuning-nginx/
# A good rule to follow when tuning is to change one setting at a time, and set it back to the default value if the change does not improve performance
# Tuning your Linux configuration
#
# The backlog queue: settings relate to connections and how they are queued
# If you have a high rate of incoming connections and you are getting uneven levels of performance (for example some connections appear to be stalling), then changing these settings can help
#
# net.core.somaxconn – The maximum number of connections that can be queued for acceptance by Nginx
# Note: if you set this to a value greater than 512, change the backlog parameter of the Nginx listen directive to match
#
# net.core.netdev_max_backlog – The maximum number of packets that can be queued by the network interface before being handed off to the CPU
# Increasing the value can improve performance on machines with a high amount of bandwidth
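#
# As a sketch, the two backlog settings above might be raised in /etc/sysctl.conf
# (the values shown are illustrative, not recommendations; apply with sysctl -p):
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 2000
# With somaxconn above 512, the matching backlog parameter goes on the Nginx listen directive:
# listen 80 backlog=1024;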
#
# File descriptors: operating system resources used to represent connections and open files, among other things
# fs.file-max – The system-wide limit for file descriptors
# nofile – The user file descriptor limit, set in the /etc/security/limits.conf file
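#
# For example (values are illustrative; the nginx user name is an assumption), the
# limits might be raised as follows:
# In /etc/sysctl.conf:           fs.file-max = 200000
# In /etc/security/limits.conf:  nginx soft nofile 65536
#                                nginx hard nofile 65536
# Nginx can also raise its own per-worker limit with the worker_rlimit_nofile directive:
# worker_rlimit_nofile 65536;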
#
# Ephemeral ports: when Nginx is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral, port
# net.ipv4.ip_local_port_range – The start and end of the range of port values
# If you see that you are running out of ports, increase the range
# A common setting is ports 1024 to 65000.
#
# net.ipv4.tcp_fin_timeout – The time a port must be inactive before it can be reused for another connection
# The default is often 60 seconds, but it’s usually safe to reduce it to 30, or even 15, seconds
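#
# A sketch of both ephemeral-port settings in /etc/sysctl.conf (illustrative values
# matching the ranges discussed above):
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 30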
# Tuning your Nginx configuration
#
# Worker processes
# worker_processes – The number of NGINX worker processes (the default is 1)
# In most cases, running one worker process per CPU core works well, and we recommend setting this directive to auto to achieve that
#
# worker_connections – The maximum number of connections that each worker process can handle simultaneously
# The default is 512, but most systems have enough resources to support a larger number
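#
# A minimal sketch of these two directives; worker_processes sits in the main
# context and worker_connections in the events context (1024 is illustrative):
worker_processes auto;
events {
    worker_connections 1024;
}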
#
# Keepalive connections
# keepalive_requests – The number of requests a client can make over a single keepalive connection
# The default is 100, but a much higher value can be especially useful for testing with a load-generation tool, which generally sends a large number of requests from a single client
#
# keepalive_timeout – How long an idle keepalive connection remains open
#
# The following directive relates to upstream keepalives
# keepalive – The number of idle keepalive connections to an upstream server that remain open for each worker process
#
# To enable keepalive connections to upstream servers you must also include the following directives in the configuration:
proxy_http_version 1.1;
proxy_set_header Connection "";
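# For example, a sketch of an upstream keepalive configuration (the upstream name
# and server address are illustrative):
upstream backend {
    server 10.0.0.1:8080;
    keepalive 16;    # idle keepalive connections kept open per worker
}
server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://backend;
    }
}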
# Access logging
# Logging every request consumes both CPU and I/O cycles, and one way to reduce the impact is to enable access-log buffering
# To enable access-log buffering, include the buffer=**size** parameter in the access_log directive; Nginx writes the buffer contents to the log when the buffer reaches the **size** value
# To have Nginx write the buffer after a specified amount of time, include the flush=**time** parameter as well
# To disable access logging completely, pass the **off** parameter to the access_log directive
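#
# For example (path, buffer size, and flush interval are illustrative; the two
# lines are alternatives, not meant to be combined):
# access_log /var/log/nginx/access.log combined buffer=32k flush=5s;
# access_log off;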
#
# Sendfile
# The operating system’s sendfile() system call copies data from one file descriptor to another, often achieving zero-copy, which can speed up TCP data transfers
# To enable Nginx to use it, include the **sendfile** directive in the **http** context or a **server** or **location** context
# Nginx can then write cached or on-disk content down a socket without any context switching to user space, making the write extremely fast and consuming fewer CPU cycles
# Note, however, that because data copied with sendfile() bypasses user space, it is not subject to the regular NGINX processing chain and filters that change content, such as **gzip**
# When a configuration context includes both the **sendfile** directive and directives that activate a content-changing filter, Nginx automatically disables sendfile for that context
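#
# A minimal sketch, enabling sendfile in the http context; pairing it with
# tcp_nopush (which sends the response header and the start of the file in one
# packet) is a common companion setting, not something the text above requires:
http {
    sendfile on;
    tcp_nopush on;
}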
#
# Limits
# limit_conn and limit_conn_zone – Limit the number of client connections Nginx accepts, for example from a single IP address
# limit_rate – Limits the rate at which responses are transmitted to a client, per connection (so clients that open multiple connections can consume this amount of bandwidth for each connection)
# limit_req and limit_req_zone – Limit the rate of requests being processed by Nginx, which has the same benefits as setting **limit_rate**
# max_conns parameter to the **server** directive in an **upstream** configuration block – Sets the maximum number of simultaneous connections accepted by a server in an **upstream** group
# queue (Nginx Plus) – Creates a queue in which requests are placed when all the available servers in the upstream group have reached their max_conns limit
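#
# An illustrative sketch of these limits together; the zone names, sizes, rates,
# and path are examples, not recommendations (the *_zone directives belong in the
# http context):
limit_conn_zone $binary_remote_addr zone=per_ip:10m;
limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
server {
    location /download/ {
        limit_conn per_ip 10;          # at most 10 connections per client IP
        limit_req zone=req_per_ip burst=20;
        limit_rate 100k;               # throttle each connection to 100 KB/s
    }
}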
# Caching and compression can improve performance
#
# Caching
# See Nginx Content Caching: https://www.nginx.com/resources/admin-guide/content-caching/
#
# Compression
# See Compression and Decompression: https://www.nginx.com/resources/admin-guide/compression-and-decompression/
# More information
# Benchmarking NGINX: 4 Ways to Improve Accuracy: https://www.nginx.com/resources/library/benchmarking-nginx-4-ways-to-improve-accuracy/
# NGINX documentation: http://nginx.org/en/docs/
# NGINX and NGINX Plus Feature Matrix: https://www.nginx.com/products/feature-matrix/
# NGINX Plus Technical Specifications: https://www.nginx.com/products/technical-specs/