@peanutpi
Last active April 26, 2017
Nginx Tuning

General Tuning

Worker Processes and Worker Connections

nginx uses a fixed number of worker processes, each of which handles incoming requests. The general rule of thumb is one worker for each CPU core your server contains.

You can count the CPUs available to your system by running:

$ grep ^processor /proc/cpuinfo | wc -l
However, the recommended approach is to let nginx detect the core count automatically:
# One worker per CPU-core.
worker_processes  auto;

The worker_connections setting specifies how many connections each worker process can handle. Since every browser usually opens at least two connections per server, the effective number of simultaneous clients is roughly half that figure. The theoretical maximum number of connections your server can handle is: worker_processes * worker_connections.
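As a quick sanity check, here is the capacity arithmetic for a hypothetical 4-core server using the 8096-connection setting shown later in this guide (the core count is an illustrative assumption):

```shell
# Hypothetical example: 4 CPU cores, 8096 connections per worker.
workers=4
conns=8096

# Theoretical ceiling on simultaneous connections.
echo "max connections: $(( workers * conns ))"

# Browsers open ~2 connections each, so halve it for a client estimate.
echo "approx. max clients: $(( workers * conns / 2 ))"
```

Running this prints a ceiling of 32384 connections, or roughly 16192 concurrent clients.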

Enabling multi_accept causes nginx to immediately accept as many connections as it can when notified of a new connection, subject to the kernel socket setup.

The epoll event model is generally recommended for best throughput on Linux.

events {
    worker_connections  8096;
    multi_accept        on;
    use                 epoll;
}

worker_rlimit_nofile raises the per-worker limit on open file descriptors, so workers can hold more connections and cached filehandles than the system default allows:

worker_rlimit_nofile 40000;

sendfile: optimizes serving static files from the file system, like your logo, by copying data in-kernel rather than through user space.

tcp_nopush and tcp_nodelay: optimize nginx's use of TCP for response headers and small bursts of traffic, for things like Socket.IO or frequent REST calls back to your site.

http {
    sendfile           on;
    tcp_nopush         on;
    tcp_nodelay        on;

    # Your content here ..
}

Timeouts

Timeouts can also drastically improve performance.

The client_body_timeout and client_header_timeout directives control how long the server will wait for a client body or client header to be sent after the request. If neither a body nor a header arrives within the timeout, the server issues a 408 (Request Timeout) error.

keepalive_timeout: allows browsers to keep their connections open for a while so they don't have to reconnect as often. nginx will close idle client connections after this period of time.

send_timeout applies not to the entire transfer of the response, but to the interval between two successive write operations; if the client accepts nothing within this time, nginx shuts down the connection.

client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;

Compression

Enabling gzip compression reduces the bandwidth needed to serve your content. However, this involves the trade-off common to tuning: compression takes CPU resources from your server, so enabling it indiscriminately can cost more than it saves.

Generally the best approach with compression is to only enable it for large files, and to avoid compressing things that are unlikely to be reduced in size (such as images, executables, and similar binary files).

With that in mind the following is a sensible configuration:

gzip  on;
gzip_vary on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";

This enables compression for files that are over 10k, aren't being requested by early versions of Microsoft's Internet Explorer, and only attempts to compress text-based files.

Static File Caching

A simple way of avoiding your server handling requests is if remote clients believe their content is already up-to-date.

To do this you want to set suitable cache-friendly headers, and a simple way of doing that is to declare that all images, stylesheets, and scripts are fixed for a given period of time. The following location block can be added inside a server block:

     location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
         access_log        off;
         log_not_found     off;
         expires           30d;
     }

Here we've disabled logging of media-files, and configured several suffixes which will be considered valid (and thus not re-fetched) for 30 days. You might want to adjust the period if you're editing your CSS files frequently.

Filehandle Cache

Your site likely has a few static assets on the file system, like logos, CSS files, and JavaScript, that are commonly used across the site, so it's quite a bit faster to have nginx cache their filehandles for short periods of time. If you aren't deploying frequently you can safely raise the numbers in the configuration below.

If you're serving a large number of static files you'll benefit from keeping filehandles to requested files open - this avoids the need to reopen them in the future.

NOTE: You should only run with this enabled if you're not editing the files at the time you're serving them. Because file accesses are cached any 404s will be cached too, similarly file-sizes will be cached, and if you change them your served content will be out of date.

The following is a good example, which can be placed in either the server section of your nginx config, or within the main http block:

open_file_cache          max=2000 inactive=20s;
open_file_cache_valid    60s;
open_file_cache_min_uses 5;
open_file_cache_errors   off;

This configuration block tells the server to cache 2000 open filehandles, closing handles that have no hits for 20 seconds. The cached handles are considered valid for 60 seconds, and only files that were accessed five times will be considered suitable for caching.

The net result should be that frequently accessed files have handles cached for them, which will cut down on filesystem accesses.

Tip: You might consider the kernel tuning section, which discusses raising the file-handle limit.

Logging

Nginx logs every request that hits the VPS to a log file. If you use analytics to monitor this, you may want to turn this functionality off. Simply edit the access_log directive:

access_log off;

Save and close the file, then verify the configuration and restart nginx:

sudo nginx -t
sudo service nginx restart
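For context, a minimal sketch of a logging setup that silences the access log but keeps errors visible (the error-log path is the usual Debian/Ubuntu default and is an assumption, not from the original guide):

```nginx
http {
    # Skip the per-request access log entirely.
    access_log off;

    # Keep the error log so real problems are still recorded.
    # Path below is the common Debian/Ubuntu default; adjust for your system.
    error_log /var/log/nginx/error.log warn;
}
```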

General Webserver Tips

Request Size Limits

Unless you're allowing remote users to upload images, or other large media, you should disable handling of large-requests.

Large requests tie up your server while the request body is read and processed, and oversized requests can be used for denial-of-service attacks.

Under nginx incoming requests can be limited in size via:

# Don't allow requests of more than 100k in size.
client_max_body_size 100k;
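If one endpoint does legitimately need uploads, the limit can be relaxed per location rather than raised globally. A sketch, where the /upload path is an illustrative assumption:

```nginx
server {
    # Reject oversized request bodies everywhere by default.
    client_max_body_size 100k;

    location /upload {
        # Hypothetical upload endpoint that accepts large bodies.
        client_max_body_size 20m;
    }
}
```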

When you just can't scale further

When you can't scale any further you might gain additional performance by placing a caching proxy in front of your server.

The Varnish Cache is highly regarded, but there are also other alternatives such as the venerable Squid cache.
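Varnish and Squid run as separate services; if you'd rather stay within nginx, its own proxy_cache can serve a similar role. A minimal micro-caching sketch, where the cache path, zone name, and backend port are all illustrative assumptions:

```nginx
# Illustrative only: briefly cache responses from a local backend.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=100m;

server {
    listen 80;

    location / {
        proxy_cache       microcache;
        proxy_cache_valid 200 10s;               # micro-cache successful responses
        proxy_pass        http://127.0.0.1:8080; # hypothetical backend
    }
}
```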

Buffers

Another incredibly important tweak we can make is to the buffer size. If the buffer sizes are too low, then Nginx will have to write to a temporary file causing the disk to read and write constantly. There are a few directives we'll need to understand before making any decisions.

client_body_buffer_size: sets the buffer size for the client request body, meaning any POST actions sent to nginx. POST actions are typically form submissions.

client_header_buffer_size: similar to the previous directive, but for the client request header. For all intents and purposes, 1k is usually a decent size.

client_max_body_size: the maximum allowed size of a client request body. If the maximum size is exceeded, nginx returns a 413 (Request Entity Too Large) error.

large_client_header_buffers: the maximum number and size of buffers for large client headers.

client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
