Using NGINX Plus to Reduce the Frequency of Configuration Reloads
# This NGINX Plus configuration implements route-based session persistence
# and enables the NGINX Plus API. Because the NGINX Plus API is used to
# dynamically configure the servers in the upstream group, servers are not
# defined statically in this file.
# To add an upstream server, run this command, replacing
# <IP-ADDRESS:PORT> and <API-VERSION> with appropriate values:
#
# curl -sX POST -d '{"server":"<IP-ADDRESS:PORT>", "route":"www.example.com"}' http://127.0.0.1:8080/api/<API-VERSION>/http/upstreams/vhosts/servers
# To remove an upstream server, first run this command to learn its internal
# ID, as reported in the 'id' field:
#
# curl -s http://127.0.0.1:8080/api/<API-VERSION>/http/upstreams/vhosts/servers | jq -c '.peers[] | {server: .server, id: .id}'
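#
# Each line of output is a compact JSON object whose shape follows directly
# from the jq filter above; for example (illustrative values):
#
# {"server":"192.168.1.10:80","id":0}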
#
# Then run this command to remove the server:
#
# curl -sX DELETE http://127.0.0.1:8080/api/<API-VERSION>/http/upstreams/vhosts/servers/<SERVER-ID>
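#
# To take a server out of rotation gracefully before removing it, the
# NGINX Plus API also supports session draining via PATCH; a sketch (the
# 'drain' parameter stops new sessions while existing ones finish):
#
# curl -sX PATCH -d '{"drain":true}' http://127.0.0.1:8080/api/<API-VERSION>/http/upstreams/vhosts/servers/<SERVER-ID>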
# For more information, see the blog post associated with this file:
# https://www.nginx.com/blog/using-nginx-plus-to-reduce-the-frequency-of-configuration-reloads/
user nginx;
worker_processes auto;
events {
    worker_connections 1024;
}
http {
    upstream vhosts {
        # Shared memory zone for state information about servers in
        # the upstream group
        zone vhosts 128k;

        # Route-based session persistence based on the 'Host' header
        sticky route $http_host;

        # There are no 'server' directives here because upstream
        # servers are configured dynamically with the NGINX Plus API.
    }
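
    # Note: adding the NGINX Plus 'state' directive inside the upstream
    # block above persists the dynamically configured server list across
    # restarts and reloads, so changes made through the API are not lost.
    # A sketch (the file path is an assumption):
    #
    # state /var/lib/nginx/state/vhosts.conf;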
    server {
        # Production frontend listening on the standard HTTP port
        listen 80;

        # Enables collection of metrics for this virtual server, in a
        # status zone named 'vhosts'
        status_zone vhosts;

        location / {
            proxy_pass http://vhosts;
            proxy_set_header Host $http_host;
        }
    }
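
    # Because 'sticky route $http_host' selects the upstream server whose
    # 'route' value matches the request's Host header, this single upstream
    # group can serve many virtual hosts. A second virtual host could be
    # added with, for example (hypothetical address):
    #
    # curl -sX POST -d '{"server":"192.168.1.20:80", "route":"www2.example.com"}' http://127.0.0.1:8080/api/<API-VERSION>/http/upstreams/vhosts/servers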
    server {
        # Dedicated server for upstream management, on a separate port
        # that can be secured by a firewall
        listen 8080;

        # Additional security for the management server
        allow 10.0.0.0/24;
        allow 127.0.0.1/32;
        deny all;

        # Location of the NGINX Plus static files
        root /usr/share/nginx/html;

        # Enables the NGINX Plus API
        location /api {
            api write=on;
        }

        # HTML page for the NGINX Plus dashboard
        location = /dashboard.html { }

        # Route requests for / to the dashboard
        location = / {
            return 301 /dashboard.html;
        }

        # Redirect requests made to the pre-R14 dashboard
        location = /status.html {
            return 301 /dashboard.html;
        }
    }
}
# vim: syntax=nginx
# The configuration in this file can be used to illustrate the effect of
# frequently reloading the NGINX configuration under conditions of high load.
# At each reload, new nginx worker processes are started to use the newly
# loaded configuration, but the existing worker processes continue to run
# until all of their established connections have terminated. Under frequent
# reloads, large numbers of lingering worker processes can consume excessive
# memory and eventually overload the system.
# As discussed on our blog
#
# https://www.nginx.com/blog/using-nginx-plus-to-reduce-the-frequency-of-configuration-reloads/
#
# using the NGINX Plus API to dynamically reconfigure the servers in upstream
# groups is one way to reduce the frequency of reloads.
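#
# A complementary mitigation, not used in this demo, is the core nginx
# 'worker_shutdown_timeout' directive (available since nginx 1.11.11),
# which bounds how long old worker processes may linger in the
# "shutting down" state after a reload; a sketch:
#
# worker_shutdown_timeout 30s;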
# To simulate the negative effects of frequent reloads:
# 1) Run nginx with this configuration file, and display the process list:
#
# ps ax | grep nginx
# 2) Generate several requests, each of which takes about a minute:
#
# curl -v http://localhost/ & curl -v http://localhost/ & curl -v http://localhost/ & curl -v http://localhost/
# 3) While the requests are processing, reload the configuration:
#
# nginx -s reload
# 4) Display the process list. There are likely to be a number of processes in
# the "shutting down" state:
#
# ps ax | grep nginx
# 1304 ? S 0:00 \_ nginx: worker process is shutting down
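#
# 5) (Optional check) After the curl requests complete, display the process
#    list once more; the old workers should have exited, because their
#    remaining established connections have now closed:
#
#    ps ax | grep nginx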
user nginx;
worker_processes 8;
events {
    worker_connections 1024;
}
http {
    default_type text/plain;
    tcp_nodelay on;
    server {
        listen 8081;

        # Return a long plain-text body (the documentation for the
        # 'proxy_limit_rate' directive) so that rate-limited requests
        # take a noticeable amount of time to complete
        return 200 "Server $server_addr:$server_port\n\nTime: $time_local\n\n
Syntax:  proxy_limit_rate rate;
Default: proxy_limit_rate 0;
Context: http, server, location

Limits how quickly the response from the
proxied server is read, in bytes per second.
The limit applies to each request, so if a worker
process simultaneously opens two connections to a
proxied server, the overall rate is twice the
specified limit.
The default value of 0 (zero) disables rate limiting.
The directive is effective only if buffering of
responses from the proxied server is enabled,
which it is by default (controlled by the
'proxy_buffering on' directive).
";
    }
    upstream backend {
        zone backend 64k;
        server 127.0.0.1:8081;
    }
    server {
        listen 80 reuseport; # Hint: test with and without the reuseport parameter
        proxy_limit_rate 20; # Read backend responses at 20 bytes/second to emulate long-lived connections

        location / {
            proxy_pass http://backend;
        }
    }
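
    # To observe the effect of the rate limit, time a single request
    # (a sketch; adjust host and port as needed). With proxy_limit_rate 20,
    # nginx reads the response body from the backend at 20 bytes/second,
    # so each request takes on the order of a minute, as noted above:
    #
    # time curl -so /dev/null http://localhost/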
}
# vim: syntax=nginx