Gist by @amir-shehzad, forked from HacKanCuBa/gunicorn.py, created March 17, 2024.
A Gunicorn (http://gunicorn.org/) config file containing the fundamental configuration settings.
"""Gunicorn config file.
by HacKan (https://hackan.net)
Find it at: https://gist.github.com/HacKanCuBa/275bfca09d614ee9370727f5f40dab9e
Based on: https://gist.github.com/KodeKracker/6bc6a3a35dcfbc36e2b7
Changelog
=========
See revisions to access other versions of this file.
2020-09-04 preload app by default and remove derogatory terms
2020-01-13 updated for v20.0
2019-10-01 clarified that these settings were for v19.9
2019-09-30 several fixes and missing settings addition
2019-09-26 forked, minor changes (mostly aesthetic, some of significance)
"""
# Gunicorn (v20.0) Configuration File
# Reference - https://docs.gunicorn.org/en/20.0.4/settings.html
#
# To run gunicorn by using this config, run gunicorn by passing
# config file path, ex:
#
# $ gunicorn --config=gunicorn.py MODULE_NAME:VARIABLE_NAME
#
import multiprocessing
from tempfile import mkdtemp
# ===============================================
# Server Socket
# ===============================================
# bind - The server socket to bind
bind = '127.0.0.1:8000'
# backlog - The maximum number of pending connections
# Generally in range 64-2048
backlog = 2048
# ===============================================
# Worker Processes
# ===============================================
# workers - The number of worker processes for handling requests.
# A positive integer generally in the 2-4 x $(NUM_CORES) range
workers = multiprocessing.cpu_count() * 2 + 1
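A minimal sketch of how the heuristic above can be made overridable per deployment. `WEB_CONCURRENCY` is an assumed convention (used e.g. by some PaaS providers), not a gunicorn setting:

```python
import multiprocessing
import os

def default_workers(env=os.environ):
    """Return a worker count: an explicit override wins, else (2 * cores) + 1."""
    override = env.get('WEB_CONCURRENCY')
    if override:
        return int(override)
    return multiprocessing.cpu_count() * 2 + 1
```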
# worker_class - The type of workers to use
# A string referring to one of the following bundled classes:
# 1. sync
# 2. eventlet - Requires eventlet >= 0.9.7
# 3. gevent - Requires gevent >= 0.13
# 4. tornado - Requires tornado >= 0.2
# 5. gthread - Python 2 requires the futures package to be installed (or
# install it via pip install gunicorn[gthread])
# 6. uvicorn - uvicorn.workers.UvicornWorker
#
# You’ll want to read http://docs.gunicorn.org/en/latest/design.html
# for information on when you might want to choose one of the other
# worker classes.
# See also: https://www.uvicorn.org/deployment/
worker_class = 'sync'
# threads - The number of worker threads for handling requests. This will
# run each worker with the specified number of threads.
# A positive integer generally in the 2-4 x $(NUM_CORES) range
threads = 1
# worker_connections - The maximum number of simultaneous clients.
# This setting only affects the Eventlet and Gevent worker types.
worker_connections = 1000
# max_requests - The maximum number of requests a worker will process
# before restarting
# Any value greater than zero will limit the number of requests a worker
# will process before automatically restarting. This is a simple method
# to help limit the damage of memory leaks.
max_requests = 10000
# max_requests_jitter - The maximum jitter to add to the max-requests setting
# The jitter causes the restart per worker to be randomized by
# randint(0, max_requests_jitter). This is intended to stagger worker
# restarts to avoid all workers restarting at the same time.
max_requests_jitter = 1000
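An illustration of the arithmetic only (not gunicorn's API): each worker draws its own restart threshold, which is what staggers the restarts.

```python
import random

def restart_threshold(max_requests, max_requests_jitter, rng=random):
    # Each worker restarts after max_requests plus a per-worker random
    # offset in [0, max_requests_jitter], so not all workers restart at once.
    return max_requests + rng.randint(0, max_requests_jitter)
```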
# timeout - Workers silent for more than this many seconds are killed
# and restarted
timeout = 30
# graceful_timeout - Timeout for graceful workers restart
# The maximum time a worker can keep handling requests after receiving the
# restart signal. When the time is up, the worker is force killed.
graceful_timeout = 30
# keepalive - The number of seconds to wait for requests on a
# Keep-Alive connection
# Generally set in the 1-5 seconds range.
keepalive = 2
# ===============================================
# Security
# ===============================================
# limit_request_line - The maximum size of HTTP request line in bytes
# Value is a number from 0 (unlimited) to 8190.
# This parameter can be used to help prevent DoS attacks.
limit_request_line = 1024
# limit_request_fields - Limit the number of HTTP header fields in a request
# This parameter is used to limit the number of headers in a request to
# help prevent DoS attacks. Used together with limit_request_field_size, it
# provides additional safety.
# By default this value is 100 and can't be larger than 32768.
limit_request_fields = 100
# limit_request_field_size - Limit the allowed size of an HTTP request
# header field.
# Value is a number from 0 (unlimited) to 8190.
limit_request_field_size = 1024
# ===============================================
# Debugging
# ===============================================
# reload - Restart workers when code changes
reload = False
# reload_engine - The implementation that should be used to power reload.
# Valid engines are:
# ‘auto’ (default)
# ‘poll’
# ‘inotify’ (requires inotify)
reload_engine = 'auto'
# reload_extra_files - Extends reload option to also watch and reload on
# additional files (e.g., templates, configurations, specifications, etc.).
reload_extra_files = []
# spew - Install a trace function that spews every line executed by the server
spew = False
# check_config - Check the configuration
check_config = False
# ===============================================
# Server Mechanics
# ===============================================
# preload_app - Load application code before the worker processes are forked
# By preloading an application you can save some RAM resources as well as
# speed up server boot times. Although, if you defer application loading to
# each worker process, you can reload your application code easily by
# restarting workers.
preload_app = True
# sendfile - Enables or disables the use of sendfile()
sendfile = False
# reuse_port - Set the SO_REUSEPORT flag on the listening socket.
reuse_port = False
# chdir - Chdir to the specified directory before loading apps
chdir = ''
# daemon - Daemonize the Gunicorn process.
# Detaches the server from the controlling terminal and enters the background.
daemon = False
# raw_env - Set environment variable (key=value)
# Pass variables to the execution environment.
raw_env = []
# pidfile - A filename to use for the PID file
# If not set, no PID file will be written.
pidfile = None
# worker_tmp_dir - A directory to use for the worker heartbeat temporary file
# If not set, the default temporary directory will be used.
worker_tmp_dir = mkdtemp(prefix='gunicorn_')
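One alternative sketch (an assumption, Linux-only, not required by gunicorn): point the heartbeat directory at a RAM-backed tmpfs such as /dev/shm when it exists, since a slow or blocking filesystem here can make workers miss heartbeats and get killed.

```python
import os
from tempfile import mkdtemp

# Prefer a tmpfs for the heartbeat file when available; otherwise fall back
# to a freshly created temporary directory, as above.
heartbeat_dir = '/dev/shm' if os.path.isdir('/dev/shm') else mkdtemp(prefix='gunicorn_')
```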
# user - Switch worker processes to run as this user
# A valid user id (as an integer) or the name of a user that can be retrieved
# with a call to pwd.getpwnam(value) or None to not change the worker process
# user
user = None
# group - Switch worker process to run as this group.
# A valid group id (as an integer) or the name of a group that can be
# retrieved with a call to grp.getgrnam(value) or None to not change the
# worker processes group.
group = None
# umask - A bit mask for the file mode on files written by Gunicorn
# Note that this affects unix socket permissions.
# A valid value for the os.umask(mode) call or a string compatible with
# int(value, 0) (0 means Python guesses the base, so values like "0", "0xFF",
# and "0o22" are valid for decimal, hex, and octal representations)
umask = 0
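Plain Python, no gunicorn involved: `int(value, 0)` infers the base from the string's prefix, which is why strings in these forms are accepted.

```python
# Base is inferred from the prefix when the base argument is 0:
assert int('0', 0) == 0        # decimal
assert int('0xFF', 0) == 255   # hexadecimal
assert int('0o22', 0) == 18    # octal (0o22 corresponds to umask 022)
```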
# initgroups - If true, set the worker process’s group access list with all of
# the groups of which the specified username is a member, plus the specified
# group id.
initgroups = False
# tmp_upload_dir - Directory to store temporary request data as they are read
# This path should be writable by the process permissions set for Gunicorn
# workers. If not specified, Gunicorn will choose a system generated temporary
# directory.
tmp_upload_dir = None
# secure_scheme_headers - A dictionary containing headers and values that the
# front-end proxy uses to indicate HTTPS requests. These tell gunicorn to set
# wsgi.url_scheme to “https”, so your application can tell that the request is
# secure.
secure_scheme_headers = {
    'X-FORWARDED-PROTOCOL': 'ssl',
    'X-FORWARDED-PROTO': 'https',
    'X-FORWARDED-SSL': 'on',
}
# forwarded_allow_ips - Front-end IPs that are allowed to set secure headers
# (comma-separated)
# Set to "*" to disable checking of front-end IPs (useful for setups where
# you don't know the front-end IP address in advance, but you still trust
# the environment)
forwarded_allow_ips = '127.0.0.1'
# pythonpath - A comma-separated list of directories to add to the Python path.
# e.g. '/home/djangoprojects/myproject,/home/python/mylibrary'.
pythonpath = None
# paste - Load a PasteDeploy config file. The argument may contain a # symbol
# followed by the name of an app section from the config file,
# e.g. production.ini#admin.
# At this time, using alternate server blocks is not supported. Use the command
# line arguments to control server configuration instead.
paste = None
# proxy_protocol - Enable PROXY protocol detection (PROXY mode).
# Allows using HTTP and the PROXY protocol together. It may be useful when
# working with stunnel as an HTTPS front end and gunicorn as an HTTP server.
# PROXY protocol: http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt
proxy_protocol = False
# proxy_allow_ips - Front-end IPs from which proxy requests are accepted
# (comma-separated)
# Set to "*" to disable checking of front-end IPs (useful for setups where
# you don't know the front-end IP address in advance, but you still trust
# the environment)
proxy_allow_ips = '127.0.0.1'
# raw_paste_global_conf - Set a PasteDeploy global config variable in key=value
# form.
# The option can be specified multiple times.
# The variables are passed to the PasteDeploy entrypoint. Example:
# $ gunicorn -b 127.0.0.1:8000 --paste development.ini --paste-global FOO=1
# --paste-global BAR=2
raw_paste_global_conf = []
# strip_header_spaces - Strip spaces present between the header name and
# the `:`. This is known to induce vulnerabilities and is not compliant with
# the HTTP/1.1 standard. See
# https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn
# Use with care and only if necessary.
strip_header_spaces = False
# ===============================================
# SSL
# ===============================================
# keyfile - SSL Key file
keyfile = None
# certfile - SSL Certificate file
certfile = None
# ssl_version - SSL version to use (see the stdlib ssl module)
# TLS         Negotiate the highest possible version between client/server.
#             Can yield SSL. (Python 3.6+)
# TLSv1       TLS 1.0
# TLSv1_1     TLS 1.1 (Python 3.4+)
# TLSv1_2     TLS 1.2 (Python 3.4+)
# TLS_SERVER  Auto-negotiate the highest protocol version like TLS, but only
#             support server-side SSLSocket connections. (Python 3.6+)
ssl_version = 'TLSv1_2'
# cert_reqs - Whether a client certificate is required (see the stdlib ssl module)
cert_reqs = 0
# ca_certs - CA certificates file
ca_certs = None
# suppress_ragged_eofs - Suppress ragged EOFs (see the stdlib ssl module)
suppress_ragged_eofs = True
# do_handshake_on_connect - Whether to perform the SSL handshake on socket
# connect (see the stdlib ssl module)
do_handshake_on_connect = False
# ciphers - SSL Cipher suite to use, in the format of an OpenSSL cipher list.
ciphers = None
# ===============================================
# Logging
# ===============================================
# accesslog - The Access log file to write to.
# “-” means log to stdout.
accesslog = '-'
# access_log_format - The access log format
#
# Identifier | Description
# ------------------------------------------------------------
# h -> remote address
# l -> ‘-‘
# u -> user name
# t -> date of the request
# r -> status line (e.g. GET / HTTP/1.1)
# m -> request method
# U -> URL path without query string
# q -> query string
# H -> protocol
# s -> status
# B -> response length
# b -> response length or ‘-‘ (CLF format)
# f -> referer
# a -> user agent
# T -> request time in seconds
# D -> request time in microseconds
# L -> request time in decimal seconds
# p -> process ID
# {header}i -> request header
# {header}o -> response header
# {variable}e -> environment variable
# ---------------------------------------------------------------
#
# Use lowercase for header and environment variable names, and put {...}x names
# inside %(...)s. For example:
#
# %({x-forwarded-for}i)s
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
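A hypothetical variant for deployments behind a reverse proxy: there, `%(h)s` is the proxy's address, so logging the `X-Forwarded-For` request header instead recovers the client IP (header names inside `{...}i` must be lowercase).

```python
# Same layout as the default format, with the remote-address field replaced
# by the X-Forwarded-For request header:
proxied_access_log_format = (
    '%({x-forwarded-for}i)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s '
    '"%(f)s" "%(a)s"'
)
```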
# disable_redirect_access_to_syslog - Disable redirect access logs to syslog.
disable_redirect_access_to_syslog = False
# errorlog - The Error log file to write to.
# “-” means log to stderr.
errorlog = '-'
# loglevel - The granularity of Error log outputs.
# Valid level names are:
# 1. debug
# 2. info
# 3. warning
# 4. error
# 5. critical
loglevel = 'info'
# capture_output - Redirect stdout/stderr to specified file in errorlog.
capture_output = False
# logger_class - The logger you want to use to log events in gunicorn.
# The default class (gunicorn.glogging.Logger) handles most normal usages
# of logging. It provides error and access logging.
logger_class = 'gunicorn.glogging.Logger'
# logconfig - The log config file to use. Gunicorn uses the standard Python
# logging module’s Configuration file format.
logconfig = None
# logconfig_dict - The log config dictionary to use, using the standard
# Python logging module’s dictionary configuration format. This option
# takes precedence over the logconfig option, which uses the older file
# configuration format.
# Format:
# https://docs.python.org/3/library/logging.config.html#logging.config.dictConfig
logconfig_dict = {}
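A minimal sketch (handler and logger names here are illustrative) of a dictionary in the `dictConfig` schema this setting expects, routing gunicorn's error logger to stderr at INFO level:

```python
import logging.config

example_logconfig_dict = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stderr',
        },
    },
    'loggers': {
        'gunicorn.error': {'handlers': ['console'], 'level': 'INFO'},
    },
}

# Sanity-check that the dictionary is valid dictConfig input:
logging.config.dictConfig(example_logconfig_dict)
```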
# syslog_addr - Address to send syslog messages.
#
# Address is a string of the form:
# ‘unix://PATH#TYPE’ : for unix domain socket. TYPE can be ‘stream’ for the
# stream driver or ‘dgram’ for the dgram driver.
# ‘stream’ is the default.
# ‘udp://HOST:PORT’ : for UDP sockets
# ‘tcp://HOST:PORT‘ : for TCP sockets
syslog_addr = 'udp://localhost:514'
# syslog - Send Gunicorn logs to syslog
syslog = False
# syslog_prefix - Makes gunicorn use the parameter as program-name in the
# syslog entries.
# All entries will be prefixed by gunicorn.<prefix>. By default the program
# name is the name of the process.
syslog_prefix = None
# syslog_facility - Syslog facility name
syslog_facility = 'user'
# enable_stdio_inheritance - Enable stdio inheritance
# Enable inheritance for stdio file descriptors in daemon mode.
# Note: To disable Python's stdout buffering, you can set the environment
# variable PYTHONUNBUFFERED.
enable_stdio_inheritance = False
# statsd_host - host:port of the statsd server to log to
statsd_host = None
# statsd_prefix - Prefix to use when emitting statsd metrics (a trailing . is
# added, if not provided)
statsd_prefix = ''
# dogstatsd_tags - A comma-delimited list of datadog statsd (dogstatsd) tags to
# append to statsd metrics.
dogstatsd_tags = ''
# ===============================================
# Process Naming
# ===============================================
# proc_name - A base to use with setproctitle for process naming.
# This affects things like `ps` and `top`.
# It defaults to ‘gunicorn’.
proc_name = 'gunicorn'
# ===============================================
# Server Hooks
# ===============================================
def on_starting(server):
    """
    Execute code just before the main process is initialized.
    The callable needs to accept a single instance variable for the Arbiter.
    """


def on_reload(server):
    """
    Execute code to recycle workers during a reload via SIGHUP.
    The callable needs to accept a single instance variable for the Arbiter.
    """


def when_ready(server):
    """
    Execute code just after the server is started.
    The callable needs to accept a single instance variable for the Arbiter.
    """


def pre_fork(server, worker):
    """
    Execute code just before a worker is forked.
    The callable needs to accept two instance variables for the Arbiter and
    the new Worker.
    """


def post_fork(server, worker):
    """
    Execute code just after a worker has been forked.
    The callable needs to accept two instance variables for the Arbiter and
    the new Worker.
    """


def post_worker_init(worker):
    """
    Execute code just after a worker has initialized the application.
    The callable needs to accept one instance variable for the initialized
    Worker.
    """


def worker_int(worker):
    """
    Execute code just after a worker exited on SIGINT or SIGQUIT.
    The callable needs to accept one instance variable for the initialized
    Worker.
    """


def worker_abort(worker):
    """
    Execute code when a worker receives the SIGABRT signal.
    This call generally happens on timeout.
    The callable needs to accept one instance variable for the initialized
    Worker.
    """


def pre_exec(server):
    """
    Execute code just before a new main process is forked.
    The callable needs to accept a single instance variable for the Arbiter.
    """


def pre_request(worker, req):
    """
    Execute code just before a worker processes the request.
    The callable needs to accept two instance variables for the Worker and
    the Request.
    """
    worker.log.debug('%s %s', req.method, req.path)


def post_request(worker, req, environ, resp):
    """
    Execute code after a worker processes the request.
    The callable needs to accept two instance variables for the Worker and
    the Request.
    """


def child_exit(server, worker):
    """
    Execute code just after a worker has exited, in the main process.
    The callable needs to accept two instance variables for the Arbiter and
    the just-exited Worker.
    """


def worker_exit(server, worker):
    """
    Execute code just after a worker has exited, in the worker process.
    The callable needs to accept two instance variables for the Arbiter and
    the just-exited Worker.
    """


def nworkers_changed(server, new_value, old_value):
    """
    Execute code just after num_workers has been changed.
    The callable needs to accept an instance variable of the Arbiter and two
    integers: the number of workers after and before the change.
    If the number of workers is set for the first time, old_value would be
    None.
    """


def on_exit(server):
    """
    Execute code just before exiting gunicorn.
    The callable needs to accept a single instance variable for the Arbiter.
    """