# Required by memory-intensive applications like Elasticsearch
vm.max_map_count = 262144
vm.swappiness = 10
vm.dirty_ratio = 80
vm.dirty_background_ratio = 5
vm.dirty_expire_centisecs = 12000
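The vm.* settings above are plain sysctl keys, so they can be read back from /proc/sys once applied. A minimal verification sketch (assuming a Linux host; the expected values simply mirror the settings in this file):

```python
# Read the vm.* keys above back from /proc/sys/vm and compare them to the
# values this file sets. Purely illustrative; paths assume Linux.
from pathlib import Path

EXPECTED = {
    "max_map_count": 262144,
    "swappiness": 10,
    "dirty_ratio": 80,
    "dirty_background_ratio": 5,
    "dirty_expire_centisecs": 12000,
}

for key, want in EXPECTED.items():
    have = int(Path(f"/proc/sys/vm/{key}").read_text())
    status = "ok" if have == want else f"expected {want}"
    print(f"vm.{key} = {have} ({status})")
```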
# Max listen queue backlog
net.core.somaxconn = 1000
# Max number of packets that can be queued on the input side of an interface;
# if the kernel receives packets faster than it can process them,
# this queue absorbs the burst
net.core.netdev_max_backlog = 5000
# Max receive buffer size
net.core.rmem_max = 16777216
# Max send buffer size
net.core.wmem_max = 16777216
# Default receive buffer size
net.core.rmem_default = 65536
# Default send buffer size
net.core.wmem_default = 65536
# The first value is the minimum receive/send buffer for each TCP connection;
# this buffer is always allocated to a TCP socket, even under high memory
# pressure on the system.
# The second value is the default receive/send buffer allocated for each
# TCP socket. It overrides the /proc/sys/net/core/rmem_default value used
# by other protocols.
# The third value is the maximum receive/send buffer that can be allocated
# for a TCP socket.
# Note: the kernel auto-tunes these values within the min-max range.
# To change that behavior, disable net.ipv4.tcp_moderate_rcvbuf
net.ipv4.tcp_wmem = 4096 12582912 16777216
net.ipv4.tcp_rmem = 4096 12582912 16777216
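To make the min/default/max layout concrete, here is a small sketch that parses the three fields back out of /proc/sys (assumes Linux; read_tcp_buffer is a hypothetical helper name, not part of any library):

```python
# Parse the three-field tcp_rmem/tcp_wmem values (min, default, max) from
# /proc/sys/net/ipv4. read_tcp_buffer is a hypothetical helper name.
from pathlib import Path

def read_tcp_buffer(name: str) -> dict:
    raw = Path(f"/proc/sys/net/ipv4/{name}").read_text().split()
    minimum, default, maximum = (int(v) for v in raw)
    return {"min": minimum, "default": default, "max": maximum}

if __name__ == "__main__":
    for name in ("tcp_rmem", "tcp_wmem"):
        vals = read_tcp_buffer(name)
        print(f"{name}: min={vals['min']} default={vals['default']} max={vals['max']}")
```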
# Units are pages (the default page size is 4 KiB)
# These are global limits on the total pages used by all TCP sockets:
# 8388608 pages * 4 KiB = 32 GiB
# Fields: low, pressure, high
# When memory allocated by TCP exceeds "pressure", the kernel starts to
# constrain TCP memory usage.
# We set all three values high to effectively prevent memory pressure from
# ever being applied to our TCP sockets.
net.ipv4.tcp_mem = 8388608 8388608 8388608
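The page arithmetic in the comment can be checked directly. A sketch, where 8388608 is the value used above and the page size is read from the running system:

```python
# Work through the tcp_mem arithmetic: the fields are counted in pages,
# so 8388608 pages * 4 KiB/page = 32 GiB on a 4 KiB-page system.
import resource

PAGES = 8_388_608                    # low / pressure / high value used above
page_size = resource.getpagesize()   # typically 4096 bytes
total_bytes = PAGES * page_size
print(f"{PAGES} pages * {page_size} B = {total_bytes / 2**30:.0f} GiB")
```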
# Increase max half-open connections.
net.ipv4.tcp_max_syn_backlog = 8096
# Increase the max number of sockets allowed in TIME_WAIT
net.ipv4.tcp_max_tw_buckets = 6000000
# Increase the max number of orphaned TCP sockets
# (sockets that have been closed and no longer have a file handle attached)
net.ipv4.tcp_max_orphans = 262144
# Only retry SYN and SYN-ACK retransmissions twice
# to minimize the time it takes for a failing connection attempt to give up
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
# Time out orphaned connections in FIN-WAIT-2 after 7 seconds
net.ipv4.tcp_fin_timeout = 7
# Avoid falling back to slow start after a connection goes idle;
# keeps our cwnd large on keep-alive connections
net.ipv4.tcp_slow_start_after_idle = 0
# Allow reusing sockets in TIME_WAIT for new outbound connections
net.ipv4.tcp_tw_reuse = 1
# Send a reset when the listen backlog overflows instead of silently dropping
net.ipv4.tcp_abort_on_overflow = 1
# Widen the range of ephemeral ports available for outbound connections
net.ipv4.ip_local_port_range = 10240 65535
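Once this file is in place (typically under /etc/sysctl.d/ and loaded with sysctl --system), the net.* keys can be read back like any other sysctl. A verification sketch, assuming Linux /proc/sys paths:

```python
# Map a few of the dotted sysctl names above to their /proc/sys paths
# (dots become slashes) and print the values currently in effect.
from pathlib import Path

KEYS = [
    "net.core.somaxconn",
    "net.ipv4.tcp_max_syn_backlog",
    "net.ipv4.tcp_fin_timeout",
    "net.ipv4.tcp_tw_reuse",
    "net.ipv4.ip_local_port_range",
]

for key in KEYS:
    path = Path("/proc/sys") / key.replace(".", "/")
    print(f"{key} = {path.read_text().strip()}")
```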
# Set noop scheduler
# /sys/block/*/queue/scheduler noop
ACTION=="add|change", KERNEL=="xv*", SUBSYSTEM=="block", ATTR{queue/scheduler}="noop"
# Send the completed request back to the actual CPU that requested it
# /sys/block/*/queue/rq_affinity 2
ACTION=="add|change", KERNEL=="xv*", SUBSYSTEM=="block", ATTR{queue/rq_affinity}="2"
# Number of I/O requests that can be queued in the I/O scheduler before it submits them to the block device
# /sys/block/*/queue/nr_requests 256
ACTION=="add|change", KERNEL=="xv*", SUBSYSTEM=="block", ATTR{queue/nr_requests}="256"
# Amount of extra data (in KB) read ahead when the OS reads from the device
# /sys/block/*/queue/read_ahead_kb 256
ACTION=="add|change", KERNEL=="xv*", SUBSYSTEM=="block", ATTR{queue/read_ahead_kb}="256"
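The udev rules above only touch devices whose kernel name matches xv* (Xen/EC2-style xvd disks). A small sketch to read back the queue attributes they set, assuming a Linux host with such devices:

```python
# List the block-queue attributes set by the udev rules above for every
# device matching xv*; prints nothing on hosts without Xen-style disks.
from pathlib import Path

ATTRS = ("scheduler", "rq_affinity", "nr_requests", "read_ahead_kb")

for dev in sorted(Path("/sys/block").glob("xv*")):
    for attr in ATTRS:
        value = (dev / "queue" / attr).read_text().strip()
        print(f"{dev.name}: {attr} = {value}")
```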