@voluntas
Forked from techgaun/sysctl.conf
Created October 14, 2017 13:07
Sysctl configuration for high performance
### KERNEL TUNING ###
# Increase size of file handles and inode cache
fs.file-max = 2097152
# Do less swapping
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
# Sets the time before the kernel considers migrating a process to another core
kernel.sched_migration_cost_ns = 5000000
# Group tasks by TTY
#kernel.sched_autogroup_enabled = 0
### GENERAL NETWORK SECURITY OPTIONS ###
# Number of times SYN+ACKs are retried for a passive TCP connection
net.ipv4.tcp_synack_retries = 2
# Allowed local port range
net.ipv4.ip_local_port_range = 2000 65535
# Protect Against TCP Time-Wait
net.ipv4.tcp_rfc1337 = 1
# Control Syncookies
net.ipv4.tcp_syncookies = 1
# Decrease the default tcp_fin_timeout value
net.ipv4.tcp_fin_timeout = 15
# Decrease the default keepalive timings for TCP connections
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
### TUNING NETWORK PERFORMANCE ###
# Default Socket Receive Buffer
net.core.rmem_default = 31457280
# Maximum Socket Receive Buffer
net.core.rmem_max = 33554432
# Default Socket Send Buffer
net.core.wmem_default = 31457280
# Maximum Socket Send Buffer
net.core.wmem_max = 33554432
# Increase number of incoming connections
net.core.somaxconn = 65535
# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 65536
# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824
# Increase the maximum total buffer-space allocatable
# This is measured in units of pages (4096 bytes)
net.ipv4.tcp_mem = 786432 1048576 26777216
net.ipv4.udp_mem = 65536 131072 262144
# Increase the read-buffer space allocatable
net.ipv4.tcp_rmem = 8192 87380 33554432
net.ipv4.udp_rmem_min = 16384
# Increase the write-buffer-space allocatable
net.ipv4.tcp_wmem = 8192 65536 33554432
net.ipv4.udp_wmem_min = 16384
# Increase the tcp-time-wait buckets pool size to prevent simple DOS attacks
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
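
To experiment with settings like these, a common approach is to put them in a file under /etc/sysctl.d/ and load them with sysctl, rather than editing /etc/sysctl.conf directly. A minimal sketch (the file name 99-tuning.conf and the spot-checked key are just examples):

```shell
# Install the tuning file (requires root); the 99- prefix makes it load last.
sudo cp sysctl.conf /etc/sysctl.d/99-tuning.conf

# Apply the settings from that file without rebooting.
sudo sysctl -p /etc/sysctl.d/99-tuning.conf

# Spot-check that a value took effect (-n prints the value only).
sysctl -n net.core.somaxconn
```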
@chrcoluk

chrcoluk commented Aug 20, 2020

net.ipv4.tcp_rfc1337 = 1

Setting this to 1 actually complies with RFC 1337, and does "not" provide the protection; it needs to be set to 0. The info is in the source code.

net.ipv4.tcp_tw_recycle = 1 is dangerous: it will break connectivity for users behind NAT, and this sysctl has been removed in newer kernels.

Also, the default rmem and wmem are set far too high, a recipe for resource exhaustion.
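
For scale: 31457280 bytes is 30 MiB, reserved as the default buffer for every new socket. Quick shell arithmetic makes the point concrete (the 10000-socket count is an arbitrary illustration):

```shell
# The gist's default receive/send buffer, in bytes.
default_buf=31457280

# Per-socket default in MiB (1 MiB = 1048576 bytes).
echo "$((default_buf / 1048576)) MiB per socket"        # prints: 30 MiB per socket

# What 10000 concurrent sockets could pin at that default, in GiB.
echo "$((default_buf * 10000 / 1073741824)) GiB total"  # prints: 292 GiB total
```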

@JanZerebecki

net.ipv4.tcp_rfc1337 = 1

Setting this to 1 actually complies with RFC 1337, and does "not" provide the protection; it needs to be set to 0. The info is in the source code.

No. By my reading of the source code, tcp_rfc1337 = 1 does protect against time-wait assassinations. See https://serverfault.com/questions/787624/why-isnt-net-ipv4-tcp-rfc1337-enabled-by-default#comment1380898_789212

@denizaydin

Hi, I have a couple of questions regarding the buffer tuning.

  • Why are there different values for the TCP and UDP buffers?
    net.ipv4.tcp_mem = 786432 1048576 26777216
    net.ipv4.udp_mem = 65536 131072 262144

  • net.ipv4.tcp_wmem = 8192 65536 33554432 seems to be double the value of most recommendations. How did you end up with those values? What MTU are you considering?

  • Did you consider any offloading capability of the NIC, such as segmentation offloading, etc.?
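
One part of the first question can be checked directly: tcp_mem and udp_mem are measured in 4 KiB pages (as the file's own comment notes), so the thresholds convert to bytes like this:

```shell
# tcp_mem thresholds (min / pressure / max) from the gist, in 4 KiB pages.
for pages in 786432 1048576 26777216; do
  echo "$pages pages = $((pages * 4096 / 1048576)) MiB"
done
```

The last value works out to over 100 GiB, a hint that something is off with it (see the typo discussion below).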

@terrancewong

there's a typo here: 26777216 should be 16777216; 16777216 is 1<<24, or exactly 16 MiB
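
The bit-shift claim is easy to verify with shell arithmetic:

```shell
# 16 MiB expressed as a bit shift, as the comment says.
echo $((1 << 24))                 # prints: 16777216
echo $(((1 << 24) / 1048576))     # prints: 16

# The copy-pasted value differs from it by exactly ten million,
# consistent with a single mistyped leading digit (2 instead of 1).
echo $((26777216 - 16777216))     # prints: 10000000
```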

@scroot

scroot commented Feb 6, 2023

Better not to blindly copy paste this config.

@terrancewong

Absolutely!
If you search the internet for 26777216, you find this typo everywhere, almost always in sysctl.conf files. I'm wondering what the source of it is.
