kubernetes cgroup

Node Allocatable

Here is an example to illustrate how Node Allocatable is computed:

The node has 32Gi of memory, 16 CPUs and 100Gi of storage.

  • --kube-reserved is set to cpu=1,memory=2Gi,ephemeral-storage=1Gi
  • --system-reserved is set to cpu=500m,memory=1Gi,ephemeral-storage=1Gi
  • --eviction-hard is set to memory.available<500Mi,nodefs.available<10%

In this scenario, Allocatable will be 14.5 CPUs, 28.5Gi of memory and 88Gi of local storage. The scheduler ensures that the total memory requests of all Pods on this node do not exceed 28.5Gi and that their storage requests do not exceed 88Gi. The kubelet evicts Pods once their total memory usage exceeds 28.5Gi or their total disk usage exceeds 88Gi. If every process on the node consumes as much CPU as it can, Pods together cannot use more than 14.5 CPUs.

If kube-reserved and/or system-reserved enforcement is not in place and system daemons use more than their reservation, the kubelet evicts Pods whenever overall node memory usage rises above 31.5Gi or storage usage rises above 90Gi.

https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#example-scenario
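A minimal arithmetic sketch of that computation, assuming Allocatable = Capacity - kube-reserved - system-reserved - hard eviction threshold (the hard eviction threshold in this scenario only covers memory and nodefs, so CPU is unaffected):

```go
package main

import "fmt"

func main() {
	// Allocatable = Capacity - kube-reserved - system-reserved - hard eviction threshold.
	cpu := 16000 - 1000 - 500                 // millicores: 14500m = 14.5 CPUs
	memory := 32*1024 - 2*1024 - 1*1024 - 500 // Mi: 29196Mi ≈ 28.5Gi
	storage := 100 - 1 - 1 - 100/10           // Gi: nodefs.available<10% of 100Gi is 10Gi, so 88Gi
	fmt.Println(cpu, memory, storage)         // 14500 29196 88
}
```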

Pod QoS cgroup configuration

--cpu-manager-policy=static

For every QoS class:

  • memory.soft_limit_in_bytes is left unset
  • cpu.cfs_period_us=100000

Guaranteed Pod

        resources:
          limits:
            memory: 100M
            cpu: "2"
          requests:
            memory: 100M
            cpu: "2"
            

/kubepods/pod<UID>
cpuset.cpus=0-23
cpu.cfs_quota_us=200000
cpu.shares=2048
memory.limit_in_bytes=100M

/kubepods/pod<UID>/<pause containerId>
cpuset.cpus=0-23
cpu.cfs_quota_us=-1
cpu.shares=2
memory.limit_in_bytes=unlimited

/kubepods/pod<UID>/<user containerId>
cpuset.cpus=1,13
cpu.cfs_quota_us=200000
cpu.shares=2048
memory.limit_in_bytes=100M
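These pod-level values follow directly from the resource spec; below is a minimal sketch of the conversion, assuming the kubelet's usual rules (cpu.shares derived from CPU requests, cpu.cfs_quota_us derived from CPU limits against a 100ms period). The helper names echo the kubelet's MilliCPUToShares/MilliCPUToQuota, but the code here is only illustrative:

```go
package main

import "fmt"

const cfsPeriodUs = 100000 // the cpu.cfs_period_us used above

// milliCPUToShares: cpu.shares is derived from the CPU request (1 CPU = 1024 shares).
func milliCPUToShares(milliCPU int64) int64 {
	return milliCPU * 1024 / 1000
}

// milliCPUToQuota: cpu.cfs_quota_us is derived from the CPU limit against the CFS period.
func milliCPUToQuota(milliCPU int64) int64 {
	return milliCPU * cfsPeriodUs / 1000
}

func main() {
	// Guaranteed pod above: requests.cpu = limits.cpu = "2" (2000m).
	fmt.Println(milliCPUToShares(2000)) // 2048   -> cpu.shares
	fmt.Println(milliCPUToQuota(2000))  // 200000 -> cpu.cfs_quota_us
}
```

The same conversion explains the Burstable values below: a 1000m request gives cpu.shares=1024, while the 2-CPU limit still gives cpu.cfs_quota_us=200000.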

Burstable Pod

        resources:
          limits:
            memory: 200M
            cpu: "2"
          requests:
            memory: 100M
            cpu: "1"

/kubepods/burstable/pod<UID>
cpuset.cpus=0-23
cpu.cfs_quota_us=200000
cpu.shares=1024
memory.limit_in_bytes=200M

/kubepods/burstable/pod<UID>/<pause containerId>
cpuset.cpus=0-23
cpu.cfs_quota_us=-1
cpu.shares=2
memory.limit_in_bytes=unlimited

/kubepods/burstable/pod<UID>/<user containerId>
cpuset.cpus=0,2-12,14-23
cpu.cfs_quota_us=200000
cpu.shares=1024
memory.limit_in_bytes=200M

BestEffort Pod

/kubepods/besteffort/pod<UID>
cpuset.cpus=0-23
cpu.cfs_quota_us=-1
cpu.shares=2
memory.limit_in_bytes=unlimited

/kubepods/besteffort/pod<UID>/<pause containerId>
cpuset.cpus=0-23
cpu.cfs_quota_us=-1
cpu.shares=2
memory.limit_in_bytes=unlimited

/kubepods/besteffort/pod<UID>/<user containerId>
cpuset.cpus=0,2-12,14-23
cpu.cfs_quota_us=-1
cpu.shares=2
memory.limit_in_bytes=unlimited
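The cpuset values above come from the static CPU manager: the Guaranteed container's integer CPU request gets exclusive CPUs (1 and 13 here), and every other container is pinned to the remaining shared pool. A rough sketch of that set difference (illustrative only, not the kubelet's actual cpuset code):

```go
package main

import "fmt"

// sharedPool removes exclusively assigned CPUs from the full CPU list;
// the result is what burstable/besteffort containers get as cpuset.cpus.
func sharedPool(all, exclusive []int) []int {
	excl := make(map[int]bool, len(exclusive))
	for _, c := range exclusive {
		excl[c] = true
	}
	pool := []int{}
	for _, c := range all {
		if !excl[c] {
			pool = append(pool, c)
		}
	}
	return pool
}

func main() {
	all := make([]int, 24)
	for i := range all {
		all[i] = i
	}
	// Removing CPUs 1 and 13 yields the 0,2-12,14-23 pool seen above.
	fmt.Println(sharedPool(all, []int{1, 13}))
}
```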

Top-level QoS cgroups: /kubepods, burstable, besteffort

evictionHard:
  memory.available: 100Mi
kubeReserved:
  cpu: 100m
  memory: 500Mi
systemReserved:
  cpu: 100m
  memory: 500Mi
  
# cat /proc/meminfo | head -n 3
MemTotal:       65607308 kB
MemFree:        39594008 kB
MemAvailable:   58779576 kB

# lscpu 
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Model name:            Intel(R) Xeon(R) CPU E5-2420 v2 @ 2.20GHz
Stepping:              4
CPU MHz:               2201.000
BogoMIPS:              4402.87
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23
/kubepods
cpuset.cpus=0-23
memory.limit_in_bytes=66133307392
cpu.shares=24371
cpu.cfs_quota_us=-1
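The /kubepods values can be reproduced from the node capacity and the reservations in the kubelet config above; a minimal sketch of that arithmetic (hard eviction thresholds are not subtracted from the cgroup limit):

```go
package main

import "fmt"

func main() {
	memCapacity := int64(65607308) * 1024 // MemTotal from /proc/meminfo, in bytes
	cpuCapacityMilli := int64(24 * 1000)  // 24 CPUs from lscpu

	reservedMem := int64(500+500) * 1024 * 1024 // kubeReserved + systemReserved memory (1000Mi)
	reservedCPUMilli := int64(100 + 100)        // kubeReserved + systemReserved cpu (200m)

	fmt.Println(memCapacity - reservedMem)                           // 66133307392 -> memory.limit_in_bytes
	fmt.Println((cpuCapacityMilli - reservedCPUMilli) * 1024 / 1000) // 24371 -> cpu.shares
}
```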

/kubepods/burstable
cpuset.cpus=0-23
cpu.shares = max(sum(Burstable pods cpu requests), 2) = 1792
cpu.cfs_quota_us=-1
memory.limit_in_bytes=unlimited

/kubepods/besteffort
cpuset.cpus=0-23
cpu.shares = 2
cpu.cfs_quota_us=-1
memory.limit_in_bytes=unlimited
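The parent burstable cgroup's cpu.shares is recomputed from the sum of CPU requests of all Burstable pods on the node, with a floor of 2. A sketch of that formula; the 1792 shown above would correspond to 1750m of total Burstable requests, which is an inference from the value, not stated in the output:

```go
package main

import "fmt"

// burstableShares mirrors the formula above:
// cpu.shares = max(MilliCPUToShares(sum of Burstable pods' CPU requests), 2).
func burstableShares(requestsMilli []int64) int64 {
	var sum int64
	for _, r := range requestsMilli {
		sum += r
	}
	shares := sum * 1024 / 1000
	if shares < 2 {
		return 2
	}
	return shares
}

func main() {
	// Hypothetical split: 1750m of total Burstable CPU requests yields the 1792 seen above.
	fmt.Println(burstableShares([]int64{1000, 750})) // 1792
}
```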