Many pods per node on k8s

How to configure k8s to run many pods on a node

I had a need (understand, this is for some testing, not for a real deployment ;-) ) to run a lot of pods (>=1k of them) on a single k8s node. I had the hardware available - 88 cores and 377 GB of RAM - but k8s has some built-in limits that, by default, will not let you launch more than 110 pods, and even if you raise that, you'll hit a per-node network (pod IP) limit at about 250 pods. So, before I forget, here is how to configure things to run more.
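To see what a node will currently allow, you can query its pod capacity (plain kubectl; <node-name> is a placeholder) - on a default install this reports 110:

$ kubectl get node <node-name> -o jsonpath='{.status.capacity.pods}'
110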

kubeadm

In your kubeadm init file, something like:

apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Allowing for CPU pinning and isolation in case of guaranteed QoS class
cpuManagerPolicy: static
systemReserved:
  cpu: 500m
  memory: 256M
kubeReserved:
  cpu: 500m
  memory: 256M
# We want to be able to test 1k pods
maxPods: 1100
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
# For >1k pods, we also need to extend the network to handle more IPs
controllerManager:
  extraArgs:
    node-cidr-mask-size: "20"
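The node-cidr-mask-size matters because, by default, each node is handed a /24 slice of the pod subnet - roughly 250 usable pod IPs, which is where the ~250 pod wall mentioned above comes from. A /20 gives each node around 4k pod IPs, comfortably more than the 1100 pod limit set here. Save all three documents into one file and hand it to kubeadm when bringing the node up (the filename is just an example):

$ sudo kubeadm init --config kubeadm-config.yaml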

minikube

On your minikube command line, something like:

minikube start --vm-driver kvm2 --cpus 80 --memory 307200 --network-plugin=cni --enable-default-cni --container-runtime=cri-o --bootstrapper=kubeadm \
        --extra-config=kubelet.max-pods=1100 \
        --extra-config=controller-manager.node-cidr-mask-size=20
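Once it comes up, a quick sanity check that the wider per-node CIDR took effect is to look at the node's podCIDR (assuming the default node name of minikube) - it should be a /20 rather than the default /24:

$ kubectl get node minikube -o jsonpath='{.spec.podCIDR}'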

kind

Using a Kubernetes in Docker stack, set up a kind config file like:

# 2 nodes: 1 control plane, 1 worker
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3

# Amend the kubeadm config to support a lot of pods
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  metadata:
    name: config
  maxPods: 1100
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  networking:
    dnsDomain: cluster.local
    podSubnet: 10.244.0.0/16
    serviceSubnet: 10.96.0.0/12
  controllerManager:
    extraArgs:
      node-cidr-mask-size: "20"

nodes:
# the control plane
- role: control-plane
# the worker
- role: worker

and invoke it with something like:

$ kind create cluster --config $(pwd)/kind-config.yaml
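To confirm the patches landed on both nodes, something like this (plain kubectl; the column names are arbitrary) should report a pod capacity of 1100 for each node:

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods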
@electrocucaracha

I haven't tried it in Kubespray, but it seems possible to modify the kubelet_max_pods config value to increase the number of pods per worker node.
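Presumably that is just an inventory override, along the lines of the following (untested; the group_vars path will depend on your inventory layout):

# e.g. inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
kubelet_max_pods: 1100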
