Minimal guide for setting up a kubeadm and containerd based Kubernetes 1.28 cluster with Cilium in kube-proxy replacement mode (tested on Ubuntu 22.04)

kubeadm Cluster Setup

Prerequisites (ControlPlane & Worker Node)

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
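
# Optional: verify that no swap is active anymore (swapon should print nothing and free should report 0B swap):
swapon --show
free -h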

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv6.conf.all.rp_filter = 0
EOF

# IMPORTANT: Ubuntu 22.04 also ships default sysctl values in `/usr/lib/sysctl.d/`, e.g. `/usr/lib/sysctl.d/50-default.conf`. Make sure the values above aren't overridden elsewhere by carefully checking the output of the following command:

sudo sysctl --system

# Optional, hacky sysctl verification command:
sudo sysctl -a | grep -E '.*\.rp_filter|.*\.forward.*|.*\.bridge-nf-call-.*'

Installation (ControlPlane & Worker Node)

Crictl

VERSION="v1.29.0" # check latest version from https://github.com/kubernetes-sigs/cri-tools/releases
curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-amd64.tar.gz --output crictl-${VERSION}-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
crictl
rm -f crictl-$VERSION-linux-amd64.tar.gz
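
# Optional: verify the installed crictl version:
crictl --version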

Containerd

sudo apt install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install -y containerd.io

sudo mkdir -p /etc/containerd
sudo sh -c 'containerd config default > /etc/containerd/config.toml'

# https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
sudo sed -i 's/SystemdCgroup.*/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl enable --now containerd
sudo systemctl status containerd

# Verify containerd:
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps

# Optional: If you get an error like "ListContainers with filter from runtime service failed", restart the containerd service once again:
sudo systemctl restart containerd

Kubeadm

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://dl.k8s.io/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update

K8S_VERSION=1.28.4-00
sudo apt install -y kubelet=$K8S_VERSION kubeadm=$K8S_VERSION kubectl=$K8S_VERSION

sudo apt-mark hold kubelet kubeadm kubectl
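
# Optional: verify the installed versions and that the packages are held back:
kubeadm version -o short
kubectl version --client
apt-mark showhold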

Cluster Creation

Init (ControlPlane Node)

sudo kubeadm config images pull

# Ensure to use an IP which is actually assigned to one of the node's interfaces (e.g. the private IP on EC2 instances).
IPADDR="<YOUR-CP-NODE-01-OR-LB-IP-HERE>"
NODENAME=$(hostname -s)

sudo kubeadm init --apiserver-advertise-address=$IPADDR \
  --apiserver-cert-extra-sans=$IPADDR \
  --pod-network-cidr=100.64.0.0/14 \
  --service-cidr=100.68.0.0/16 \
  --node-name $NODENAME \
  --skip-phases=addon/kube-proxy \
  --ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables

Optional: Join Worker Nodes (Worker Node)

NODENAME=$(hostname -s)

sudo kubeadm join <YOUR-CP-NODE-01-OR-LB-IP-HERE>:6443 \
  --token <GENERATED-TOKEN-HERE> \
  --node-name $NODENAME \
  --discovery-token-ca-cert-hash <GENERATED-SHA256-HERE> \
  --ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
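
# Note: kubeadm init prints the full join command at the end of its output. If you no longer have it, regenerate it on the ControlPlane node:
sudo kubeadm token create --print-join-command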

Verification (ControlPlane Node)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes -o wide
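
# The nodes stay "NotReady" until a CNI plugin (Cilium, below) is installed.
# Optional: since the kube-proxy addon phase was skipped, the following should return "NotFound":
kubectl -n kube-system get daemonset kube-proxy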

Cilium (ControlPlane Node or your Client)

Using Helm Chart

Create a values.yaml file with the following content:

kubeProxyReplacement: "true"

k8sServiceHost: <YOUR-CP-NODE-01-OR-LB-IP-HERE>
k8sServicePort: 6443

hubble:
  relay:
    enabled: true

ipam:
  operator:
    clusterPoolIPv4PodCIDRList:
    - "100.64.0.0/14"

helm repo add cilium https://helm.cilium.io/

helm repo update

helm upgrade -i cilium cilium/cilium \
  --version 1.14.5 \
  --namespace kube-system \
  -f values.yaml
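
# Optional: wait for the Cilium rollout and check the agent pods (resource names as deployed by the default chart):
kubectl -n kube-system rollout status daemonset/cilium
kubectl -n kube-system get pods -l k8s-app=cilium -o wide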

Using CLI

Note: When installing via the Cilium CLI, some Helm values are set automatically based on the detected infrastructure (for example kubeProxyReplacement, k8sServiceHost, k8sServicePort or even operator.replicas).

# Check for available versions:
cilium install --list-versions
# Do the actual installation:
cilium install --version 1.14.5 \
  --helm-set "hubble.relay.enabled=true" \
  --helm-set "ipam.operator.clusterPoolIPv4PodCIDRList[0]=100.64.0.0/14"

This cilium install example generates the following Helm invocation in the background (auto-detected values such as k8sServiceHost are filled in):

helm template --namespace kube-system cilium cilium/cilium --version 1.14.5 --set bpf.masquerade=true,cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,hubble.relay.enabled=true,ipam.operator.clusterPoolIPv4PodCIDRList[0]=100.64.0.0/14,k8sServiceHost=172.31.41.213,k8sServicePort=6443,kubeProxyReplacement=true,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan
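
# Optional: verify the installation with the Cilium CLI and run a connectivity test (the test deploys check pods into a cilium-test namespace):
cilium status --wait
cilium connectivity test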


Appendix

cgroup(v2)

  • Excellent explanation of cgroup v2.
  • Cilium's kube-proxy replacement (kubeProxyReplacement=true, previously strict) requires cgroup v2. If the underlying system doesn't have it enabled, Cilium will try to mount its own cgroup v2 instance and attach its BPF socket-LB programs to that v2 root. As long as cgroup v2 isn't disabled in the kernel, this should work.
  • It's critical that the kubelet and the container runtime (containerd in this case) use the same cgroup driver and are configured consistently. For that reason, on a systemd-based distribution such as Ubuntu 22.04 you need to manually set SystemdCgroup = true inside /etc/containerd/config.toml. For the kubelet, nothing needs to be configured, as kubeadm defaults cgroupDriver: systemd since Kubernetes 1.22. Quick checks for both points follow after this list.
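
Quick verification for both points (paths assume the containerd config created above and a kubeadm-managed kubelet):

# prints "cgroup2fs" when the unified cgroup v2 hierarchy is mounted:
stat -fc %T /sys/fs/cgroup/
# containerd should use the systemd cgroup driver:
sudo grep SystemdCgroup /etc/containerd/config.toml
# the kubelet config written by kubeadm should contain "cgroupDriver: systemd":
sudo grep cgroupDriver /var/lib/kubelet/config.yaml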