Setup containerd for K8S on Alpine Linux
#!/bin/bash
set -e
## CRI: containerd
## CNI: flannel
# Alpine Edge only! Let's hope Alpine 3.13 or 3.14 will have k8s in the main tree
echo http://dl-cdn.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
echo http://dl-cdn.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories
# Install containerd
apk add containerd containerd-openrc
# Load required modules
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
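# Optional sanity check: both modules should show up here (grep exits non-zero,
# and set -e aborts the script, if either one failed to load)
lsmod | grep -E 'overlay|br_netfilter'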
# Set up required sysctl params; these persist across reboots.
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
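# Optional sanity check: each of these should print "= 1"
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables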
# Create default containerd config
mkdir -p /etc/containerd
# See https://git.alpinelinux.org/aports/commit/?id=72a355e1c8437c4e32a3e22bc3888905a6e545ba for details on why we need to update the CNI plugin dir
containerd config default | sed "s|/opt/cni/bin|/usr/libexec/cni|g" > /etc/containerd/config.toml
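# Optional sanity check: the CNI bin dir should now point at /usr/libexec/cni
grep -n '/usr/libexec/cni' /etc/containerd/config.toml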
rc-update add containerd default
rc-service containerd start
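# Optional sanity check: ctr ships with containerd and talks to
# /run/containerd/containerd.sock by default; both client and server versions should print
ctr version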
# K8S - the main part!
apk add kubernetes conntrack-tools kubeadm kubectl kubelet cri-tools
rc-update add kubelet default
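# Optional: confirm the tooling installed from edge is on PATH and note the versions
kubeadm version -o short
kubectl version --client
crictl --version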
# Prevent conflicts between docker iptables (packet filtering) rules and k8s pod communication
# See https://github.com/kubernetes/kubernetes/issues/40182 for further details.
iptables -P FORWARD ACCEPT
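# Optional sanity check: the FORWARD chain policy should now read "-P FORWARD ACCEPT"
iptables -S FORWARD | head -n 1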
# disable swap
sed -i '/swap/d' /etc/fstab
swapoff -a
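# Optional sanity check: /proc/swaps should list no devices beyond its header line
cat /proc/swaps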
# set up cluster
echo "172.42.42.100 kmaster kmaster.example.com" >> /etc/hosts
echo "172.42.42.101 kworker1 kworker1.example.com" >> /etc/hosts
echo "172.42.42.102 kmaster2 kworker2.example.com" >> /etc/hosts
## Set up the master node - DO NOT run these steps on worker nodes!
kubeadm config images pull
## Start 2 sessions. In the 1st session we kickstart the cluster:
kubeadm init --cri-socket /run/containerd/containerd.sock --apiserver-advertise-address=172.42.42.100 --pod-network-cidr=192.168.0.0/16 --v=5
## Once the output reaches
## [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
## start the kubelet daemon in the 2nd session:
rc-service kubelet start
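# Optional: watch the control-plane containers come up while kubeadm init waits in the 1st session
# (crictl is pointed at the containerd socket explicitly here; adjust if your runtime endpoint differs)
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a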
# Copy Kube admin config
echo "Copy kube admin config to user .kube directory"
# for root user
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# verify that we have our master node
kubectl get nodes
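# NOTE: the node will report NotReady until a CNI plugin (flannel, per the header above) is applied.
# A sketch assuming the upstream flannel manifest; the URL and the manifest's Network value
# (flannel defaults to 10.244.0.0/16) may need adjusting to match --pod-network-cidr above:
# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml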
# Save the output of the join command; it is needed on the worker nodes
kubeadm token create --print-join-command > join.sh
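# join.sh will contain a single command along these lines (placeholders shown; the real
# token and hash are generated for your cluster):
#   kubeadm join 172.42.42.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>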
## Next, set up the worker nodes: repeat the same steps above (everything except the master-only section) to get kubelet running, then run the join.sh command, as sketched below
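## A minimal sketch of the worker-node side, assuming join.sh has been copied across
## (append --cri-socket /run/containerd/containerd.sock to the command inside join.sh
## if kubeadm complains about multiple container runtimes):
# rc-service kubelet start
# sh join.sh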