
Kubernetes 1.13 on Raspberry Pis

This tutorial is HEAVILY inspired by https://gist.github.com/alexellis/fdbc90de7691a1b9edb545c17da2d975, but adapted: different Docker/Kubernetes versions, custom scripts, and a hacky "background script" that relaxes the kube-apiserver liveness probe so that kubeadm init does not time out.

Requirements

  • Hypriot 1.9.x (1.10 not tested yet, as it is still a release candidate)
  • Kernel >= 4.14.54
  • Raspberry Pis
  • Class 10 SD cards

Prepare SD card
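
One way to flash the HypriotOS image onto the SD card is the Hypriot flash tool (https://github.com/hypriot/flash). A minimal sketch, where the image file name is a placeholder for the HypriotOS 1.9.x release you downloaded:

    # Flash HypriotOS onto the SD card (the tool prompts for the target device)
    flash hypriotos-rpi-v1.9.0.img.zip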

Boot and Prepare Pi(s)

For each Pi:

  • Insert the SD card and boot the Pi
  • Connect to the Pi with ssh pirate@raspberrypi.local. Password is hypriot. If you cannot access the machine, search for its IP with nmap -sP 192.168.0.1/24 (adapt network and mask to your setup).
  • Download the two shell scripts below (prepare-system.sh and kube-init.sh) into /boot, using their raw URLs (see the sketch after this list)
  • Execute the prepare-system.sh script with sudo, passing hostname and expected static IP as arguments: sudo bash /boot/prepare-system.sh <hostname> <IP>. After a while, the machine will reboot by itself.
  • Repeat for each Pi, one after the other.
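
A minimal sketch of the download step; the raw URLs are placeholders, copy them from the "Raw" button of each script in this gist:

    # Download both scripts into /boot (adapt the raw gist URLs)
    sudo curl -sSL -o /boot/prepare-system.sh "<raw URL of prepare-system.sh>"
    sudo curl -sSL -o /boot/kube-init.sh "<raw URL of kube-init.sh>"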

Initialize Kubernetes Cluster

  • Once the first Pi has rebooted, SSH to it again, using the static IP address.
  • ONLY for the first Pi, initialize the Kubernetes cluster with the kube-init.sh script: sudo bash /boot/kube-init.sh
  • After a (looong) while, Kubeadm will print the set of instructions and the "join" token. Save this token somewhere persistent on your admin machine (kube-init.sh passes --token-ttl=0 to kubeadm, so the token does not expire; if you lose it anyway, see the note after this list).
  • Validate the readiness with kubectl get node
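
If you lose the join command, it can be regenerated at any time from the master node:

    # Print a fresh "kubeadm join ..." command, including a new token
    sudo kubeadm token create --print-join-command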

Join Kubernetes workers

  • On each of the other Pis, run the join command provided by kubeadm init .... You can run them all at the same time:
    sudo kubeadm join --token <token> <static IP of the master>:6443
  • After a while, the command kubectl get node (run on the master) will report all your nodes as Ready (be patient!)

Enable dynamic Loadbalancer with MetalLB

MetalLB provides a configurable and distributed load-balancer system (à la ELB/ALB/NLB), with floating IPs on your network, to allow external access to your services: https://metallb.universe.tf/.

Installation is straightforward:

  • Install MetalLB by applying its Kubernetes manifest from the master node:

    kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
  • Configure MetalLB to use Layer 2 load balancing (you may use BGP instead if your router supports it - ref. https://metallb.universe.tf/configuration). As described in https://metallb.universe.tf/configuration/#layer-2-configuration, on the master, create a file named metallb-config.yaml and fill it with the following content (adapting the IP range to your network, of course), then apply it as shown after the snippet:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.0.100-192.168.0.150 # IP range (all addresses must be available)
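
  • Apply the configuration from the master node:

    kubectl apply -f metallb-config.yaml

  • To check that MetalLB works, you can expose a test deployment as a LoadBalancer service and verify that it receives an external IP from the pool above. A minimal sketch (the deployment name is hypothetical; arm32v7/nginx is an nginx image built for ARM):

    kubectl create deployment lb-test --image=arm32v7/nginx
    kubectl expose deployment lb-test --port=80 --type=LoadBalancer
    kubectl get svc lb-test # EXTERNAL-IP should be taken from the configured range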

Configure your (remote) administration machine to manage Kubernetes

  • On the administration machine, download the file /etc/kubernetes/admin.conf from the master node (using scp).
  • Adapt your kubectl configuration (located in ${HOME}/.kube/config) with the 3 objects: cluster, user, and context (see the sketch after this list).
  • Switch to this newly created context with kubectl config use-context <name you gave to your context>
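
A sketch of this step, assuming the downloaded admin.conf was saved as ./rpi-admin.conf; instead of editing the 3 objects by hand, you can let kubectl merge the two files (kubernetes-admin@kubernetes is the context name kubeadm generates by default, rename it if you prefer):

    # Merge the Pi cluster credentials into the existing kubectl configuration
    KUBECONFIG="${HOME}/.kube/config:./rpi-admin.conf" kubectl config view --flatten > /tmp/merged-config
    mv /tmp/merged-config "${HOME}/.kube/config"
    kubectl config use-context kubernetes-admin@kubernetes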

Initialize Helm with an ARM image

  • Install Helm on your administration machine (ref. https://docs.helm.sh/).
  • Apply this YAML to Kubernetes to create the Tiller service account and the RBAC binding that grants it cluster-admin (see the apply command after the manifest):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
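
For example, saving the manifest above as tiller-rbac.yaml (the file name is arbitrary):

    kubectl apply -f tiller-rbac.yaml
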
  • Initializing Helm means starting a pod running its server-side component, Tiller, on your cluster. You need to specify a Docker image built for the ARM architecture to run Tiller. Use the following command:
    helm init --service-account tiller --tiller-image=jessestuart/tiller:v2.9.1 --upgrade
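
After a short while, you can check that Tiller is running (the label selector below matches the labels helm init normally sets on the Tiller deployment; adapt it if needed):

    kubectl -n kube-system get pods -l name=tiller
    helm version # should report both the client and the server (Tiller) versions
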
kube-init.sh

#!/bin/bash
set -eux
# Cleanup
kubeadm reset -f
# Write "hack" script to change liveness probe of the kube-apiserver
cat <<EOF > /root/wait-api.sh
#!/bin/bash
APISERVER_CONF=/etc/kubernetes/manifests/kube-apiserver.yaml
while true
do
  if [ -f "\${APISERVER_CONF}" ]
  then
    echo "= Found file \${APISERVER_CONF}"
    echo "== with failureThreshold value: \$(grep failureThreshold \${APISERVER_CONF})."
    sed -i 's/failureThreshold: .*/failureThreshold: 60/g' "\${APISERVER_CONF}"
    echo "== failureThreshold value changed to: \$(grep failureThreshold \${APISERVER_CONF})."
    break
  fi
  sleep 1
done
echo "= Change done"
EOF
# Run the hack script in background
bash -x /root/wait-api.sh >/var/log/wait-api.log 2>&1 &
# Init the master
kubeadm init --token-ttl=0 --pod-network-cidr=10.244.0.0/16
# Configure local kubectl
mkdir -p "${HOME}/.kube"
cp /etc/kubernetes/admin.conf "${HOME}/.kube/config"
chown "$(id -u):$(id -g)" "${HOME}/.kube/config"
# Install Flannel for network, to have the node in "Ready" state
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Update cluster configuration
cat <<EOF > ./kubeadm-config.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
nodeStatusReportFrequency: 4s # Default was 1m0
nodeStatusUpdateFrequency: 4s # Default is 10s
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    node-monitor-grace-period: 16s
    node-monitor-period: 2s
    pod-eviction-timeout: 30s
EOF
kubeadm config upload from-file --config=kubeadm-config.yaml
## Wait or reboot once the node is Ready

prepare-system.sh

#!/bin/bash
set -eux
## Upgrade and install admin tooling
rm -f /etc/apt/sources.list.d/hypriot*
apt-get update -q && \
apt-get install -qy \
curl \
ca-certificates \
haveged \
htop \
ntp \
rng-tools \
wiringpi
# Add repo list and install kubeadm
cat <<EOF > /etc/apt/preferences.d/kubectl
Package: kubectl
Pin: version 1.13.*
Pin-Priority: 1000
EOF
cat <<EOF > /etc/apt/preferences.d/kubeadm
Package: kubeadm
Pin: version 1.13.*
Pin-Priority: 1000
EOF
cat <<EOF > /etc/apt/preferences.d/kubelet
Package: kubelet
Pin: version 1.13.*
Pin-Priority: 1000
EOF
curl -sSLO https://packages.cloud.google.com/apt/doc/apt-key.gpg
apt-key add ./apt-key.gpg
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update -q && \
apt-get install -qy \
kubeadm
kubeadm config images pull # Preload kubeadm cache
# Prepare background services
systemctl daemon-reload
systemctl enable systemd-resolved
systemctl enable ntp
systemctl restart systemd-resolved
systemctl restart ntp
# Kernel tuning
echo 'net.bridge.bridge-nf-call-iptables=1' | tee -a /etc/sysctl.d/98-rpi.conf
sysctl --system # load the new setting from /etc/sysctl.d (sysctl -p only reads /etc/sysctl.conf)
# Shell Completions
kubeadm completion bash > /etc/bash_completion.d/kubeadm
kubectl completion bash > /etc/bash_completion.d/kubectl
echo "== Finished"
reboot now