K8s on Raspbian
# This installs the base instructions up to the point of joining / creating a cluster
curl -sSL https://get.docker.com | sh && \
sudo usermod pi -aG docker
sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubeadm
echo Adding " cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory" to /boot/cmdline.txt
sudo cp /boot/cmdline.txt /boot/cmdline_backup.txt
orig="$(head -n1 /boot/cmdline.txt) cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
echo "$orig" | sudo tee /boot/cmdline.txt
echo Please reboot
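
After the reboot, a quick way to confirm the changes took effect (my addition, not part of the gist): the memory cgroup should be enabled and docker should run without sudo.

# "enabled" column for the memory cgroup should now be 1
grep memory /proc/cgroups

# should print server details without permission errors
docker info | head -n 5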

Use this to set it up quickly:

# curl -sL <script URL> | sudo sh

@andyburgin I followed your instructions, but I can't get the master node running...

The kubeadm init [...] did not finish:

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

I waited for some time until all pods were "Running":

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   etcd-msh-master                      1/1     Running   0          105s
kube-system   kube-apiserver-msh-master            1/1     Running   5          107s
kube-system   kube-controller-manager-msh-master   1/1     Running   0          115s
kube-system   kube-scheduler-msh-master            1/1     Running   0          76s

(is it possible that kube-dns and kube-proxy are missing?)
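
A likely explanation (my note, not from the thread): kube-proxy and kube-dns are created by kubeadm's addon phase, which never runs when kubeadm init times out at wait-control-plane. On kubeadm v1.13+ they can be installed afterwards with:

sudo kubeadm init phase addon all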

Then I applied the two weave-net files you mentioned:

kubectl apply -f <first weave-net file>
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

But the weave-net pod will not become "Running"...

ERROR: logging before flag.Parse: E0120 16:25:32.259195   11085 reflector.go:205] Failed to list *v1.Pod: Get dial tcp i/o timeout
ERROR: logging before flag.Parse: E0120 16:25:32.267598   11085 reflector.go:205] Failed to list *v1.NetworkPolicy: Get dial tcp i/o timeout
ERROR: logging before flag.Parse: E0120 16:25:32.274948   11085 reflector.go:205] Failed to list *v1.Namespace: Get dial tcp i/o timeout

The address being dialed seems to be the kubernetes service IP:

$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17m


Oooookayy... I finally managed to get it working \o/

I wrote a small bash script that waits for /etc/kubernetes/manifests/kube-apiserver.yaml to appear and then updates failureThreshold (new value: 100) and initialDelaySeconds (new value: 1080). The new values are much bigger than they need to be, but they allowed me to get my master node up and running! Whenever I tried to change these values by hand, the kubeadm init ... command failed.
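
The script itself wasn't posted; a minimal sketch of the approach described (only the manifest path and the two values come from the comment, the rest is assumed) could look like this:

#!/bin/bash
# Hypothetical reconstruction, not janpieper's actual script.
# Run as root in a second terminal while `kubeadm init` is running.
MANIFEST=/etc/kubernetes/manifests/kube-apiserver.yaml

# Wait for kubeadm to write the static pod manifest
until [ -f "$MANIFEST" ]; do
  sleep 1
done

# Relax the API server liveness probe so the slow Pi has time to start
sed -i 's/failureThreshold: [0-9]*/failureThreshold: 100/' "$MANIFEST"
sed -i 's/initialDelaySeconds: [0-9]*/initialDelaySeconds: 1080/' "$MANIFEST"

The kubelet watches the manifests directory, so editing the file causes it to restart the kube-apiserver pod with the relaxed probe.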


ejeklint commented Feb 3, 2019

I just set up a working cluster but couldn't get the master running on an RPI 2. Moved the SD card over to an RPI 3 and then kubeadm init ran just fine. The worker node seems to run just fine on the RPI 2.


rnbwkat commented Feb 4, 2019

Wondering if anyone has gotten helm/tiller working in this configuration?


@rnbwkat I got it working but I had to specify a different tiller image, one which was compatible with ARM. The command I used was:

helm init --service-account tiller --tiller-image=jessestuart/tiller:v2.9.0
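
To check that the ARM tiller pod actually came up (a generic verification, not from the original comment):

kubectl -n kube-system get pods -l app=helm,name=tiller
helm version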


oprwiz commented Feb 10, 2019

@janpieper I've run into the "node not found" error too. Looking through all the comments, I was going to follow the same steps you did. I wonder what versions of k8s and docker you've installed?


I tried to set up the cluster following the steps described but still didn't get a successful kubeadm init. I tried different versions of k8s and docker. Does anybody have the steps to get 1.13.3 working with 18.09.0?


@janpieper can you share the script?


sinfloodmusic commented Jul 11, 2019

@janpieper's steps worked up until the point everyone mentioned, but rather than using the script that polls and zaps the config, I found you can do the same (after the initial failure) by running these commands (lifted from kubernetes/kubeadm#1380):

sudo kubeadm reset
sudo kubeadm init phase certs all
sudo kubeadm init phase kubeconfig all
sudo kubeadm init phase control-plane all --pod-network-cidr=10.244.0.0/16
sudo sed -i 's/initialDelaySeconds: [0-9][0-9]/initialDelaySeconds: 240/g' /etc/kubernetes/manifests/kube-apiserver.yaml
sudo sed -i 's/failureThreshold: [0-9]/failureThreshold: 18/g'             /etc/kubernetes/manifests/kube-apiserver.yaml
sudo sed -i 's/timeoutSeconds: [0-9][0-9]/timeoutSeconds: 20/g'            /etc/kubernetes/manifests/kube-apiserver.yaml
sudo kubeadm init --v=1 --skip-phases=certs,kubeconfig,control-plane --ignore-preflight-errors=all --pod-network-cidr=10.244.0.0/16
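
For context, the three sed lines relax the API server's liveness probe in the static pod manifest; you can inspect what they changed with (my addition, not from the original comment):

grep -E 'failureThreshold|initialDelaySeconds|timeoutSeconds' /etc/kubernetes/manifests/kube-apiserver.yaml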

Then I installed flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
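
If flannel is healthy, its DaemonSet pod on each node should reach Running (a generic check, my addition):

kubectl -n kube-system get pods -l app=flannel -o wide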

Something that threw me off: the shell demo that Kubernetes provides (kubectl apply -f with the shell-demo manifest from their docs) works fine, but deploying nginx from their deployment example fails. It turns out the nginx image isn't compatible with ARM; once I changed the image to a Pi-supported one (tobi312/rpi-nginx) it worked fine! Thanks to everyone here, I finally got my pi cluster going.
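
For anyone hitting the same ARM image problem: if the deployment from the upstream example is already applied, the image can be swapped in place (deployment and container names below are the upstream example's defaults, assumed rather than taken from the comment):

kubectl set image deployment/nginx-deployment nginx=tobi312/rpi-nginx
kubectl rollout status deployment/nginx-deployment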
