
@radu-matei
Last active April 11, 2024 06:46
Setting up SpinKube on a Raspberry Pi cluster with k3s

This is a 5-node Raspberry Pi 5 cluster, assembled from PicoCluster, with an integrated power supply, an 8-port network switch, and a cooling fan.

This means the entire cluster can be powered with a single power cable, and you can connect to any of the 5 boards through a single Ethernet cable.

[image: the assembled 5-node PicoCluster]

The goal of this tutorial is to configure the Pi cluster to run Kubernetes. To do so, we need to:

  • install an operating system on all the nodes
  • enable cgroups to run containers on each node
  • configure Kubernetes

Installing an operating system on all the boards

Note: this process could be significantly improved by network booting the boards instead of flashing individual SD cards.

First, we need to download an appropriate operating system image from https://www.raspberrypi.com/software/operating-systems/.

To run Wasm applications using SpinKube (or directly through Spin or Wasmtime), you need a 64-bit version of the operating system. I downloaded Raspberry Pi OS Lite 64-bit.

Then, install the Raspberry Pi Imager software, which you can use to flash the SD card for each Raspberry Pi.

A prerequisite for configuring Kubernetes on the cluster is setting up SSH key authentication for each board (you can follow the instructions here); optionally, you can also set up headless WiFi access and credentials.

Note: when setting up each board, I set the hostnames pi0…pi4, which will become the node names in the cluster.

[image: Raspberry Pi Imager settings]

Once you have flashed the SD cards, insert them into the Pi boards and power on the cluster.

Depending on whether you set up WiFi access or plugged the cluster (or each board individually) into Ethernet, you can scan your local network for the IP address of each board.

Assuming the boot process was successful, you can use arp-scan (or your favorite network scanning tool) to find the boards on your local network. For example, this is the output for my setup:

$ sudo arp-scan --interface=en1 --localnet
Password:
Interface: en1, type: EN10MB
Starting arp-scan 1.10.0 with 512 hosts (https://github.com/royhills/arp-scan)

192.168.129.49  d8:3a:dd:d2:24:67       Raspberry Pi Trading Ltd
192.168.129.50  d8:3a:dd:d2:22:8e       Raspberry Pi Trading Ltd
192.168.129.52  d8:3a:dd:d2:25:1e       Raspberry Pi Trading Ltd
192.168.129.48  d8:3a:dd:d2:23:d4       Raspberry Pi Trading Ltd
192.168.129.51  d8:3a:dd:d2:24:76       Raspberry Pi Trading Ltd

Once you have found the IP addresses of your boards, you can SSH into each one and find its Ethernet IP address, which we will use to set up the cluster:

$ ssh -i ~/path/to/ssh/key pi@192.168.129.49
pi@pi0:~ $ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.2.2  netmask 255.255.255.0  broadcast 192.168.2.255
        inet6 fd49:8b6e:788f:6d04:da3a:ddff:fed2:2466  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::da3a:ddff:fed2:2466  prefixlen 64  scopeid 0x20<link>
        ether d8:3a:dd:d2:24:66  txqueuelen 1000  (Ethernet)
        RX packets 48135  bytes 6775993 (6.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 40750  bytes 25060917 (23.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 107

Note: while SSH-ed into each board, enable cgroups by appending cgroup_memory=1 cgroup_enable=memory to the end of the single line in /boot/cmdline.txt, then reboot. The resulting file should look similar to this:

$ cat /boot/cmdline.txt
console=serial0,115200 console=tty1 root=PARTUUID=f62dd3c1-02 rootfstype=ext4 fsck.repair=yes rootwait cfg80211.ieee80211_regdom=BE cgroup_memory=1 cgroup_enable=memory
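
As a sketch, the flags can be appended idempotently like this (note: on newer Raspberry Pi OS releases the file may live at /boot/firmware/cmdline.txt instead):

```shell
# Append the cgroup flags to the kernel command line only if they are
# missing, then reboot so they take effect. cmdline.txt must remain a
# single line, so the flags are appended to the end of that line.
CMDLINE=/boot/cmdline.txt
grep -q 'cgroup_enable=memory' "$CMDLINE" || \
  sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' "$CMDLINE"
sudo reboot
```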

Create a devices.json file that contains the hostname and ethernet IP address for each board. For example, this is what my file looks like:

[
	{
		"hostname": "pi0",
		"ip": "192.168.2.2"
	},
	{
		"hostname": "pi1",
		"ip": "192.168.2.5"
	},
	{
		"hostname": "pi2",
		"ip": "192.168.2.6"
	},
	{
		"hostname": "pi3",
		"ip": "192.168.2.3"
	},
	{
		"hostname": "pi4",
		"ip": "192.168.2.4"
	}
]

Note: you must configure a static IP address for each board; otherwise, a changed DHCP lease after a reboot can break the cluster configuration.
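
As a sketch, on Raspberry Pi OS Bookworm (which uses NetworkManager) you could pin a static address with nmcli; the connection name, address, and gateway below are examples to adapt to your network:

```shell
# Pin eth0 to a static address (run on each board; values are examples).
sudo nmcli connection modify "Wired connection 1" \
  ipv4.method manual \
  ipv4.addresses 192.168.2.2/24 \
  ipv4.gateway 192.168.2.1
sudo nmcli connection up "Wired connection 1"
```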

Configuring a k3s Kubernetes cluster with k3sup

We can use [k3sup](https://github.com/alexellis/k3sup), a tool written by Alex Ellis, to easily configure k3s on the 5-node cluster. Using the devices.json file, we run the k3sup plan command to generate a script that will configure k3s on all nodes:

$ k3sup plan \
  devices.json \
  --user pi \
  --servers 1 \
  --background > bootstrap.sh

The output of that command is the bootstrap.sh file, which in this case looks like this:

#!/bin/sh

echo "Setting up primary server 1"
k3sup install --host 192.168.2.2 \
--user pi \
--cluster \
--local-path kubeconfig \
--context picluster

echo "Fetching the server's node-token into memory"

export NODE_TOKEN=$(k3sup node-token --host 192.168.2.2 --user pi)

echo "Setting up worker: 1"
k3sup join \
--host 192.168.2.5 \
--server-host 192.168.2.2 \
--node-token "$NODE_TOKEN" \
--user pi &

echo "Setting up worker: 2"
k3sup join \
--host 192.168.2.6 \
--server-host 192.168.2.2 \
--node-token "$NODE_TOKEN" \
--user pi &

echo "Setting up worker: 3"
k3sup join \
--host 192.168.2.3 \
--server-host 192.168.2.2 \
--node-token "$NODE_TOKEN" \
--user pi &

echo "Setting up worker: 4"
k3sup join \
--host 192.168.2.4 \
--server-host 192.168.2.2 \
--node-token "$NODE_TOKEN" \
--user pi &
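
To execute the plan, make the script executable and run it (this assumes k3sup is on your PATH and the SSH key for the pi user is loaded in your agent):

```shell
# Run the generated bootstrap script; the k3sup join commands
# run in the background (note the trailing & on each).
chmod +x bootstrap.sh
./bootstrap.sh
```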

Running the script produces a kubeconfig file in the current directory (because of the --local-path kubeconfig flag), which allows us to access the cluster:

$ export KUBECONFIG=/path/to/kubeconfig
$ kubectl get nodes
NAME   STATUS   ROLES                       AGE   VERSION
pi0    Ready    control-plane,etcd,master   1m   v1.28.7+k3s1
pi1    Ready    worker                      1m   v1.28.5+k3s1
pi2    Ready    worker                      1m   v1.28.5+k3s1
pi3    Ready    worker                      1m   v1.28.5+k3s1
pi4    Ready    worker                      1m   v1.28.5+k3s1
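
Note that k3s does not assign a role label to agent nodes by default; the worker role shown above comes from a node-role label, which you can add yourself (a sketch, using the hostnames from this cluster):

```shell
# Label each agent node so "kubectl get nodes" shows a worker role.
for node in pi1 pi2 pi3 pi4; do
  kubectl label node "$node" node-role.kubernetes.io/worker=worker
done
```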

Configuring SpinKube

We will follow the installation instructions for configuring SpinKube with Helm on our cluster.

Install the CRDs required for SpinKube:

$ kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml
$ kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml
$ kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

Install cert-manager:

# Install the cert-manager CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.crds.yaml

# Add and update Jetstack repository
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install the cert-manager Helm chart
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.14.3

Install the KWasm operator:

# Add Helm repository if not already done
helm repo add kwasm http://kwasm.sh/kwasm-operator/

# Install KWasm operator
helm install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.13.1

# Provision Nodes
kubectl annotate node --all kwasm.sh/kwasm-node=true
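
After annotating, the KWasm operator runs an installer job on each node to provision the Spin shim. A sketch of how you might verify that provisioning completed (job names can differ by version):

```shell
# The installer jobs created by the KWasm operator should complete.
kubectl get jobs -n kwasm

# The RuntimeClass applied earlier should be present.
kubectl get runtimeclass
```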

Install the Spin operator:

# Install Spin Operator with Helm
helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator
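
To check that everything works end to end, you can deploy a sample Spin application; the manifest below mirrors the SpinKube quickstart sample (the image tag is an assumption — check the SpinKube docs for the current one):

```shell
# Deploy a sample SpinApp; the operator creates the Deployment and Service.
kubectl apply -f - <<'EOF'
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: simple-spinapp
spec:
  image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0"
  replicas: 2
  executor: containerd-shim-spin
EOF

# Forward a local port to the app's Service, then in another
# terminal: curl localhost:8083/hello
kubectl port-forward svc/simple-spinapp 8083:80
```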