K8s on (vanilla) Raspbian Lite

Yes - you can create a Kubernetes cluster on Raspberry Pis running Raspbian, the default operating system. Carry on using all the tools and packages you're used to with the officially-supported OS.

Pre-reqs:

  • You must use an RPi2 or 3 for Kubernetes
  • I'm assuming you're using wired ethernet (Wi-Fi also works)

Master node setup

  • Flash Raspbian to a fresh SD card.

You can use Etcher.io to burn the SD card.

Before booting, create an empty file called ssh in /boot/ on the SD card so that SSH is enabled on first boot.
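For example, if the SD card's boot partition is mounted at /media/$USER/boot (the mount point is an assumption - it varies by OS), you can create the file like this:

$ touch /media/$USER/boot/ssh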

Use Raspbian Stretch Lite

Update: I previously recommended downloading Raspbian Jessie instead of Stretch. At the time of writing (3 Jan 2018), Stretch is fully compatible.

https://www.raspberrypi.org/downloads/raspbian/

  • Change hostname

Use the raspi-config utility to change the hostname to k8s-master-1 or similar and then reboot.
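If you prefer to script this rather than use the interactive menu, here's a minimal sketch (it assumes a stock Raspbian Stretch image whose default hostname raspberrypi is still present in /etc/hosts):

$ sudo hostnamectl set-hostname k8s-master-1
$ sudo sed -i 's/raspberrypi/k8s-master-1/g' /etc/hosts
$ sudo reboot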

  • Set a static IP address

It's not fun when your cluster breaks because the IP of your master has changed. Let's fix that problem ahead of time:

sudo tee -a /etc/dhcpcd.conf

Paste this block:

interface eth0
static ip_address=192.168.0.100/24
static routers=192.168.0.1
static domain_name_servers=8.8.8.8

Hit Control + D.

Change 100 to 101, 102, 103 and so on for each additional node.

You may also need to make a reservation on your router's DHCP table so these addresses don't get given out to other devices on your network.

  • Install Docker

This installs 17.12 or newer.

$ curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker
  • Disable swap

For Kubernetes 1.7 and newer you will get an error if swap space is enabled.

Turn off swap:

$ sudo dphys-swapfile swapoff && \
  sudo dphys-swapfile uninstall && \
  sudo update-rc.d dphys-swapfile remove

This should now show no entries:

$ sudo swapon --summary
  • Edit /boot/cmdline.txt

Add this text at the end of the line, but don't create any new lines:

cgroup_enable=cpuset cgroup_enable=memory

Note: on newer Raspbian kernels the cgroup_enable=memory flag has been replaced by cgroup_memory=1 (see the comments below); the quick-setup script at the end of this gist already uses the new flag.
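If you'd rather append the flags from the shell than edit the file by hand, here's a minimal sketch (the same approach the prep script at the end of this gist uses; it adds both the old and the new memory flag so it works on either kernel, since unknown boot flags are simply ignored):

$ sudo cp /boot/cmdline.txt /boot/cmdline_backup.txt
$ orig="$(head -n1 /boot/cmdline.txt) cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1"
$ echo "$orig" | sudo tee /boot/cmdline.txt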

Now reboot - do not skip this step.

  • Add repo lists & install kubeadm
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update -q && \
  sudo apt-get install -qy kubeadm

You'll notice this says 'xenial' in the apt listing - don't worry, it still works on Raspbian.

  • You now have two new commands installed:

  • kubeadm - used to create new clusters or join an existing one

  • kubectl - the CLI administration tool for Kubernetes

  • Initialize your master node:

$ sudo kubeadm init --token-ttl=0

We pass in --token-ttl=0 so that the token never expires - do not use this setting in production. The UX for kubeadm means it's currently very hard to get a join token later on after the initial token has expired.
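If you do end up with an expired token later, newer kubeadm releases can mint a fresh one on the master - a hedged note, since the --print-join-command flag is only available in more recent versions:

$ sudo kubeadm token create --print-join-command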

Optionally also pass --apiserver-advertise-address=192.168.0.27 with the IP of the Pi.

Note: This step will take a long time, even up to 15 minutes.

After the init is complete run the snippet given to you on the command-line:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

This step takes the key generated for cluster administration and makes it available in a default location for use with kubectl.

  • Now save your join-token

By default a join token is only valid for 24 hours (ours won't expire because we passed --token-ttl=0), so save it into a text file. Here's an example of mine:

$ kubeadm join --token 9e700f.7dc97f5e3a45c9e5 192.168.0.27:6443 --discovery-token-ca-cert-hash sha256:95cbb9ee5536aa61ec0239d6edd8598af68758308d0a0425848ae1af28859bea
  • Check everything worked:
$ kubectl get pods --namespace=kube-system
NAME                           READY     STATUS    RESTARTS   AGE                
etcd-of-2                      1/1       Running   0          12m                
kube-apiserver-of-2            1/1       Running   2          12m                
kube-controller-manager-of-2   1/1       Running   1          11m                
kube-dns-66ffd5c588-d8292      3/3       Running   0          11m                
kube-proxy-xcj5h               1/1       Running   0          11m                
kube-scheduler-of-2            1/1       Running   0          11m                
weave-net-zz9rz                2/2       Running   0          5m 

You should see the "READY" count showing as 1/1 for most services as above. The DNS pod runs three containers, so you'll see 3/3 for that, and weave-net shows 2/2. Note: several commenters report that kube-dns stays Pending until the networking step below is complete, so don't worry if it isn't Running yet.

  • Set up networking

Install Weave network driver

$ kubectl apply -f https://git.io/weave-kube-1.6

If you run into an issue use this script instead:

$ kubectl apply -f \
 "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Join other nodes

On the other RPis, repeat everything apart from kubeadm init.

  • Change hostname

Use the raspi-config utility to change the hostname to k8s-worker-1 or similar and then reboot.

  • Join the cluster

Replace the token / IP with the output you got from the master node:

$ sudo kubeadm join --token 1fd0d8.67e7083ed7ec08f3 192.168.0.27:6443

You can now run this on the master:

$ kubectl get nodes
NAME      STATUS     AGE       VERSION
k8s-1     Ready      5m        v1.7.4
k8s-2     Ready      10m       v1.7.4

Deploy a container

function.yml

apiVersion: v1
kind: Service
metadata:
  name: markdownrender
  labels:
    app: markdownrender
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31118
  selector:
    app: markdownrender
---
apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: markdownrender
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: markdownrender
    spec:
      containers:
      - name: markdownrender
        image: functions/markdownrender:latest-armhf
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP

Deploy and test:

$ kubectl create -f function.yml
$ curl -4 http://localhost:31118 -d "# test"
<p><h1>test</h1></p>

From a remote machine such as your laptop use the IP address of your Kubernetes master and try the same again.
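For example, assuming the master's IP is 192.168.0.27 as used earlier in this guide:

$ curl -4 http://192.168.0.27:31118 -d "# test"
<p><h1>test</h1></p>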

Start up the dashboard

The dashboard can be useful for visualising the state and health of your system, but it does require the equivalent of "root" in the cluster. If you want to proceed, first apply the ClusterRoleBinding below (taken from the docs) to grant the dashboard's service account those rights.

echo -n 'apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system' | kubectl apply -f -

This is the development/alternative dashboard which has TLS disabled and is easier to use.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard-arm.yaml

You can then find the IP and port via kubectl get svc -n kube-system. To access this from your laptop you will need to use kubectl proxy and navigate to http://localhost:8001/ on the master, or tunnel to this address with ssh.
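For example, a minimal sketch of the tunnel approach (assuming the master is reachable as k8s-master-1.local, as used elsewhere in this guide):

# on the master: start the proxy, which listens on 127.0.0.1:8001
$ kubectl proxy
# on your laptop: forward local port 8001 to the master's proxy
$ ssh -L 8001:127.0.0.1:8001 pi@k8s-master-1.local
# then browse to http://localhost:8001/ui on the laptop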

Remove the test deployment

Now on the Kubernetes master remove the test deployment:

$ kubectl delete -f function.yml

Moving on

Now head back over to the tutorial and deploy OpenFaaS.

prep.sh

#!/bin/sh
# This installs the base instructions up to the point of joining / creating a cluster
curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker
sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubeadm
echo Adding " cgroup_enable=cpuset cgroup_memory=1" to /boot/cmdline.txt
sudo cp /boot/cmdline.txt /boot/cmdline_backup.txt
orig="$(head -n1 /boot/cmdline.txt) cgroup_enable=cpuset cgroup_memory=1"
echo "$orig" | sudo tee /boot/cmdline.txt
echo Please reboot

Use this to set up quickly:

$ curl -sL \
 https://gist.githubusercontent.com/alexellis/fdbc90de7691a1b9edb545c17da2d975/raw/b04f1e9250c61a8ff554bfe3475b6dd050062484/prep.sh \
 | sudo sh

Lewiscowles1986 commented Oct 12, 2017

This is great. It'd be very cool to have this operate unattended. (or as unattended as possible)

The swapfile turns back on when you reboot unless you

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove

For this line curl localhost:31118 -d "# test" I had to use the full host name. Localhost is still 127.0.0.1 and it doesn't seem to be listening

alexellis (Owner) commented Oct 25, 2017

Kubernetes please stop changing every other day 👎

olavt commented Oct 29, 2017

I followed the instructions and got everything installed on a 2x Raspberry PI 3 cluster (1 master and 1 node). But, I have not been able to get the Dashboard up and running.

olavt@k8s-master-1:~ $ kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   5h
kubernetes-dashboard   ClusterIP   10.104.85.132   <none>        443/TCP         4h
olavt@k8s-master-1:~ $ kubectl proxy
Starting to serve on 127.0.0.1:8001

What is the Url I should use from my other computer to connect to the Dashboard?

alexellis (Owner) commented Oct 30, 2017

OK for the dashboard you need to run kubectl on your own PC/laptop. Maybe an SSH tunnel would work?

ssh -L 8001:127.0.0.1:8001 pi@k8s-master-1.local

then try 127.0.0.1:8001 on your local machine

olavt commented Oct 30, 2017

That didn't work for me.

steini commented Nov 3, 2017

First of all thanks for the detailed setup process.

After updating Raspbian I ran into the problem that sudo kubeadm join raised the error CGROUPS_MEMORY: missing. The boot option is no longer cgroup_enable=memory but cgroup_memory=1

See https://archlinuxarm.org/forum/viewtopic.php?f=15&t=12086#p57035 and raspberrypi/linux@ba742b5

movingbytes commented Nov 5, 2017

after installation the status of all pods in namespace kube-system is pending except kube-proxy (NodeLost). Any ideas?
Using docker 17.10 and K8S 1.8.2

borrillis commented Nov 15, 2017

My dashboard wouldn't work properly until I did:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

I could get to the dashboard using kubectl proxy and opened the url http://localhost:8001/ui in a browser, but it couldn't get any data from the api.

@alexellis it should be cgroup_memory=1 not cgroup_enable=memory

krystan commented Dec 12, 2017

cgroup_enable=memory seems to be fine under kernel 4.9.35-v7.

alexellis (Owner) commented Dec 23, 2017

I've updated the instructions for the newer RPi kernel.

charliesolomon commented Jan 1, 2018

I had to run the "set up networking" step (install weave) in order to get "Running" back from the 3 DNS pods. Before that, they reported "Pending"... move the "set up networking" step before "check everything worked" in your instructions?

teekay commented Jan 4, 2018

I was also only able to get both Master and 1 "slave" node to the Ready status when I first installed the "weave" networking on the master, and only after that joined the worker. K8s version 1.9.

caedev commented Jan 8, 2018

Has anyone experienced an issue kubeadm? I'm getting Illegal instruction when I try to run it.

Running on Raspbian Stretch 4.9.59+.

alexellis (Owner) commented Jan 8, 2018

@caedev - no, you are definitely using a Raspberry Pi 2 or 3?

caedev commented Jan 8, 2018

Sorry, just realised I was ssh'ing into the wrong pi; this works absolutely fine on my Pi 2. Thanks for writing this @alexellis - much appreciated.

haebler commented Jan 9, 2018

same experience as @charliesolomon, DNS doesn't come up until you install the weave network driver.

Basically change to below:

  • Install network driver kubectl apply -f https://git.io/weave-kube-1.6
  • Check status: kubectl get pods --namespace=kube-system

Note: Be patient on the 2nd step, the weave driver comes up first. Once it is Running DNS goes from Pending to ContainerCreating to Running.

In the dashboard section, you might want to mention the need for rbac: https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges

DazWilkin commented Jan 20, 2018

An excellent guide, thank you!

The instructions are unclear for accessing the cluster remotely but are explained here:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#optional-controlling-your-cluster-from-machines-other-than-the-master

Effectively make a copy on the local machine of the master's /etc/kubernetes/admin.conf perhaps named k8s_pi.conf

Then kubectl --kubeconfig ./k8s_pi.conf get nodes

Or, per your example to create a proxy: kubectl --kubeconfig ./k8s_pi.conf proxy &

To avoid specifying --kubeconfig repeatedly, you can merge the contents of k8s_pi.conf into the default config ~/.kube/config
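For instance, one hedged way to do that merge (kubectl config view --flatten writes a single self-contained config; back up the original first):

$ cp ~/.kube/config ~/.kube/config.backup
$ KUBECONFIG=~/.kube/config:./k8s_pi.conf kubectl config view --flatten > /tmp/merged-config
$ mv /tmp/merged-config ~/.kube/config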

Follow-up (kubeadm) question: What's the process to shutdown and restart the cluster?

kubeadm reset seems more of a teardown.

What if you'd just like to shut the cluster down correctly to then shutdown the underlying Pis and restart subsequently?

denhamparry commented Jan 29, 2018

Have been playing around with this over the weekend, really enjoying the project!

I hit a block with Kubernetes Dashboard, and realised that I couldn't connect to it via proxy due to it being set as a ClusterIP rather than a NodeIP.

  • Edit kubernetes-dashboard service.
$ kubectl -n kube-system edit service kubernetes-dashboard
  • You should then see the yaml representation of the service. Change type: ClusterIP to type: NodePort and save the file.
  • Check port on which Dashboard was exposed.
$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.108.252.18   <none>        80:30294/TCP   23m
  • Create a proxy to view within your browser
$ ssh -L 8001:127.0.0.1:31707 pi@k8s-master-1.local

Thanks again Alex!
