K8s on Raspbian

Kubernetes on (vanilla) Raspbian Lite

Yes - you can create a Kubernetes cluster with Raspberry Pis running Raspbian, the default operating system. This means you can carry on using all the tools and packages you're used to with the officially-supported OS.

Pre-reqs:

  • You must use an RPi 2 or 3 to run Kubernetes
  • I'm assuming you're using wired ethernet (Wi-Fi also works, but it's not recommended)

Master node setup

  • Flash Raspbian to a fresh SD card.

You can use Etcher.io to burn the SD card.

Before booting, create an empty file called ssh in /boot/ on the SD card - this enables SSH on first boot.
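
A minimal sketch, assuming the SD card's boot partition is mounted at /Volumes/boot (typical on macOS) - adjust the path for your OS:

# Create the empty ssh file on the boot partition to enable SSH on first boot
$ touch /Volumes/boot/ssh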

Use Raspbian Stretch Lite

Update: I previously recommended downloading Raspbian Jessie instead of Stretch. At time of writing (3 Jan 2018) Stretch is now fully compatible.

https://www.raspberrypi.org/downloads/raspbian/

  • Change hostname

Use the raspi-config utility to change the hostname to k8s-master-1 or similar and then reboot.
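
If you're working over SSH and prefer not to use the menu, a sketch of the same change (assuming the default hostname is still raspberrypi):

# Replace the default hostname in both files, then reboot for it to take effect
$ sudo sed -i 's/raspberrypi/k8s-master-1/g' /etc/hostname /etc/hosts && \
  sudo reboot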

  • Set a static IP address

It's not fun when your cluster breaks because the IP of your master changed. Let's fix that problem ahead of time:

sudo tee -a /etc/dhcpcd.conf

Paste this block:

interface eth0
static ip_address=192.168.0.100/24
static routers=192.168.0.1
static domain_name_servers=8.8.8.8

Hit Control + D.

Change 100 to 101, 102, 103 etc. for each subsequent node.

You may also need to make a reservation on your router's DHCP table so these addresses don't get given out to other devices on your network.
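
If you're scripting the node preparation, the same block can be appended in one non-interactive step (a sketch using the example addresses above - substitute your own):

# Append the static-IP block for eth0 to dhcpcd's configuration
cat <<'EOF' | sudo tee -a /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.100/24
static routers=192.168.0.1
static domain_name_servers=8.8.8.8
EOF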

  • Install Docker

This installs 17.12 or newer.

$ curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker
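
The usermod change only applies to new logins, so log out and back in (or keep using sudo) before running docker as the pi user. A quick check that the daemon is up:

# Prints client and server versions; the Server section confirms the daemon is running
$ sudo docker version
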
  • Disable swap

For Kubernetes 1.7 and newer you will get an error if swap space is enabled.

Turn off swap:

$ sudo dphys-swapfile swapoff && \
  sudo dphys-swapfile uninstall && \
  sudo update-rc.d dphys-swapfile remove

This should now show no entries:

$ sudo swapon --summary
  • Edit /boot/cmdline.txt

Add this text at the end of the line, but don't create any new lines:

cgroup_enable=cpuset cgroup_enable=memory

Some people in the comments suggest cgroup_enable=memory should now be: cgroup_memory=1.

Now reboot - do not skip this step.
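
If you'd rather not edit the file by hand, the helper script at the end of this gist appends the flags like this (note it uses cgroup_memory=1, per the comments below):

# Back up cmdline.txt, append the cgroup flags to the single kernel command line, then reboot
sudo cp /boot/cmdline.txt /boot/cmdline_backup.txt
orig="$(head -n1 /boot/cmdline.txt) cgroup_enable=cpuset cgroup_memory=1"
echo $orig | sudo tee /boot/cmdline.txt
sudo reboot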

  • Add repo lists & install kubeadm
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update -q && \
  sudo apt-get install -qy kubeadm

I realise this says 'xenial' in the apt listing - don't worry, it still works.

  • You now have two new commands installed:

  • kubeadm - used to create new clusters or join an existing one

  • kubectl - the CLI administration tool for Kubernetes

  • Initialize your master node:

$ sudo kubeadm init --token-ttl=0

We pass in --token-ttl=0 so that the token never expires - do not use this setting in production. The UX for kubeadm means it's currently very hard to get a join token later on after the initial token has expired.

Optionally also pass --apiserver-advertise-address=192.168.0.27 with the IP of the Pi.

Note: This step will take a long time, even up to 15 minutes.

After the init is complete run the snippet given to you on the command-line:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

This step takes the key generated for cluster administration and makes it available in a default location for use with kubectl.
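
You can confirm kubectl is talking to the API server with a quick check (the master may report NotReady until the networking step below is done - see the comments about kube-dns staying Pending):

# List the cluster's nodes; only the master exists at this point
$ kubectl get nodes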

  • Now save your join-token

Your join token is normally only valid for 24 hours (with --token-ttl=0 above it won't expire), so save it into a text file. Here's an example of mine:

$ kubeadm join --token 9e700f.7dc97f5e3a45c9e5 192.168.0.27:6443 --discovery-token-ca-cert-hash sha256:95cbb9ee5536aa61ec0239d6edd8598af68758308d0a0425848ae1af28859bea
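
If you do lose the join command, recent kubeadm versions can regenerate one on the master (a sketch - flag support depends on your kubeadm version):

# Create a new token and print the full join command to run on workers
$ sudo kubeadm token create --print-join-command
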
  • Check everything worked:
$ kubectl get pods --namespace=kube-system
NAME                           READY     STATUS    RESTARTS   AGE                
etcd-of-2                      1/1       Running   0          12m                
kube-apiserver-of-2            1/1       Running   2          12m                
kube-controller-manager-of-2   1/1       Running   1          11m                
kube-dns-66ffd5c588-d8292      3/3       Running   0          11m                
kube-proxy-xcj5h               1/1       Running   0          11m                
kube-scheduler-of-2            1/1       Running   0          11m                
weave-net-zz9rz                2/2       Running   0          5m 

You should see the "READY" count showing as 1/1 for all services as above. DNS uses three pods, so you'll see 3/3 for that.

Set up networking

Install Weave network driver

$ kubectl apply -f \
 "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Join other nodes

On the other RPis, repeat everything apart from kubeadm init.

  • Change hostname

Use the raspi-config utility to change the hostname to k8s-worker-1 or similar and then reboot.

  • Join the cluster

Replace the token / IP with the output you got from the master node:

$ sudo kubeadm join --token 1fd0d8.67e7083ed7ec08f3 192.168.0.27:6443

You can now run this on the master:

$ kubectl get nodes
NAME      STATUS     AGE       VERSION
k8s-1     Ready      5m        v1.7.4
k8s-2     Ready      10m       v1.7.4

Deploy a container

This container will expose an HTTP port and convert Markdown to HTML. Just post a body to it via curl - follow the instructions below.

function.yml

apiVersion: v1
kind: Service
metadata:
  name: markdownrender
  labels:
    app: markdownrender
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31118
  selector:
    app: markdownrender
---
apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: markdownrender
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: markdownrender
    spec:
      containers:
      - name: markdownrender
        image: functions/markdownrender:latest-armhf
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP

Deploy and test:

$ kubectl create -f function.yml

Once the Docker image has been pulled from the hub and the Pod is running you can access it via curl:

$ curl -4 http://127.0.0.1:31118 -d "# test"
<p><h1>test</h1></p>

If you want to call the service from a remote machine such as your laptop then use the IP address of your Kubernetes master node and try the same again.
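
For example, with the master on the static address used earlier (192.168.0.100 in the example - substitute your master's IP or hostname):

# Post Markdown to the NodePort service from another machine on the LAN
$ curl -4 http://192.168.0.100:31118 -d "# test"
<p><h1>test</h1></p>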

Start up the dashboard

The dashboard can be useful for visualising the state and health of your system, but it does require the equivalent of "root" in the cluster. If you want to proceed you should first apply a ClusterRoleBinding such as the one below (see the access-control section of the dashboard docs).

echo -n 'apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system' | kubectl apply -f -

This is the development/alternative dashboard which has TLS disabled and is easier to use.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard-arm.yaml

You can then find the IP and port via kubectl get svc -n kube-system. To access this from your laptop you will need to use kubectl proxy and navigate to http://localhost:8001/ on the master, or tunnel to this address with ssh.
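
A sketch of the tunnel approach (adjust the user and hostname for your master), as suggested in the comments below:

# Forward local port 8001 to the kubectl proxy running on the master
$ ssh -L 8001:127.0.0.1:8001 pi@k8s-master-1.local

With kubectl proxy running on the master, browse to http://localhost:8001/ from your laptop.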

Remove the test deployment

Now on the Kubernetes master remove the test deployment:

$ kubectl delete -f function.yml

Moving on

Now head back over to the tutorial and deploy OpenFaaS to put the cluster through its paces.

#!/bin/sh
# This installs the base instructions up to the point of joining / creating a cluster
curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker
sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubeadm
echo Adding " cgroup_enable=cpuset cgroup_memory=1" to /boot/cmdline.txt
sudo cp /boot/cmdline.txt /boot/cmdline_backup.txt
orig="$(head -n1 /boot/cmdline.txt) cgroup_enable=cpuset cgroup_memory=1"
echo $orig | sudo tee /boot/cmdline.txt
echo Please reboot

Use this to set up quickly

# curl -sL \
 https://gist.githubusercontent.com/alexellis/fdbc90de7691a1b9edb545c17da2d975/raw/b04f1e9250c61a8ff554bfe3475b6dd050062484/prep.sh \
 | sudo sh
@Lewiscowles1986 commented Oct 12, 2017

This is great. It'd be very cool to have this operate unattended. (or as unattended as possible)

@shanselman commented Oct 25, 2017

The swapfile turns back on when you reboot unless you

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove

@shanselman commented Oct 25, 2017

For this line curl localhost:31118 -d "# test" I had to use the full host name. Localhost is still 127.0.0.1 and it doesn't seem to be listening


@alexellis (Owner) commented Oct 25, 2017

Kubernetes please stop changing every other day 👎


@olavt commented Oct 29, 2017

I followed the instructions and got everything installed on a 2x Raspberry PI 3 cluster (1 master and 1 node). But, I have not been able to get the Dashboard up and running.

olavt@k8s-master-1:~ $ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 5h
kubernetes-dashboard ClusterIP 10.104.85.132 443/TCP 4h
olavt@k8s-master-1:~ $ kubectl proxy
Starting to serve on 127.0.0.1:8001

What is the Url I should use from my other computer to connect to the Dashboard?


@alexellis (Owner) commented Oct 30, 2017

OK for the dashboard you need to run kubectl on your own PC/laptop. Maybe an SSH tunnel would work?

ssh -L 8001:127.0.0.1:8001 pi@k8s-master-1.local

then try 127.0.0.1:8001 on your local machine


@olavt commented Oct 30, 2017

That didn't work for me.


@steini commented Nov 3, 2017

First of all thanks for the detailed setup process.

After updating Raspbian I ran into the problem that sudo kubeadm join raised the error CGROUPS_MEMORY: missing. The boot option is no longer cgroup_enable=memory but cgroup_memory=1

See https://archlinuxarm.org/forum/viewtopic.php?f=15&t=12086#p57035 and raspberrypi/linux@ba742b5


@movingbytes commented Nov 5, 2017

After installation the status of all pods in namespace kube-system is pending except kube-proxy (NodeLost). Any ideas?
Using docker 17.10 and K8S 1.8.2


@borrillis commented Nov 15, 2017

My dashboard wouldn't work properly until I did:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

I could get to the dashboard using kubectl proxy and opened the url http://localhost:8001/ui in a browser, but it couldn't get any data from the api.


@francis2211 commented Nov 22, 2017

@alexellis it should be cgroup_memory=1 not cgroup_enable=memory


@krystan commented Dec 12, 2017

cgroup_enable=memory seems to be fine under kernel 4.9.35-v7.


@alexellis (Owner) commented Dec 23, 2017

I've updated the instructions for the newer RPi kernel.


@charliesolomon commented Jan 1, 2018

I had to run the "set up networking" step (install weave) in order to get "Running" back from the 3 DNS pods. Before that, they reported "Pending"... move the "set up networking" step before "check everything worked" in your instructions?


@teekay commented Jan 4, 2018

I was also only able to get both Master and 1 "slave" node to the Ready status when I first installed the "weave" networking on the master, and only after that joined the worker. K8s version 1.9.


@evnsio commented Jan 8, 2018

Has anyone experienced an issue with kubeadm? I'm getting Illegal instruction when I try to run it.

Running on Raspbian Stretch 4.9.59+.


@alexellis (Owner) commented Jan 8, 2018

@caedev - no, you are definitely using a Raspberry Pi 2 or 3?


@evnsio commented Jan 8, 2018

Sorry, just realised I was ssh'ing into the wrong pi; this works absolutely fine on my Pi 2. Thanks for writing this @alexellis - much appreciated.


@haebler commented Jan 9, 2018

Same experience as @charliesolomon, DNS doesn't come up until you install the weave network driver.

Basically change to below:

  • Install network driver kubectl apply -f https://git.io/weave-kube-1.6
  • Check status: kubectl get pods --namespace=kube-system

Note: Be patient on the 2nd step, the weave driver comes up first. Once it is Running DNS goes from Pending to ContainerCreating to Running.


@chris-short commented Jan 13, 2018

In the dashboard section, you might want to mention the need for rbac: https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges


@DazWilkin commented Jan 20, 2018

An excellent guide, thank you!

The instructions are unclear for accessing the cluster remotely but are explained here:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#optional-controlling-your-cluster-from-machines-other-than-the-master

Effectively make a copy on the local machine of the master's /etc/kubernetes/admin.conf perhaps named k8s_pi.conf

Then kubectl --kubeconfig ./k8s_pi.conf get nodes

Or, per your example to create a proxy: kubectl --kubeconfig ./k8s_pi.conf proxy &

To avoid specifying --kubeconfig repeatedly, you can merge the contents of k8s_pi.conf into the default config ~/.kube/config


@DazWilkin commented Jan 20, 2018

Follow-up (kubeadm) question: What's the process to shutdown and restart the cluster?

kubeadm reset seems more of a teardown.

What if you'd just like to shut the cluster down correctly to then shutdown the underlying Pis and restart subsequently?


@denhamparry commented Jan 29, 2018

Have been playing around with this over the weekend, really enjoying the project!

I hit a block with Kubernetes Dashboard, and realised that I couldn't connect to it via proxy due to it being set as a ClusterIP rather than a NodeIP.

  • Edit kubernetes-dashboard service.
$ kubectl -n kube-system edit service kubernetes-dashboard
  • You should then see the YAML representation of the service. Change type: ClusterIP to type: NodePort and save the file.
  • Check port on which Dashboard was exposed.
$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.108.252.18   <none>        80:30294/TCP   23m
  • Create a proxy to view within your browser
$ ssh -L 8001:127.0.0.1:31707 pi@k8s-master-1.local

Thanks again Alex!


@yosephbuitrago commented Mar 2, 2018

Hi Alex, thanks for sharing this tutorial. I built a Raspberry Pi cluster and it is running Kubernetes and OpenFaaS as expected. The only thing is that auto-scaling in OpenFaaS doesn't work! It works on my computer, but it doesn't work in the cluster!

Do I have to change something in the .yml files? I checked them but they look the same.


@johndcollins commented Mar 13, 2018

I had to add both cgroup_memory=memory AND cgroup_memory=1 to the cmdline.txt to get it to work.


@bilalAchahbar commented Mar 21, 2018

Great and very understandable post!!
I've set up the Kubernetes dashboard through the NodePort and can access it on my host, but the certificates still give a lot of issues.
Is it possible to use Let's Encrypt for the Kubernetes dashboard?
As I am new to the concept of certificates for websites, can anyone point me to how I can do this through an automatic service like Let's Encrypt?


@Jickelsen commented Apr 1, 2018

Thanks for the fantastic guide, I had great fun learning about all these topics in practice over a weekend. As a switch I'm having great success with the 5-port D-Link DGS-1005D, newer versions of which use mini-USB for power.

I had issues getting Weave to work on Raspbian Stretch and the Pi3 B+. Shortly after running
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
the master and connected nodes would reboot unexpectedly, and would leave the cluster in an error state.
I ended up using flannel:

  • Use --pod-network-cidr=10.244.0.0/16 when initializing the cluster
    $ sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=<internal master ip> --pod-network-cidr=10.244.0.0/16
  • Install flannel with
    $ curl -sSL https://rawgit.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -

I also managed to set up the master as a router, with Wifi on the WAN side, using the steps in this particular post https://www.raspberrypi.org/forums/viewtopic.php?f=36&t=132674&start=50#p1252309


@DerfOh commented Apr 7, 2018

Thanks @Jickelsen, I had to do the same.
In addition to that, my nodes were also stuck in a NotReady state due to the following error:
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Fixed this by removing KUBELET_NETWORK_ARGS from /etc/systemd/system/kubelet.service.d/10-kubeadm.conf then rebooting according to this issue: kubernetes/kubernetes#38653

I was then able to run
curl -sSL https://rawgit.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -
without issue.


@exp0nge commented Apr 7, 2018

I can't seem to get init past [init] This might take a minute or longer if the control plane images have to be pulled. There are so many issues on kubernetes/kubeadm about this. I used a fresh install of Raspbian Lite (March 3rd update). Anyone else get this or know a workaround?


@rashray commented Apr 8, 2018

Thank you Alex, very detailed steps. I am using a B+ Pi as a master. Any idea why the Pi goes dead slow on initiating the Kube master?


@micedwards commented Apr 9, 2018

Thanks @Jickelsen & @DerfOh! I spent all my spare time in the last three weeks trying to get kubernetes to work again. The gist worked great at Xmas but now once you get weavenet up on the node & synced to the master, both crash with an oops:
kernel:[ 4286.584219] Internal error: Oops: 80000007 [#1] SMP ARM
kernel:[ 4287.037510] Process weaver (pid: 13327, stack limit = 0x9bb12210)
kernel:[ 4287.059886] Stack: (0x9bb139f0 to 0x9bb14000)
kernel:[ 4287.081698] 39e0: 00000000 00000000 5001a8c0 9bb13a88
kernel:[ 4287.125181] 3a00: 0000801a 0000db84 9bab4150 9bab4118 9bb13d2c 7f63bad0 00000001 9bb13a5c
Finally I can finish writing my ansible play-book to automate the whole thing.

Thanks @Jickelsen & @DerfOh! I spent all my spare time in the last three weeks trying to get kubernetes to work again. The gist worked great at Xmas but now once you get weavenet up on the node & synced to the master, both crash with an oops:
kernel:[ 4286.584219] Internal error: Oops: 80000007 [#1] SMP ARM
kernel:[ 4287.037510] Process weaver (pid: 13327, stack limit = 0x9bb12210)
kernel:[ 4287.059886] Stack: (0x9bb139f0 to 0x9bb14000)
kernel:[ 4287.081698] 39e0: 00000000 00000000 5001a8c0 9bb13a88
kernel:[ 4287.125181] 3a00: 0000801a 0000db84 9bab4150 9bab4118 9bb13d2c 7f63bad0 00000001 9bb13a5c
Finally I can finish writing my ansible play-book to automate the whole thing.

@carlosroman commented Apr 10, 2018

I've had strange issues with getting weavenet running

NAMESPACE     NAME                                      READY     STATUS              RESTARTS   AGE
kube-system   weave-net-8t7zd                           2/2       Running             494        1d
kube-system   weave-net-gpcnj                           1/2       CrashLoopBackOff    417        1d
kube-system   weave-net-m7tnn                           1/2       ImageInspectError   0          1d
kube-system   weave-net-qmjwk                           1/2       ImageInspectError   0          1d
kube-system   weave-net-rvwpj                           2/2       Running             534        1d

Still debugging it, but it has been a fun learning experience getting K8s running on a Raspberry Pi cluster.

@micedwards, I ended up writing an Ansible playbook as I kept rebuilding my cluster to see why weave kept crashing. I wrote it after accidentally running kubeadm reset on the master or on a node. Now I have a playbook that sets up my cluster and adds nodes to it as well. Any improvements would be great: https://github.com/carlosroman/ansible-k8s-raspberry-playbook.


@ScubaJimmer commented Apr 12, 2018

Good Evening.

I have been having trouble getting kubernetes+docker running as a 2 RPI cluster. My master node continues to reboot. I followed all the steps above to configure two fresh nodes, except I used my router to establish a static IP for my master and worker node. Interestingly my worker node seems stable so far right now. In previous attempts, when I had set up 4 additional nodes they too became unstable.
The master node was stable before I joined my first worker node

Docker version: 18.03.0-ce, build 0520e24
Kubernetes version : 1.10

Master node:

pi@k8boss1:~ $ kubectl get pods --namespace=kube-system

NAME READY STATUS RESTARTS AGE
etcd-k8boss1 1/1 Running 33 1d
kube-apiserver-k8boss1 1/1 Running 34 1d
kube-controller-manager-k8boss1 1/1 Running 34 1d
kube-dns-686d6fb9c-hwxxw 0/3 Error 0 1d
kube-proxy-8v8z7 0/1 Error 33 1d
kube-proxy-dgqxp 1/1 Running 0 1h
kube-scheduler-k8boss1 1/1 Running 34 1d
weave-net-ggxwp 2/2 Running 0 1h
weave-net-l7xsl 0/2 Error 71 1d

pi@k8boss1:~ $ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8boss1 Ready master 1d v1.10.0
k8worker1 Ready 1h v1.10.0

pi@k8boss1:~ $ uptime
01:48:50 up 0 min, 1 user, load average: 1.37, 0.41, 0.14
pi@k8boss1:~ $

Worker:
pi@k8worker1:~ $ uptime
01:49:35 up 1:58, 1 user, load average: 0.11, 0.21, 0.19
pi@k8worker1:~ $

Any thoughts?


@peterkingsbury commented Apr 19, 2018

On Raspbian Stretch Lite, the installation halts during the master setup phase (sudo kubeadm init --token-ttl=0) with the following output:

[init] This might take a minute or longer if the control plane images have to be pulled.

I found it necessary to install Kubernetes 1.9.6:

sudo apt-get install -y kubeadm=1.9.6-00 kubectl=1.9.6-00 kubelet=1.9.6-00

Took 552.013509 seconds to complete, but it's up and running now!

Thanks for a great tutorial!


@danielvaughan commented Apr 21, 2018

I am running into the same problems as @carlosroman and @micedwards after applying weave on a 4 RPi 3 cluster:

Raspbian GNU/Linux 9 (stretch)
Docker version 18.04.0-ce, build 3d479c0
Kubernetes v1.10.1

pi@k8s-master-1:~ $ kubectl get pods --namespace=kube-system
NAME                                   READY     STATUS              RESTARTS   AGE
etcd-k8s-master-1                      1/1       Running             22         10h
kube-apiserver-k8s-master-1            1/1       Running             39         10h
kube-controller-manager-k8s-master-1   1/1       Running             13         10h
kube-dns-686d6fb9c-qn2mp               0/3       Pending             0          10h
kube-proxy-6dlz4                       1/1       Running             11         9h
kube-proxy-7s977                       1/1       Running             2          9h
kube-proxy-q7jlh                       1/1       Running             11         10h
kube-proxy-qdmp7                       1/1       Running             2          9h
kube-scheduler-k8s-master-1            1/1       Running             13         10h
weave-net-5scxb                        2/2       Running             1          2m
weave-net-5vxzw                        1/2       CrashLoopBackOff    4          2m
weave-net-jmlzc                        1/2       ImageInspectError   0          2m
weave-net-xc2f8                        1/2       ImageInspectError   1          2m
pi@k8s-master-1:~ $
Message from syslogd@k8s-master-1 at Apr 22 08:04:14 ...
 kernel:[  155.252476] Internal error: Oops: 80000007 [#1] SMP ARM

I am having more luck with flannel

sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=10.1.1.200 --pod-network-cidr=10.244.0.0/16
curl -sSL https://rawgit.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -
pi@k8s-master-1:~ $ kubectl get pods --namespace=kube-system
NAME                                   READY     STATUS    RESTARTS   AGE
etcd-k8s-master-1                      1/1       Running   0          5m
kube-apiserver-k8s-master-1            1/1       Running   0          5m
kube-controller-manager-k8s-master-1   1/1       Running   0          5m
kube-dns-686d6fb9c-xxrbg               3/3       Running   0          5m
kube-flannel-ds-gxt4n                  1/1       Running   0          23s
kube-flannel-ds-hngfv                  1/1       Running   0          2m
kube-flannel-ds-mgxdn                  1/1       Running   0          1m
kube-flannel-ds-qb8ch                  1/1       Running   0          3m
kube-proxy-4kxr8                       1/1       Running   0          1m
kube-proxy-54q5g                       1/1       Running   0          5m
kube-proxy-7zb4p                       1/1       Running   0          23s
kube-proxy-rwvp4                       1/1       Running   0          2m
kube-scheduler-k8s-master-1            1/1       Running   0          5m
