This guide has moved to a GitHub repository to enable collaboration and community input via pull-requests.
https://github.com/alexellis/k8s-on-raspbian
Alex
```sh
#!/bin/sh

# This installs the base instructions up to the point of joining / creating a cluster

curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker

sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubeadm

echo "Adding cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory to /boot/cmdline.txt"
sudo cp /boot/cmdline.txt /boot/cmdline_backup.txt
orig="$(head -n1 /boot/cmdline.txt) cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
echo "$orig" | sudo tee /boot/cmdline.txt
echo "Please reboot"
```
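The script above appends the cgroup flags to `/boot/cmdline.txt` unconditionally, so running it twice duplicates them. Here is a minimal sketch of an idempotent version, demonstrated on a temporary copy of the file so it can be run safely anywhere (the sample cmdline content is invented; point it at `/boot/cmdline.txt` on a real Pi):

```shell
# Idempotent sketch of the cmdline.txt edit, shown on a temp copy.
CMDLINE=$(mktemp)
echo "dwc_otg.lpm_enable=0 console=tty1 root=PARTUUID=xxxx rootfstype=ext4" > "$CMDLINE"

add_flags() {
  # Append each cgroup flag only if it is not already present.
  for flag in cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory; do
    grep -q "$flag" "$1" || sed -i "1 s/$/ $flag/" "$1"
  done
}

add_flags "$CMDLINE"
add_flags "$CMDLINE"   # second run is a no-op
cat "$CMDLINE"
```

Note that the kernel expects all options on a single line, which is why the flags are appended to line 1 rather than on new lines.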
That dashboard is a 404. Should it be https://rawgit.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-arm.yaml?

The swapfile turns back on when you reboot unless you

For this line

Kubernetes, please stop changing every other day

I followed the instructions and got everything installed on a 2x Raspberry Pi 3 cluster (1 master and 1 node). But I have not been able to get the Dashboard up and running. olavt@k8s-master-1:~ $ kubectl get svc -n kube-system What URL should I use from my other computer to connect to the Dashboard?
OK, for the dashboard you need to run kubectl on your own PC/laptop. Maybe an SSH tunnel would work?
Then try 127.0.0.1:8001 on your local machine.

That didn't work for me.
First of all, thanks for the detailed setup process. After updating Raspbian I ran into a problem. See https://archlinuxarm.org/forum/viewtopic.php?f=15&t=12086#p57035 and raspberrypi/linux@ba742b5

After installation the status of all pods in the kube-system namespace is Pending, except kube-proxy (NodeLost). Any ideas?

My dashboard wouldn't work properly until I did: I could get to the dashboard using

@alexellis it should be cgroup_memory=1 not cgroup_enable=memory

cgroup_enable=memory seems to be fine under kernel 4.9.35-v7.

I've updated the instructions for the newer RPi kernel.

I had to run the "set up networking" step (install weave) in order to get "Running" back from the 3 DNS pods. Before that, they reported "Pending"... move the "set up networking" step before "check everything worked" in your instructions?

I was also only able to get both the master and 1 "slave" node to the Ready status when I first installed the "weave" networking on the master, and only after that joined the worker. K8s version 1.9.
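As several commenters note, the DNS pods stay Pending until the network add-on is installed. A small sketch of a check that lists the kube-system pods not yet Running — demonstrated here on captured sample output (the pod names below are invented; on a real cluster, feed it `kubectl -n kube-system get pods --no-headers` instead):

```shell
# Sketch: report kube-system pods that are not yet Running/Completed.
sample='etcd-master                      1/1   Running   0     5m
kube-apiserver-master            1/1   Running   0     5m
kube-dns-7b6b8b8b5-abcde         0/3   Pending   0     5m
kube-proxy-xyz12                 1/1   Running   0     5m'

# Column 3 of `kubectl get pods` output is the STATUS column.
not_ready=$(echo "$sample" | awk '$3 != "Running" && $3 != "Completed" {print $1}')
echo "Waiting on: $not_ready"
```

Looping on a check like this until the output is empty is one way to avoid joining workers before the overlay network is up.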
Has anyone experienced an issue running on Raspbian Stretch 4.9.59+?

@caedev - no, you are definitely using a Raspberry Pi 2 or 3?

Sorry, just realised I was ssh'ing into the wrong Pi; this works absolutely fine on my Pi 2. Thanks for writing this @alexellis - much appreciated.
Same experience as @charliesolomon: DNS doesn't come up until you install the weave network driver. Basically change to below:

Note: Be patient on the 2nd step, the weave driver comes up first. Once it is

In the dashboard section, you might want to mention the need for RBAC: https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges

An excellent guide, thank you! The instructions are unclear for accessing the cluster remotely but are explained here: Effectively make a copy on the local machine of the master's Then Or, per your example to create a proxy: To avoid specifying
Follow-up (kubeadm) question: what's the process to shut down and restart the cluster?
What if you'd just like to shut the cluster down correctly, to then shut down the underlying Pis and restart subsequently?
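On the shutdown question: the usual sequence is to drain each node before powering it off, then uncordon after restart. A hedged sketch below — the node names are placeholders, and the commands are printed rather than executed so the sequence can be reviewed without a cluster (on a real cluster, take the names from `kubectl get nodes`):

```shell
# Sketch of a graceful shutdown: drain workers first, master last, then
# power off each Pi. Node names are placeholders for illustration.
nodes="k8s-worker-1 k8s-worker-2 k8s-master-1"

drain_cmds() {
  for n in $1; do
    # --ignore-daemonsets is needed because kube-proxy/weave run as DaemonSets
    echo "kubectl drain $n --ignore-daemonsets --delete-local-data"
  done
}

drain_cmds "$nodes"
echo "# then on each Pi: sudo shutdown -h now"
# After restarting, bring each node back with: kubectl uncordon <node>
```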
Have been playing around with this over the weekend, really enjoying the project! I hit a block with the Kubernetes Dashboard, and realised that I couldn't connect to it via proxy due to its service being of type ClusterIP rather than NodePort.
$ kubectl -n kube-system edit service kubernetes-dashboard
$ kubectl -n kube-system get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.108.252.18 <none> 80:30294/TCP 23m
$ ssh -L 8001:127.0.0.1:31707 pi@k8s-master-1.local
Thanks again Alex!
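Once the service is switched to NodePort as above, the assigned port can be pulled out of the `kubectl get service` output for use in the SSH tunnel. A sketch, demonstrated against the output pasted in the comment above:

```shell
# Extract the NodePort from `kubectl get service` style output.
# The sample text is the output shown in the comment above.
svc='NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.108.252.18   <none>        80:30294/TCP   23m'

# PORT(S) is column 5, formatted as <port>:<nodePort>/<proto>.
node_port=$(echo "$svc" | awk '/NodePort/ {split($5, p, "[:/]"); print p[2]}')
echo "$node_port"
# Use it in the tunnel, e.g.: ssh -L 8001:127.0.0.1:$node_port pi@k8s-master-1.local
```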
Hi Alex, thanks for sharing this tutorial. I built a Raspberry Pi cluster and it is running Kubernetes and OpenFaaS as expected. The only thing is that auto-scaling in OpenFaaS doesn't work! On my computer it works, but it doesn't work in the cluster! Do I have to change something in the .yml files? I checked them but they look the same.
I had to add both cgroup_enable=memory AND cgroup_memory=1 to cmdline.txt to get it to work.
Great and very understandable post!!
Thanks for the fantastic guide, I had great fun learning about all these topics in practice over a weekend. As a switch I'm having great success with the 5-port D-Link DGS-1005D, newer versions of which use mini-USB for power. I had issues getting Weave to work on Raspbian Stretch and the Pi3 B+. Shortly after running
I also managed to set up the master as a router, with Wifi on the WAN side, using the steps in this particular post https://www.raspberrypi.org/forums/viewtopic.php?f=36&t=132674&start=50#p1252309 |
Thanks @Jickelsen I had to do the same. Fixed this by removing I was then able to run |
I can't seem to get init past

Thank you, Alex. Very detailed steps. I am using a B+ Pi as a master. Any idea why the Pi goes dead slow on initiating the Kube master?
Thanks @Jickelsen & @DerfOh! I spent all my spare time in the last three weeks trying to get Kubernetes to work again. The gist worked great at Xmas, but now once you get weavenet up on the node & synced to the master, both crash with an oops:
I've had strange issues with getting weavenet running.
Still debugging it, but it has been a fun learning experience getting K8s running on a Raspberry Pi cluster. @micedwards, I ended up writing an Ansible playbook as I kept rebuilding my cluster to see why weave kept crashing. Wrote it after running
Good evening. I have been having trouble getting kubernetes+docker running as a 2-RPi cluster. My master node continues to reboot. I followed all the steps above to configure two fresh nodes, except I used my router to establish a static IP for my master and worker node. Interestingly, my worker node seems stable so far right now. In previous attempts, when I had set up 4 additional nodes, they too became unstable. Docker version: 18.03.0-ce, build 0520e24 Master node: pi@k8boss1:~ $ kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE pi@k8boss1:~ $ kubectl get nodes pi@k8boss1:~ $ uptime Worker: Any thoughts?
On Raspbian Stretch Lite, the installation halts during the master setup phase (
I found it necessary to install Kubernetes 1.9.6:
Took 552.013509 seconds to complete, but it's up and running now! Thanks for a great tutorial! |
I am running into the same problems as @carlosroman and @micedwards after applying weave on a 4 RPi 3 cluster: Raspbian GNU/Linux 9 (stretch)
I am having more luck with flannel
If you use flannel instead of weave networking, the kernel oops does not occur. You can install flannel using
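The flannel manifest targets amd64 images, so on the Pi the architecture has to be swapped to arm — later comments do this with `sed "s/amd64/arm/g"` before piping to `kubectl create -f -`. Here the substitution is demonstrated on a single manifest line (the fragment is illustrative, matching flannel v0.10.0 image naming):

```shell
# Swap the image architecture in a flannel manifest line from amd64 to arm.
# On a real cluster: curl -sSL <manifest-url> | sed "s/amd64/arm/g" | kubectl create -f -
fragment='image: quay.io/coreos/flannel:v0.10.0-amd64'
echo "$fragment" | sed 's/amd64/arm/g'
```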
I have the same issue - hanging at 'sudo kubeadm init' on master at line: I am using a Raspberry Pi 2 B+. Have used various Raspbians (Wheezy/Stretch), various Kubernetes versions up to latest (inc. 1.9.6 as suggested by PeterKing above) and various Docker versions. Anyone with this running on a Raspberry Pi 2 with recent Raspbian, able to share the versions of all components (Raspbian + Kubernetes + Docker)? Please, I'm sick of reflashing my SD :)
Many many thanks for this bootstrap introduction! I was facing issues with the latest version (v1.10.2 - 28-04-2018) and after losing some (more) hair - kube-apiserver was dying in a loop, ultimately leading to failure of kubeadm init - I tried to downgrade both kubeadm and kubelet to 1.9.7-00 and - for now, as it's a fresh start - things are up on my RPI3 cluster... Fingers crossed :)
Kudos to @Jickelsen
@alexellis: Thanks for a great guide! |
I'm seeing Weave Net being mentioned, but I can't see that anyone has logged an issue with kubeadm or weave - I'd suggest doing that if you are seeing unexpected behaviour with newer versions of the components. The init step can take several minutes.
I have attempted to get Weave Net to work with I forked your gist and made the modifications including changing the script (prep.sh) Try it out
@alexellis : thanks for the guide, it really helped me. I've been trying to get it working with Weave for a couple of days, but in the end I gave up and went with @aaronkjones 's idea. I used flannel as the CNI and got it working on the first try. |
Same here: @aaronkjones 's guide is what worked for me as well. I took the liberty of creating a variant of this gist for those who want to use Hypriot. It also covers networking setup a bit more in-depth (local ethernet for the cluster, wifi connection via the master to reach the outside world): https://gist.github.com/elafargue/a822458ab1fe7849eff0a47bb512546f . Still a work in progress. |
Just as a heads up - @aaronkjones solution was working for me perfectly last week, but I added new worker nodes to my existing cluster today and the new nodes don't initialise flannel or kube-proxy: flannel:
kube-proxy:
Turns out, as of the last couple of days the Step in @alexellis instructions above (which installs latest version of docker):
I just downgraded the docker version on my nodes to
You can check out the packages added to the repo lists by using:
Hope this helps someone! Massive thanks to @alexellis and everyone else in this thread who have got me a working K8s cluster on RPis - learnt loads!
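To see which docker-ce builds the repo actually offers before pinning one, `apt-cache madison docker-ce` lists them. A sketch of picking a specific version string out of that output — demonstrated on illustrative sample lines, since the real repo contents change over time:

```shell
# Pick a docker-ce version string from `apt-cache madison docker-ce` style
# output. The sample lines are illustrative, not live repo data.
madison=' docker-ce | 18.05.0~ce~3-0~raspbian | https://download.docker.com/linux/raspbian stretch/edge armhf Packages
 docker-ce | 18.04.0~ce~3-0~raspbian | https://download.docker.com/linux/raspbian stretch/edge armhf Packages'

# Select the 18.04 build, the version commenters downgraded to:
ver=$(echo "$madison" | awk -F'|' '/18\.04/ {gsub(/ /, "", $2); print $2}')
echo "sudo apt-get install docker-ce=$ver"
```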
Hangs @ Fixed by downgrading kubeadm, kubectl, and kubelet to 1.9.6: AND Downgrading to Docker 18.04: |
https://github.com/aaronkjones/rpi-k8s-node-prep I modified the script to allow for a specific version of Docker and Kubeadm to be installed and also pinned to prevent upgrade. I have 4 RPis, so i made two two-node clusters and tried different combinations of Kubeadm/Docker. Docker 18.04 and Kubeadm 1.10.2-00 work for me. It has been running on Hypriot for a few days. |
Where or how do we report the issue to? |
I couldn't get 1.10.3-00 working either. For 1.10.2-00, in addition to downloading, installing, and holding the right packages, when you init the master, you need to set the version there too (otherwise, it'll default download the latest stable control images, which are 1.10.3) sudo kubeadm init --token-ttl=0 --pod-network-cidr=10.244.0.0/16 --kubernetes-version v1.10.2 |
@njohnsn, probably an issue on the Kubernetes repository at https://github.com/kubernetes/kubernetes/issues I ran into the same issue, was getting errors like the one in this comment with the latest version of kubelet: geerlingguy/raspberry-pi-dramble#100 (comment) I uninstalled docker-ce then reinstalled with The init command I used (after installing with
I had to downgrade both Kubernetes (from 1.10.3 to 1.10.2) and Docker CE (from 18.05.0 to 18.04.0) to get Kubernetes to boot and run on Debian Stretch (Raspbian Lite)... but I finally got to:
For some reason I can't get my fourth node to go into the Ready state. I've blown the SD card away and reinstalled everything from scratch twice, but unlike the other 3 nodes, it won't come up. Here is the output from syslog:
Thanks! |
@njohnsn I never got weave to work but have got flannel working. After looking into the issue I couldn't resolve the whole To get flannel to work I had to update I think you won't need that if you run |
I can report (eventual) success with the following configuration on my 4 Raspberry pis:
And thanks Alex for this original post and others who commented. Hoping this helps someone else struggling .... |
I cannot get the dashboard to work using the proxy. I got the error message:

I followed the steps and learned some RBAC along the way, but still cannot figure out where to look to solve this. Any suggestions? I also found this in the kubernetes/dashboard readme:

From what I have so far, my cluster did not install Heapster. Is it necessary to mention that in this guide?
I am late to the fun :) Has anyone followed this on the latest version? I have not been able to progress beyond this. Unfortunately, an error has occurred: This error is likely caused by: If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: Appreciate your time and help.
@Creamen downgrading as you advised worked for me. Wasted 2 nights. Step 1: uninstall Kubernetes, following the below given commands. Step 2: reboot your Pi. Step 3: install v1.9.7-00, following the below given commands. Step 4: initiate your master node.
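The downgrade steps above elide the exact commands. A hedged sketch of the pinned install, written as a small helper that builds the command for a given package version (1.9.7-00, as in the comment), followed by holding the packages so `apt-get upgrade` does not move them again:

```shell
# Build the pinned install command for a given Kubernetes package version.
# The version string 1.9.7-00 comes from the comment above; the helper
# itself is an illustration, not part of the original instructions.
pin_install_cmd() {
  echo "sudo apt-get install -qy kubelet=$1 kubectl=$1 kubeadm=$1"
}

pin_install_cmd "1.9.7-00"
# Hold the packages afterwards so apt-get upgrade leaves them alone:
echo "sudo apt-mark hold kubelet kubectl kubeadm"
```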
Still no luck.
Where do I find the logs for the dns pod? Thanks! |
Found the logs for the DNS pod:
It appears the DNS process is trying to check on the api status on port 443 when the api server is running on port 6443. -Neil |
Turns out the answer was here |
@njohnsn Logs are shown by
@kumardeepam Thanks for your work and instructions on how to downgrade (uninstall the latest version and install v1.9.7-00) when getting the error "[WARNING KubeletVersion]: couldn't get kubelet version: exit status 2". My master is running now! Also had to set cgroup_enable=memory, not cgroup_memory=1.
A useful way of downgrading Docker: after running -- run this to downgrade without having to go through a full uninstall --
Thanks for the tip @tomtom215! Following the thread above, I wonder if we need to downgrade both the k8s and Docker versions? Or is just downgrading Docker enough?
Hi there, did someone succeed in deploying it with Docker 18.05 and k8s 1.11? I tried to, but got an issue I don't understand; I'm new to Docker and k8s... Got this when trying to join:
and the controller pod keeps crashing on my master node then:

Is there a configuration that works well for someone? Which versions and which CNI? Thanks in advance :)
Mine seems to be working fine. I'm using ce=18.04.0
docker-ce=18.04.0. Running kubectl edit deploy kube-dns --namespace=kube-system to upgrade the image to 1.14.10 from 1.14.8 fixed it. If you don't get the framework correct you get some weird errors. My cluster is now happily running the armhf version of OpenFaaS and the arm version of prometheus-operator, and using the docker-incubator nfs-client-provisioner to handle PVs and PVCs served from an NFS server which is not part of the cluster. Andy
I am unable to get past the CGROUPS_MEMORY: missing issue after running sudo kubeadm init --token-ttl=0 |
@gianlazz: same thing for me, had to revert to |
I'm still unable to run a cluster at all. I'm blocked at the init:
Image used:
Docker version:
Kubernetes version: 1.10.2 |
@Gallouche Apparently the kube-controller-manager CrashLoopBackOff will be fixed in 1.11.1 according to kubernetes/kubernetes#65674 |
Thank you all for the answers, my bachelor's degree work will maybe be done in time! I will try, thanks a lot!
@deurk it looks like your kubelet service is not running; kubeadm has a dependency on kubelet which doesn't seem to be documented very well.
The problem seems to be that /etc/kubernetes/kubelet.conf is missing in the initial installation; after the first run of kubeadm init, a copy of kubelet.conf gets created.
Alright, so after 2 days of googling in multiple forums, coming back to this tutorial many times, and formatting my various microSD cards over and over again, I finally got it to work by replacing the step where it says "Add repo lists & install kubeadm" with: and where it said "Edit /boot/cmdline.txt"

Hopefully it will help some tired soul... And yes, I understand I'm running an old version of Kubernetes; I don't care, it's working and I just want to learn...
While I'd like to get this working with the latest version, I was successful in following @kumardeepam's steps to downgrade, as of July 6, 2018. The DNS pods weren't starting, and using a specific version of weave brought them online as per @haebler's advice. Thanks to all contributors and @alexellis for putting this together. Thus, the steps for success with a slightly older version circa July 2018 are as follows:

Changes in pre-req:
sudo vi /boot/cmdline.txt #

Changes in install (install v1.9.7-00, follow the below given commands):
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - &&

Install weave 1.6:
kubectl apply -f https://git.io/weave-kube-1.6

No change for init:
$ sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=192.168.0.100 --ignore-preflight-errors=ALL

Should work as expected.
I went through the process and am still having this issue:

Edit: I ran

Edit: Running into another error now that, if I understand correctly, is due to the DNS pods not having been started correctly; however, when I run

So I'm still left with this below:
Thanks @futurisma, this did not solve my problem so I reinstalled my cluster with kube* in 1.9.7 per @mrpaws instructions and that worked. Now I'm waiting for 1.11.1 to try again :) |
I'm still having the same issue after trying every single one of the above suggestions :( |
@mgazza Which issue is that? |
@kumardeepam Solution works, thanks! I had the Had luck with flannel. Weave was giving me the loopBackError. This post from @aaronkjones was important:

This post also had something to say:
PS: in between attempts I cleared things: Everything seems good now. Now I can ping master from nodes:
@deurk history
1 curl -sSL get.docker.com | sh && sudo usermod pi -aG docker
2 sudo dphys-swapfile swapoff && sudo dphys-swapfile uninstall && sudo update-rc.d dphys-swapfile remove
3 sudo nano /boot/cmdline.txt
4 sudo reboot
5 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt-get update -q && sudo apt-get install -qy kubelet=1.9.7-00 kubectl=1.9.7-00 kubeadm=1.9.7-00
6 kubectl apply -f https://git.io/weave-kube-1.6
7 sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=192.168.1.12 --ignore-preflight-errors=ALL [init] Using Kubernetes version: v1.9.9
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [raspberrypi kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- There is no internet connection, so the kubelet cannot pull the following control plane images:
- gcr.io/google_containers/kube-apiserver-arm:v1.9.9
- gcr.io/google_containers/kube-controller-manager-arm:v1.9.9
- gcr.io/google_containers/kube-scheduler-arm:v1.9.9
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster

uname -a
Linux raspberrypi 4.14.50-v7+ #1122 SMP Tue Jun 19 12:26:26 BST 2018 armv7l GNU/Linux

@aaronkjones I had no joy with your repo either, using Raspbian or Hypriot.
Kubernetes 1.11.1 has been released! Following this gist (with the /boot/cmdline.txt change) I've got openfaas & faas-cli working! AND that's with weavenet rather than flannel. [Not sure where I got the heapster.yaml file to get the dashboard working as I lost my notes for that]. `pi@shepherd:~/faas-functions $ uname -a pi@shepherd:~/faas-functions $ docker version Server: pi@shepherd:~/faas-functions $ sudo apt list kube* Off to buy more SD cards to do a backup. |
Thanks Alex! |
After a lot of trouble with Hypriot's tutorial and this gist as well, I was able to successfully deploy and run the markdownrender on two pis with @aaronkjones's guide. Much thanks to @alexellis and @aaronkjones for the development of these guides as well as the discussion in the comments. |
Hi all, I think I'm OK - can anyone tell me if I need to run Weave on the workers? If so, I'm getting a port 8080 error. Many TIAs ;-)
@micedwards how did you get the latest 1.11.1 to even have a successful install? |
OK, 1.11.2 is out. I will give it a spin tonight or tomorrow to see how it fares with this Gist. |
Took me a bit longer than expected to get to test it but I managed to make it work
Thank you very much for the write-up, very straightforward and working like a charm. Just make sure to update the Raspbian images to the latest kernel version and to follow the instructions. On Stretch Lite you don't have to create the dhcpcd.conf file yourself for static IPs, just adjust the existing file. And also, sometimes it helps to wait for a bit; getting all the kube-system containers into the Ready and Running state can take a while... Now getting ready for OpenFaaS and maybe running some in-house Ghost blogs.
@deurk How did you get it to work? I init k8s 1.11.2 but I get this error: "failed to pull image [k8s.gcr.io/kube-proxy-arm:v1.11.2]: exit status 1". Can you help me fix it?
Just wanted to share here as I had been watching this thread off and on for a while as @aaronkjones directed me here. The latest deployment using Ansible for all of this now works again. I am using Weave BTW. |
Has anyone been able to get their Rpi3 k8s cluster integrated with Gitlab-CE? I'm trying to integrate my cluster right now and it's failing to install Helm-Tiller through Gitlab CE - just failing to connect in general really. |
Figured I'd contribute some deviations from the instructions that helped me. I managed to get this running (28/08/2018) by specifying version 1.8.3 on the kubelet, kubectl, and kubeadm install, and using the flannel network.
Thank you for your example install commands for v1.9.7 @chito4! I was that tired soul you referenced that needed that info to get this working. Got it running on Raspberry Pi 3B+, 16GB Micro SD card, Raspbian Stretch Lite OS, Kubernetes v1.9.7 (kubeadm, kubectl), Weave CNI, Docker 18.06.1-ce. I haven't attempted any nodes except the master node so far, but will update if I run into problems.
I was able to get a 7-node Raspberry Pi cluster running using:
Here is my exact system state:
Note: Here's a really easy way to append the When I initialized the cluster, I used the following command:
Still no joy using the scripts.

cat /proc/device-tree/model
Raspberry Pi 2 Model B Rev 1.1

uname -a
Linux node-1 4.14.50-v7+ #1122 SMP Tue Jun 19 12:26:26 BST 2018 armv7l GNU/Linux

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
> echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
> sudo apt-get update -q && \
> sudo apt-get install -qy kubeadm
OK
deb http://apt.kubernetes.io/ kubernetes-xenial main
Hit:1 http://archive.raspberrypi.org/debian stretch InRelease
Get:2 http://raspbian.raspberrypi.org/raspbian stretch InRelease [15.0 kB]
Hit:4 https://download.docker.com/linux/raspbian stretch InRelease
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main armhf Packages [18.3 kB]
Fetched 42.3 kB in 3s (13.3 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
kubeadm is already the newest version (1.12.0-rc.1-00).

sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=192.168.1.100 --ignore-preflight-errors=ALL
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
[WARNING KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.12.0-rc.1" Control plane version: "1.11.3"
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node-1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node-1 localhost] and IPs [192.168.1.100 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.100]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster |
Hi @alexellis, thanks for your great work.
For those who want to save their SD cards running K8s like this (I know I've spent a lot of them for fun and games), the quick and dirty fix is really simple to add to this howto. I assume you have a Raspbian / rsyslog / whatever NAS running somewhere on your local network. DO NOT DO THIS OVER THE NET unless you have a hackwish ;)

On your NAS, add before GLOBAL DIRECTIVES (sudo nano /etc/rsyslog.conf):

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")

$template DynaFile,"/<YOUR NAS PATH>/%HOSTNAME%/%syslogfacility-text%.log"
*.* -?DynaFile

$ sudo systemctl reload rsyslog

(Make sure your NAS firewall accepts 514 incoming.) Then on all masters & workers, comment everything out in /etc/rsyslog.conf after the "RULES" section and add a little something like this ($ sudo nano /etc/rsyslog.conf):

# To remote syslog server
*.* @@192.168.x.y:514

sudo systemctl reload rsyslog

I tried to keep it as simple as possible; again, QUICK AND DIRTY, adjust to your own situation. No need for complicated remote logging services and so on. And trust me, your SD card will be a lot happier!!!
Thx again for that post. Unfortunately it does not work for me. So far the latest 1.9.x version is usable for me. More on that can be found here.
Running the get.docker.com script fails for me unless I disable swap first. |
thx @denhamparry, worked for me without |
The issue with kubeadm init crashing on v1.12.x is due to the kube-apiserver container running out of memory (code 137). I've put up a bug report in kubernetes/kubeadm: kubernetes/kubeadm#1279 Hopefully, we can get a solution put together so we can run the latest kubernetes... |
To get the current flannel manifest (https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml) to work on 1.12.2 I had to apply the patch suggested here:
I just went through the entire process, and as of today (2018-12-03), it is very very unstable and fragile. Recapping for anybody that is spending sleepless nights on it:
Very nice!

#kubectl get no

I use the below code to create the cluster:

--On Master node--
root: curl -sSL https://rawgit.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -
kubectl -n kube-system patch daemonset kube-flannel-ds
sysctl net.bridge.bridge-nf-call-iptables=1

--On Slaves node--
kubeadm join 192.168.x.x:6443 --token --discovery-token-ca-cert-hash sha256:

--On Master node--
kubectl cordon pi-master
Many thanks Alex for a great post, and thanks Denham, I incorporated your comments into my dashboard config. This post helped me get up and running, having experienced the same RBAC permission issues it describes (from `$ kubectl create serviceaccount dashboard` onwards): https://kubecloud.io/kubernetes-dashboard-on-arm-with-rbac-61309310a640 I hope this helps anyone who is getting to grips with a K8S RPI cluster!! :D
New problem seems to have cropped up on flannel: coreos/flannel#1060
To allow the pod to start successfully, SSH onto the worker and run |
## Install Kubernetes 1.13.1 on Raspberry Pi Cluster

This comment combines the knowledge of this gist and the many comments above plus the workings of https://github.com/aaronkjones/rpi-k8s-node-prep

### Download Raspbian Stretch Lite (2018-11-13, 4.14 kernel)

Flash it to sd cards for your cluster (in my case 5 cards). Once flashed and BEFORE you boot the pis, set up the networking by mounting the sd card (you can of course just boot the pis and set up networking on each machine; my personal preference is to do this in advance):

Turn on ssh:

Enable c-groups:
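Those two pre-boot steps amount to dropping an empty `ssh` file on the boot partition and appending the cgroup flags to `cmdline.txt`. A sketch, run here against a scratch directory so nothing is harmed; for real use, point `BOOT` at the sd card's mounted boot partition instead (and the initial `cmdline.txt` contents below are a placeholder, use the one already on the card):

```shell
# Scratch copy of a boot partition; point BOOT at the real mount for real use.
BOOT="$(mktemp -d)"
printf 'console=tty1 root=PARTUUID=xxxx rootfstype=ext4 rootwait\n' > "$BOOT/cmdline.txt"

# Turn on ssh: an empty file named "ssh" on the boot partition enables
# the ssh server on first boot.
touch "$BOOT/ssh"

# Enable c-groups: append the flags to the single kernel command line.
sed -i '1 s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' "$BOOT/cmdline.txt"
cat "$BOOT/cmdline.txt"
```

Note that `cmdline.txt` must stay a single line; the `sed` above appends to line 1 rather than adding a new line.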
I'm using wired networking on a 192.168.2.xx subnet, so set up the host entries:
Unmount the sd card, then in turn mount the other 4 sd cards, repeating the steps above and changing the ip address. Once you have set up all 5 sd cards, put them into the pis and power on the cluster. SSH to each in turn and complete the configuration and install the software with the below steps.
### Setup master node01
Save the join token and token hash; it will be needed in "Setup slave nodes02-05".

Make the local config for the pi user, so log in as pi on node01:
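If the join line has scrolled away, it helps to have teed the `kubeadm init` output to a file so it can be grepped back out later (the file name and values below are hypothetical); on a live master, `kubeadm token create --print-join-command` will also regenerate a fresh one:

```shell
# Hypothetical file captured with: sudo kubeadm init ... | tee /tmp/kubeadm-init.log
cat > /tmp/kubeadm-init.log <<'EOF'
Your Kubernetes master has initialized successfully!
  kubeadm join 192.168.2.51:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1111222233334444
EOF

# Recover the join command later:
grep "kubeadm join" /tmp/kubeadm-init.log
```

Tokens expire after 24 hours by default, which is another reason the regenerate command is handy when adding nodes later.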
Check it's working (except the dns pods won't be ready):
### Setup kubernetes overlay networking
### Setup slave nodes02-05

Join the cluster using the join token and token hash from when you ran kubeadm on node01:
Back on the master node01 check the nodes have joined the cluster and that pods are running:
### Deploy dashboard

Deploy the tls-disabled version of the dashboard:
To access the dashboard, start the proxy on node01: Then from your pc, point your browser at:
@andyburgin I followed your instructions, but I can't get the master node running... The
I waited for some time until all pods were "Running":
(is it possible that kube-dns and kube-proxy are missing?) Then I applied the two weave-net files you mentioned:
But the weave-net pod will not become "Running"...
10.96.0.1 seems to be the kubernetes service IP:
|
Oooookayy... I finally managed to get it working \o/ I wrote a small bash script that checks for |
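A generic polling helper for this kind of wait-and-retry loop might look like this (a sketch of the pattern, not the author's actual script):

```shell
# usage: retry_until <max_tries> <delay_seconds> <command...>
# Runs the command until it succeeds, giving up after max_tries attempts.
retry_until() {
  max=$1; delay=$2; shift 2
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    sleep "$delay"
  done
}

# Example: wait up to 5 minutes for the apiserver to answer:
# retry_until 60 5 kubectl get nodes
retry_until 3 0 true && echo "command succeeded"
# prints "command succeeded"
```

Wrapping the flaky step in something like this beats rebooting and hoping, since the loop gives up cleanly after a bounded number of attempts.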
I just set up a working cluster but couldn't get the master running on an RPI 2. Moved SD card over to a RPI 3 and then |
Wondering if anyone has gotten helm/tiller working in this configuration? |
@rnbwkat I got it working but I had to specify a different tiller image, one which was compatible with ARM. The command I used was:
|
@janpieper I’ve run into the “node not found” error. Looking through all the comments, I was going to follow the same steps you did. I wonder what versions of k8s and docker you’ve installed.
I tried to set up the cluster following the steps described, but still didn't get a successful kubeadm init. I tried different versions of k8s and docker. Is there somebody who has the steps to get 1.13-3 working with 18.09.0?
@janpieper can you share the script? |
@janpieper steps worked up until the point everyone mentioned, and rather than the script that polls and zaps the config, I found you can do the same (after the initial failure) by running these commands (lifted from this issue kubernetes/kubeadm#1380)
Then I installed flannel.
Something that threw me off: the shell demo that Kubernetes provides works fine (`kubectl apply -f https://k8s.io/examples/application/shell-demo.yaml`), docs here: But it fails when doing a deployment of nginx from their example here: Turns out the nginx image isn't compatible with ARM; once I changed the image to a Pi-supported image (tobi312/rpi-nginx
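One way to apply that image swap is to rewrite the manifest before feeding it to `kubectl apply -f`. A sketch (the deployment snippet below is a hypothetical slice of the upstream example, with the `tobi312/rpi-nginx` image mentioned above):

```shell
# Hypothetical slice of the upstream nginx deployment example:
cat > /tmp/nginx-deployment.yml <<'EOF'
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
EOF

# Swap in the ARM-compatible image before running "kubectl apply -f":
sed -i "s|image: nginx:.*|image: tobi312/rpi-nginx|" /tmp/nginx-deployment.yml
grep "image:" /tmp/nginx-deployment.yml
# the image line now reads: image: tobi312/rpi-nginx
```

The general lesson for an RPI cluster: whenever a pod sits in `CrashLoopBackOff` or errors with `exec format error`, check whether the image actually publishes an arm variant before debugging anything else.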
This is great. It'd be very cool to have this operate unattended. (or as unattended as possible)