Set up my mobile Raspberry

Up, up and away

A RasPi ToGo environment that works with any mobile system (even a phone) as well as with my MacBook. The initial thought was to have something Kubernetes-ish to use as a demo in meetings or talks. So far this is a Kubernetes cluster with a WiFi access point. I'll use this gist to track my experiences with this :)

Hardware used

All exposed services are going to be available via the router IP - which in our config is 192.168.50.1 - or via the given IP on the "WAN" side.

Services so far:

  • RaspAP : 80
  • Prometheus : 30000
  • Grafana : 32000

Setting up the Raspberry

I usually just burn the official image (Buster right now) to an SD card. For that I use balenaEtcher (go to settings so it does NOT unmount the device after verifying). When done, go to the boot drive and create an empty file called ssh. That way SSH is already enabled during the first boot. I have not attached a Raspberry to a monitor in ages.
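
A minimal sketch of that last step on macOS, assuming the freshly flashed card's boot partition mounts at /Volumes/boot (the path is an assumption - adjust if your system mounts it elsewhere):

# assumes the boot partition of the flashed card is mounted at /Volumes/boot
$ touch /Volumes/boot/ssh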

Then I just start up the device and do the usual stuff:

  • Full update of the Raspbian OS (see the sketch after this list)
  • Change the hostname to something different (here raspToGo). Otherwise you get all kinds of issues on the local network when using .local advertisements (keep in mind that hostname.local gets broadcast on the network).
  • If you are super security-conscious: create a new user and delete the pi user. I usually don't do that, as I either use a demo setup under very controlled circumstances OR use a strong password. If you delete the pi user, keep in mind to change the sudoers list accordingly.
  • I am a Sublime Text user and have RemoteSubl in use to remotely edit text files.
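
A rough sketch of the update and hostname steps (raspToGo is the name used in this gist; hostnamectl and the stock raspberrypi entry in /etc/hosts are assumptions based on a default Raspbian Buster image):

# full update of the OS
$ sudo apt-get update && sudo apt-get full-upgrade -y
# rename the host (assumes the stock hostname raspberrypi in /etc/hosts)
$ sudo hostnamectl set-hostname raspToGo
$ sudo sed -i 's/raspberrypi/raspToGo/g' /etc/hosts
$ sudo reboot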

Set up remote editing on the Raspberry

credits go here

On the raspi

sudo wget -O /usr/local/bin/rsub https://raw.github.com/aurora/rmate/master/rmate
sudo chmod a+x /usr/local/bin/rsub

On the MacBook

Add RemoteForward 52698 127.0.0.1:52698 to my .ssh/config file. I just put it at the very top, as this is a valid setting for all my Linux-based devices. As said above, I have the RemoteSubl package installed.
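
For reference, the relevant part of ~/.ssh/config would look roughly like this (the Host * block is an assumption - scope it to specific hosts if you prefer):

# ~/.ssh/config - forward the rsub/rmate port back to the MacBook
Host *
    RemoteForward 52698 127.0.0.1:52698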

Now I can just type rsub mytext.file on the Raspberry I ssh'd into and it opens the file on my MacBook in Sublime Text. Very neat.

make it a working AP

Run the RaspAp Quick Installer with $ curl -sL https://install.raspap.com | bash.

Fix the issue preventing WiFi scanning from working with $ sudo wpa_supplicant -B -Dnl80211,wext -c/etc/wpa_supplicant/wpa_supplicant.conf -iwlan0.

Make sure that the dhcp-range configuration in dnsmasq.conf looks like this: dhcp-range=192.168.50.50,192.168.50.150,12h

install docker

$ curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh
$ sudo usermod -aG docker pi
$ sudo apt-get install libffi-dev libssl-dev
$ sudo apt-get install -y python python-pip
$ sudo apt-get remove python-configparser
$ sudo pip install docker-compose

The docker-compose install will run for a short while. After that we already have a functioning Docker environment. Technically this would already be sufficient to run containerized applications. Usually, for a single Pi, I would stop here, fetch my GitHub home project and start it up. As this is going to be a demonstration device, we will add a single-node Kubernetes cluster as well.
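
A quick sanity check (my own addition, not strictly required) - you may need to log out and back in first so the docker group membership from the usermod above takes effect:

$ docker --version
$ docker-compose --version
$ docker run --rm hello-world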

install Kubernetes

base Kubernetes config & setup

Kubernetes will not work with swap enabled. Do this:

$ sudo dphys-swapfile swapoff
$ sudo dphys-swapfile uninstall
$ sudo update-rc.d dphys-swapfile remove
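
To double-check that swap is really gone (optional), free should report a swap total of 0:

$ free -m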

Add cpu and memory to the cgroup resources (note: this probably should not be run multiple times).

$ orig="$(head -n1 /boot/cmdline.txt) cgroup_enable=cpuset cgroup_enable=memory"
$ echo $orig | sudo tee /boot/cmdline.txt
$ sudo reboot

The /boot/cmdline.txt will end up looking something like this:

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=PARTUUID=e6462c02-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_enable=memory

Next, add the Kubernetes apt repository and install kubeadm:

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update -q
$ sudo apt-get install -qy kubeadm
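
Optionally (my own habit, not part of the original walkthrough) check what got installed and pin the Kubernetes packages so a routine apt upgrade doesn't move the cluster to a new version unplanned:

$ kubeadm version
$ sudo apt-mark hold kubelet kubeadm kubectl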

initialize the Cluster

Now we need to initialize the master (which in this case happens to be the only node as well):

$ sudo kubeadm init --token-ttl=0 --pod-network-cidr=10.244.0.0/16

Here you will have your first contact with a booting Kubernetes... and see all the funny errors you thought you had fixed already. I got a "don't work with swap" error here. Darn! I apparently forgot to do what I described above.

This also takes a while, as it pulls the required images from the internet.

To be able to use your newly built cluster, you need to create some basic configs as the pi user:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
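
With the kubeconfig in place, kubectl should now talk to the cluster. Note that the node will most likely still report NotReady at this point, since no pod network has been applied yet (next step):

$ kubectl get nodes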

At the end of the install, kubeadm already told you that a pod network is required. Generically this can be done via kubectl apply -f [podnetwork].yaml with one of the options listed at https://kubernetes.io/docs/concepts/cluster-administration/addons/

I am using Flannel, hence it goes like this:

$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Networking still breaks - a weekend to-do to get this working. The node shows as "NotReady".
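
Some commands I use to dig into the NotReady state (rasptogo is the hostname used above; adjust to yours):

$ kubectl get pods -n kube-system -o wide
$ kubectl describe node rasptogo
$ journalctl -u kubelet -f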

After this I would be set for a normal cluster and could start adding nodes to it via:

$ kubeadm join IP:PORT --token TOKEN --discovery-token-ca-cert-hash sha256:SHATOKEN

As this is a single-node cluster, there is one final step so that normal Pods can run on the master node:

$ kubectl taint nodes --all node-role.kubernetes.io/master-
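
To verify the taint is gone (optional check), the node should report Taints: <none>:

$ kubectl describe nodes | grep -i taint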

Tweaks

  • export KUBE_EDITOR="nano" - use nano instead of vi for kubectl edit
  • kubectl run -i --tty --rm debug --image=busybox --restart=Never -- sh - open a busybox shell within the k8s cluster

Add-On stuff

K8s Dashboard

Deploy the K8s Dashboard

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

As this is a development environment, I am lowering the overall security, as the dashboard usually is only accessible locally (localhost:port).

To do so

$ kubectl -n kubernetes-dashboard edit service kubernetes-dashboard

This opens a vi with the dashboard service yaml. Scrolling down allows me to change type: ClusterIP to type: NodePort.
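
If you prefer not to go through the editor, the same change can be done non-interactively with a patch (equivalent to the edit above, just scripted):

$ kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'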

Now I need the exposed port as well as the master IP

$ kubectl -n kubernetes-dashboard get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.100.124.90   <nodes>       443:31707/TCP   21h

$ kubectl cluster-info
Kubernetes master is running at https://xxx.xxx.xxx.xxx:xxxx
KubeDNS is running at https://xxx.xxx.xxx.xxx:xxxx/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy 

The next piece is to get a bearer token to access the dashboard. For that I created a ServiceAccount and a ClusterRoleBinding.

dashboard-adminuser.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

dashboard-adminuser-rolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

with

$ kubectl apply -f dashboard-adminuser.yaml
$ kubectl apply -f dashboard-adminuser-rolebinding.yaml

I can create the ServiceAccount and get the role bound. The command below displays the bearer token:

$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

It should look similar to:

Name:         admin-user-token-n6pbl
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 8e17b45c-d9e8-4f4c-afe2-abecadc83a39

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiI

This can be entered into the dashboard, which is reachable at masterIP:exposedPort. Et voilà - I have a dashboard up and running.

GIT

From a tooling perspective I also need git available. It should be part of the Raspbian image, but just in case I do a sudo apt-get install git.

Monitoring with Prometheus

I am going to segregate all monitoring components into a dedicated namespace:

git clone https://github.com/bibinwilson/kubernetes-prometheus ~/kubernetes-prometheus
kubectl create namespace monitoring

The git clone should provide a ClusterRole yaml, which I execute with kubectl create -f clusterRole.yaml - this creates the role and role binding for Prometheus. There should also be a basic config map, which you can apply with kubectl create -f config-map.yaml; this provides a basic set of monitoring rules. Executing kubectl create -f prometheus-deployment.yaml then creates the deployment.
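
The same steps as one block, assuming the repo was cloned to ~/kubernetes-prometheus as above and the YAMLs already carry the monitoring namespace in their metadata:

$ cd ~/kubernetes-prometheus
$ kubectl create -f clusterRole.yaml
$ kubectl create -f config-map.yaml
$ kubectl create -f prometheus-deployment.yaml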

The given file will mount the config map into /etc/prometheus

To check if the deployment was successful:

$ kubectl get deployments --namespace=monitoring
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
prometheus-deployment   0/1     1            0           14s

To access it, I could either do a port forward or set it up as a Service. As I don't want to do localhost stuff via SSH, I am going to "service" it. The git pull provides an appropriate service yaml; execute it with kubectl create -f prometheus-service.yaml --namespace=monitoring. This exposes Prometheus on the node's IP address.
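
To confirm the service is up and exposed (it should show up as a NodePort on 30000, matching the service list at the top):

$ kubectl get svc --namespace=monitoring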

first troubleshooting

Apparently something is not working... opening the Prometheus targets page shows everything red, except one endpoint, which is using an IP address. So the assumption would be that I have a DNS resolution issue with the Kubernetes-internal CoreDNS.

The get pods below reveals that the CoreDNS pods are all in a CrashLoopBackOff condition.

$ kubectl get pods -ALL
NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE     L
kube-system            coredns-5d95487b75-9f5r5                     0/1     CrashLoopBackOff   6          8m14s
kube-system            coredns-5d95487b75-bdlq8                     0/1     CrashLoopBackOff   6          8m14s
kube-system            coredns-6955765f44-mr2l4                     0/1     CrashLoopBackOff   178        32h
kube-system            etcd-rasptogo                                1/1     Running            1          32h
kube-system            kube-apiserver-rasptogo                      1/1     Running            1          32h
kube-system            kube-controller-manager-rasptogo             1/1     Running            1          32h
kube-system            kube-flannel-ds-arm-mt8fb                    1/1     Running            3          31h
kube-system            kube-proxy-vxx7h                             1/1     Running            1          32h
kube-system            kube-scheduler-rasptogo                      1/1     Running            1          32h
kubernetes-dashboard   dashboard-metrics-scraper-76585494d8-q6c26   1/1     Running            1          30h
kubernetes-dashboard   kubernetes-dashboard-5996555fd8-7zgff        1/1     Running            1          30h
monitoring             prometheus-deployment-77cb49fb5d-v5n48       1/1     Running            1          23h

Researching on the internet suggests that this

$ kubectl -n kube-system get deployment coredns -o yaml | \
  sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
  kubectl apply -f -
$ kubectl -n kube-system delete pod -l k8s-app=kube-dns

would sort the issue... but it does not. So more digging... in kubectl -n kube-system edit configmap coredns I found a line in the prometheus section calling loop - commenting this out and another kubectl -n kube-system delete pod -l k8s-app=kube-dns did sort it for a couple of minutes, but it quickly turned back into a CrashLoopBackOff condition.

So - read the internet... ALL of it... It seems like all of the above is correct to fix, but the real issue is the local system. I tested this:

$ kubectl get pods -ALL
NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE    L
default                debug                                        1/1     Running            0          71m
kube-system            coredns-5d95487b75-4vbkf                     0/1     CrashLoopBackOff   24         109m
kube-system            coredns-5d95487b75-lxpns                     0/1     CrashLoopBackOff   24         109m
$ sudo systemctl stop systemd-resolved
$ sudo nano /etc/resolv.conf

It looks like resolvconf is putting localhost in, and 127.0.0.1 is confusing CoreDNS into a loop condition. So I changed this to a Google nameserver, as it is only relevant for DNS forwards anyway and does not really affect K8s-internal name resolution.

# Generated by resolvconf
domain raspToGo.local
nameserver 8.8.8.8
# nameserver 127.0.0.1

Then reset the DNS pods and check if it works:

$ kubectl -n kube-system delete pod -l k8s-app=kube-dns
pod "coredns-5d95487b75-4vbkf" deleted
pod "coredns-5d95487b75-lxpns" deleted
$ kubectl get pods -ALL
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE    L
default                debug                                        1/1     Running   0          114m
kube-system            coredns-5d95487b75-6pkl9                     1/1     Running   0          16s
kube-system            coredns-5d95487b75-r28kr                     1/1     Running   0          16s

After letting it run for a couple of minutes... it seems to work now. To make this stick, the last step here is to fully disable systemd-resolved with sudo systemctl disable systemd-resolved.

CHAKA

Visualize with Grafana

For visualization I use Grafana - unfortunately I have not found anyone providing all the required files, so this is going by hand.

Starting with nano grafana-datasource-config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  prometheus.yaml: |-
    {
        "apiVersion": 1,
        "datasources": [
            {
               "access":"proxy",
                "editable": true,
                "name": "prometheus",
                "orgId": 1,
                "type": "prometheus",
                "url": "http://prometheus-service.monitoring.svc:8080",
                "version": 1
            }
        ]
    }

With this I create the datasource config: kubectl create -f grafana-datasource-config.yaml. Next is to nano grafana-deployment.yaml and put the deployment description below in place.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:latest
        ports:
        - name: grafana
          containerPort: 3000
        resources:
          limits:
            memory: "2Gi"
            cpu: "1000m"
          requests: 
            memory: "1Gi"
            cpu: "500m"
        volumeMounts:
          - mountPath: /var/lib/grafana
            name: grafana-storage
          - mountPath: /etc/grafana/provisioning/datasources
            name: grafana-datasources
            readOnly: false
      volumes:
        - name: grafana-storage
          emptyDir: {}
        - name: grafana-datasources
          configMap:
              defaultMode: 420
              name: grafana-datasources

Execute this with kubectl create -f grafana-deployment.yaml, then nano a service yaml:

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/port:   '3000'
spec:
  selector: 
    app: grafana
  type: NodePort  
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32000
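
Assuming the service definition above was saved as grafana-service.yaml (the filename is my choice), apply it with:

$ kubectl create -f grafana-service.yaml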

Now we can access Grafana at port 32000. Login is admin/admin and requires an immediate change of the password.

Helm and Tiller

I actually should have done this earlier, as Helm helps to automate and manage all this much better. So I likely need to reverse-engineer some of the above stuff.

$ curl -LO https://git.io/get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --tiller-image=jessestuart/tiller:latest --upgrade
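
As an optional sanity check, helm version should report both a client and a server version once the tiller-deploy pod is running:

$ helm version
$ kubectl -n kube-system get pods | grep tiller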

references
