Set up a k3s multi-node cluster running on multiple physical machines in a local network (home lab).
Install k3s nodes (with Traefik): https://computingforgeeks.com/install-kubernetes-on-ubuntu-using-k3s/
Edit the k3s config to disable Traefik: k3s-io/k3s#1160 (comment)
Install Nginx Ingress: https://kubernetes.github.io/ingress-nginx/deploy/
- Pi4 2GB (Ubuntu 32bit) as Master x 1
- Ubuntu 20.04 (64bit) as Worker x 2
- My laptop as the client machine, where I use kubectl to interact with the cluster.
- Install OpenSSH on all nodes:
sudo apt update && sudo apt upgrade -y
sudo apt install openssh-server
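Optionally, make sure the SSH service is enabled and running (on Ubuntu the service is named ssh):
sudo systemctl enable --now ssh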
- Identify each node's local IP address:
ip addr
- Set a hostname for each node:
sudo vi /etc/hostname
TIP: hostnames can be master-01, node-01 ... node-0x, etc.
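Alternatively, the hostname can be set with hostnamectl; for example (master-01 is just the sample name from the TIP above):
sudo hostnamectl set-hostname master-01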
On the master node, bind the worker nodes to their hostnames in /etc/hosts, e.g.:
<worker node ip> <worker node hostname>
On the worker nodes, bind the master node to its hostname in /etc/hosts the same way.
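For illustration only, assuming placeholder IPs in a 192.168.1.x network and the sample hostnames above:
# /etc/hosts on the master node
192.168.1.11 node-01
192.168.1.12 node-02
# /etc/hosts on each worker node
192.168.1.10 master-01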
- Install docker (Ubuntu 20.04). Note: the steps can differ for other Ubuntu versions.
Add Docker APT repository:
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
Install Docker CE on Ubuntu 20.04:
sudo apt update
sudo apt install docker-ce -y
Start & Enable docker
sudo systemctl start docker
sudo systemctl enable docker
Review docker
systemctl status docker
Run docker without sudo
sudo groupadd docker
sudo gpasswd -a $USER docker
newgrp docker
Link: https://itectec.com/ubuntu/ubuntu-how-to-use-docker-without-sudo/
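To confirm docker runs without sudo, you can try the hello-world test image (it pulls a tiny image from Docker Hub):
docker run hello-world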
- Set up the master node (SSH to the master node first):
curl -sfL https://get.k3s.io | sh -s - --docker --no-deploy traefik --node-taint CriticalAddonsOnly=true:NoExecute
Note! This installs k3s without Traefik. If you have already installed it with Traefik, you can disable it afterwards; see one of my references in this gist. --node-taint tells the master node not to run any workload (CriticalAddonsOnly=true:NoExecute is the taint suggested in the k3s docs), so pods will be scheduled on the worker nodes.
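To verify the taint, run this on the master node (master-01 is the sample hostname from above; sudo is needed because the k3s kubeconfig is root-only by default):
sudo kubectl describe node master-01 | grep -i taint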
Review k3s status on the master node:
systemctl status k3s
Allow firewall ports on the master node:
sudo ufw allow 6443/tcp
sudo ufw allow 443/tcp
- Grab the token to add worker nodes later (join_token):
sudo cat /var/lib/rancher/k3s/server/node-token
- Install k3s on the worker nodes and connect them to the master (SSH to the worker nodes first):
curl -sfL https://get.k3s.io | K3S_URL=https://<master_IP>:6443 K3S_TOKEN=<join_token> sh -s - --docker
Review the agent you just installed:
sudo systemctl status k3s-agent
- Check the added nodes (via SSH on the master node):
kubectl get nodes
All steps below are advanced and optional.
------------------------------------------ Interact with the cluster remotely ------------------------------------------
- Install a load balancer to allow using kubectl remotely from the client machine. On the client machine:
- Install nginx.
- Use this nginx.conf (attachment) for the load balancer.
- Configure the cluster config on the client machine:
- Install kubectl on the client machine if you don't have it. Install guide: https://kubernetes.io/docs/tasks/tools/
- From the master node: copy the file /etc/rancher/k3s/k3s.yaml to the client machine (sudo is needed to read it). See the sketch after this list.
- On the client machine (I use macOS): find (if it exists) or create the file ~/.kube/config
- If the file exists, try to merge it with the k3s.yaml we copied above (you need to understand what a kube config is). Remember to set the current context to our k3s cluster.
- If the file does not exist, just copy the content of k3s.yaml to ~/.kube/config.
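A minimal sketch of the copy step, assuming the master's IP is 192.168.1.10 and its login user is ubuntu (both placeholders); the server address in k3s.yaml points at 127.0.0.1 and must be changed to the master's IP (or to your load balancer's address) so kubectl can reach the cluster remotely:
# On the master node: make a readable copy of the kubeconfig
sudo cat /etc/rancher/k3s/k3s.yaml > ~/k3s.yaml
# On the client machine: fetch it and point it at the master
scp ubuntu@192.168.1.10:~/k3s.yaml ~/.kube/config
sed -i '' 's/127.0.0.1/192.168.1.10/' ~/.kube/config   # BSD/macOS sed; on Linux drop the '' after -i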
Now from client machine:
kubectl get nodes
You should see all our remote nodes.
------------------------------------------ Install kubernetes dashboard (Run remotely)-------------------------------
From client machine:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
Reference: https://rancher.com/docs/k3s/latest/en/installation/kube-dashboard/ Check new version: https://github.com/kubernetes/dashboard/releases
Create dashboard.admin-user.yml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
Create file dashboard.admin-user-role.yml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Create user and role
kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml
Print the token (used later to log in to the dashboard UI, so save it somewhere on your machine):
kubectl -n kubernetes-dashboard describe secret admin-user-token | grep '^token'
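If grep prints nothing (secret names vary between versions), list the secrets in the namespace and describe the admin-user token secret by its full name:
kubectl -n kubernetes-dashboard get secrets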
Access the dashboard via kubectl proxy:
kubectl proxy
Navigate to this URL and enter the token to log in:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
---------------------------------------------- Install nginx ingress ---------------------------------------------------
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
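To verify that the ingress controller came up (ingress-nginx is the namespace created by the manifest above):
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx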