k3s on multipass: calico + longhorn

Install multipass

Instructions: https://multipass.run/
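
For reference, the install is usually a single command, assuming snap on Linux or Homebrew on macOS:

# Linux
sudo snap install multipass
# macOS
brew install --cask multipass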

Launch VMs

multipass launch --cpus 2 --disk 8G --mem 4G --name k3s-master ubuntu
multipass launch --cpus 4 --disk 10G --mem 8G --name k3s-node-1 ubuntu
multipass launch --cpus 4 --disk 10G --mem 8G --name k3s-node-2 ubuntu
multipass launch --cpus 4 --disk 10G --mem 8G --name k3s-node-3 ubuntu
multipass launch --cpus 4 --disk 10G --mem 8G --name k3s-node-4 ubuntu
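
All five VMs should report Running before continuing:

multipass list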

Bootstrap k3s master

With Calico

multipass exec k3s-master -- /bin/bash -c \
  'curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy --cluster-cidr=192.168.0.0/16" sh -'

The 192.168.0.0/16 CIDR is hardcoded to match the custom-resources.yaml definition at the bottom of this gist. It can be changed, but the ipPools CIDR in the Calico YAML has to be adjusted to match.
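
A hypothetical example with 10.10.0.0/16 instead (not needed for this walkthrough):

multipass exec k3s-master -- /bin/bash -c \
  'curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy --cluster-cidr=10.10.0.0/16" sh -'

The cidr: field under ipPools in custom-resources.yaml then has to read 10.10.0.0/16 as well.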

multipass exec k3s-master -- kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

Use custom-resources.yaml from this gist:

multipass exec k3s-master -- kubectl create -f https://gist.github.com/radekg/8872660e4bff87c4c0b884c47d6a8647/raw/af01571e766c7f0c17ffdd4c4d4ca77ced5d53b0/custom-resources.yaml
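
The operator needs a little while to roll Calico out; its workloads land in the tigera-operator and calico-system namespaces:

multipass exec k3s-master -- watch kubectl get pods -n calico-system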

Without Calico

multipass exec k3s-master -- /bin/bash -c \
  'curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -'

Get data required to bootstrap nodes

export K3S_MASTER_IP=$(multipass list --format json | jq '.list[] | select(.name == "k3s-master").ipv4[0]' -r)
export K3S_TOKEN=$(multipass exec k3s-master -- /bin/bash -c "sudo cat /var/lib/rancher/k3s/server/node-token")
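
A quick sanity check that both variables are set (only a token prefix is printed):

echo "master ip: ${K3S_MASTER_IP}"
echo "token:     ${K3S_TOKEN:0:12}..."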

Bootstrap nodes

multipass exec k3s-node-1 -- /bin/bash -c "curl -sfL https://get.k3s.io | K3S_TOKEN=${K3S_TOKEN} K3S_URL=https://${K3S_MASTER_IP}:6443 sh -"
multipass exec k3s-node-2 -- /bin/bash -c "curl -sfL https://get.k3s.io | K3S_TOKEN=${K3S_TOKEN} K3S_URL=https://${K3S_MASTER_IP}:6443 sh -"
multipass exec k3s-node-3 -- /bin/bash -c "curl -sfL https://get.k3s.io | K3S_TOKEN=${K3S_TOKEN} K3S_URL=https://${K3S_MASTER_IP}:6443 sh -"
multipass exec k3s-node-4 -- /bin/bash -c "curl -sfL https://get.k3s.io | K3S_TOKEN=${K3S_TOKEN} K3S_URL=https://${K3S_MASTER_IP}:6443 sh -"
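
The four commands differ only by node name, so an equivalent loop (same token and URL) is:

for i in 1 2 3 4; do
  multipass exec "k3s-node-$i" -- /bin/bash -c "curl -sfL https://get.k3s.io | K3S_TOKEN=${K3S_TOKEN} K3S_URL=https://${K3S_MASTER_IP}:6443 sh -"
done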

Verify

multipass exec k3s-master -- watch kubectl get nodes

Wait for all nodes to become Ready.

Get k3s config locally

multipass exec k3s-master -- /bin/bash -c "sudo cat /etc/rancher/k3s/k3s.yaml" > $HOME/.k3s.yaml
sed -i "s/127.0.0.1/${K3S_MASTER_IP}/" $HOME/.k3s.yaml
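
To avoid prefixing every local command, the file can be exported for the current shell (the commands below still spell it out explicitly):

export KUBECONFIG=$HOME/.k3s.yaml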

Install helm

Instructions: https://helm.sh/docs/intro/install/#from-snap
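
On a snap-enabled system that is:

sudo snap install helm --classic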

Set up kubectl

cd /tmp
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
cd -
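
With kubectl installed, the kubeconfig fetched earlier can be verified from the host:

KUBECONFIG=$HOME/.k3s.yaml kubectl get nodes -o wide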

Longhorn

helm repo add longhorn https://charts.longhorn.io
helm repo update
multipass exec k3s-master -- kubectl create namespace longhorn-system
KUBECONFIG=$HOME/.k3s.yaml helm install longhorn longhorn/longhorn --namespace longhorn-system
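
The chart registers a longhorn StorageClass, which should show up right away:

KUBECONFIG=$HOME/.k3s.yaml kubectl get storageclass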

Validate longhorn

This might take a while; keep watching until every pod is Running:

multipass exec k3s-master -- watch kubectl -n longhorn-system get pod

When all pods are Running, inspect the frontend service:

kubectl get service longhorn-frontend -n longhorn-system -o wide

Output looks like:

NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
longhorn-frontend   ClusterIP   10.43.55.109   <none>        80/TCP    79m   app=longhorn-ui

Run an access pod:

kubectl run access --rm -ti --image=alpine:3.14 -- /bin/sh

In the container:

 apk add curl
 curl -v http://10.43.55.109

Output looks like:

*   Trying 10.43.55.109:80...
* Connected to 10.43.55.109 (10.43.55.109) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.43.55.109
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.20.1
< Date: Mon, 15 Nov 2021 01:10:43 GMT
< Content-Type: text/html
< Content-Length: 1025
< Last-Modified: Thu, 07 Oct 2021 11:29:55 GMT
< Connection: keep-alive
< Vary: Accept-Encoding
< ETag: "615eda33-401"
< Cache-Control: max-age=0
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html lang="en">

<head>
 <meta charset="UTF-8">
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
 <meta http-equiv="X-UA-Compatible" content="IE=edge">
 <!--[if lte IE 10]>
     <script
       src="https://as.alipayobjects.com/g/component/??console-polyfill/0.2.2/index.js,media-match/2.0.2/media.match.min.js"></script>
 <![endif]-->
 <style>
   ::-webkit-scrollbar {
     width: 10px;
     height: 1px;
   }

   ::-webkit-scrollbar-thumb {
     border-radius: 10px;
     -webkit-box-shadow: inset 0 0 5px rgba(0,0,0,0.1);
     background: #535353;
   }
 </style>
<link href="./styles.css?fc8ec4140abd497a1b70" rel="stylesheet"></head>

<body>
 <div id="root"></div>
<script type="text/javascript" src="./runtime~main.2bf68965.js?fc8ec4140abd497a1b70"></script><script type="text/javascript" src="./styles.22f36a89.async.js?fc8ec4140abd497a1b70"></script><script type="text/javascript" src="./main.c3b456f8.async.js?fc8ec4140abd497a1b70"></script></body>

</html>
* Connection #0 to host 10.43.55.109 left intact
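
The UI responding is one signal; a throwaway PersistentVolumeClaim confirms that Longhorn actually provisions volumes (the name longhorn-test is arbitrary). With Longhorn's default Immediate binding mode it should reach Bound within a minute or so:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc longhorn-test
kubectl delete pvc longhorn-test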

TODO

Setting up basic auth by following the instructions from https://longhorn.io/docs/1.2.2/deploy/accessing-the-ui/longhorn-ingress/ does not work; the ingress never prompts for credentials. Traefik may be the culprit, so the next step is to try a Kubernetes cluster without Traefik.
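
A sketch of that experiment (untested here): k3s can be installed with the bundled Traefik disabled via its documented --disable flag, after which a different ingress controller can be tried:

multipass exec k3s-master -- /bin/bash -c \
  'curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--disable traefik" sh -'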

Delete everything

multipass delete --all --purge

custom-resources.yaml

# This section includes base Calico installation configuration.
# For more information, see: https://docs.projectcalico.org/v3.21/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    containerIPForwarding: Enabled
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://docs.projectcalico.org/v3.21/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}