@dwitzig
Last active April 10, 2024 06:09

Deploy k3s cluster on Vultr with RancherOS over ZeroTier network

How to deploy a k3s cluster on RancherOS nodes connected via a ZeroTier network.

Deploy master node

1. Update master-deploy.sh with your deployment variables

SSH_KEY='ssh-rsa XXX'

REGION='au'
NET_IFACE='zt0' # zerotier interface name. 
NETWORK_ID='zerotier-network-id' # zerotier network configured via https://my.zerotier.com
CLUSTER_SECRET='super-secret-cluster-key' # random k3s cluster secret (must be same for all nodes) 

I'm deploying on Vultr and using its metadata service to set the node details. If you're not using Vultr, replace these with your provider's equivalent or set them manually.

# Vultr metadata service
V4_PRIVATE_IP=`wget -qO-  http://169.254.169.254/v1/interfaces/1/ipv4/address`
V4_PUBLIC_IP=`wget -qO- http://169.254.169.254/current/meta-data/public-ipv4`
INSTANCE_ID=`wget -qO- http://169.254.169.254/current/meta-data/instance-id`
HOSTNAME=`wget -qO- http://169.254.169.254/current/meta-data/hostname`
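
If your provider has no metadata service, a minimal sketch of setting the same variables by hand (every value below is a placeholder, not something from the original script):

```shell
# Manual equivalents of the Vultr metadata lookups -- substitute your
# node's real addresses and names for these placeholder values.
V4_PRIVATE_IP='10.1.0.5'
V4_PUBLIC_IP='203.0.113.10'
INSTANCE_ID='node-01'
HOSTNAME='k3s-master-1'
echo "deploying $HOSTNAME ($INSTANCE_ID): private=$V4_PRIVATE_IP public=$V4_PUBLIC_IP"
```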

2. Deploy using ipxe

Make master-deploy.sh publicly available (at least temporarily), then update and use the iPXE script below to boot RancherOS and install using master-deploy.sh.

#!ipxe

# Location of your shell script.
set cloud-config-url http://<url-to-master-deploy.sh>ros-cc-zt-k3s-server.sh

set base-url http://releases.rancher.com/os/latest

kernel ${base-url}/vmlinuz rancher.cloud_init.datasources=[cmdline] cloud-config-url=${cloud-config-url}
initrd ${base-url}/initrd
boot
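
For the "publicly available" part, one quick option is to serve the script's directory over plain HTTP while the node boots. This is only a sketch assuming python3 on your workstation; any static host or object store works just as well:

```shell
# Serve the directory containing master-deploy.sh on port 8000, confirm it
# answers locally, then stop it once the install has fetched the script.
python3 -m http.server 8000 >/dev/null 2>&1 &
SRV=$!
sleep 1
# iPXE would fetch http://<workstation-ip>:8000/master-deploy.sh here
STATUS=$(wget -qO- http://127.0.0.1:8000/ >/dev/null && echo ok || echo fail)
kill $SRV
echo "local check: $STATUS"
```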

After a few minutes, check your ZeroTier network; you will see a new member awaiting authorization.

The k3s server and node services sit in a wait loop until the ZeroTier interface is up and has an IP address.
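
That wait loop (wait-for-network.sh, written into the cloud-config below) reduces to the pattern here. This sketch polls lo so it runs anywhere; the real script parses ifconfig output for the ZeroTier interface (zt0) instead:

```shell
# Poll until the interface reports an IPv4 address, then capture it.
net=lo   # stand-in for the real ZeroTier interface (zt0)
until ip -4 -o addr show "$net" | grep -q 'inet '; do
  >&2 echo "Network $net IP not available yet, waiting..."
  sleep 5
done
# ip -4 -o prints one line per address; field 4 is "addr/prefix".
IP=$(ip -4 -o addr show "$net" | awk '{print $4}' | cut -d/ -f1)
echo "Network $net IP = $IP"
```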

Helpful tip: if you want to control the IP address manually, set it in the ZeroTier console before ticking Auth.

3. Get kubeconfig and connect

SSH into your new node; you can find the kubeconfig at /opt/k3s-config/kubeconfig.yaml

You will need to replace server: https://0.0.0.0:6443 with your ZeroTier IP or public IP.

You should now be able to see the master node when running kubectl get nodes
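
The edit is a one-line substitution. A minimal sketch, where the printf stands in for the real kubeconfig copied off the node and 10.147.17.1 is a placeholder ZeroTier IP:

```shell
ZT_IP='10.147.17.1'   # placeholder: your master's ZeroTier (or public) IP
# Stand-in for the real file fetched from /opt/k3s-config/kubeconfig.yaml
printf '    server: https://0.0.0.0:6443\n' > kubeconfig.yaml
sed -i "s|https://0.0.0.0:6443|https://$ZT_IP:6443|" kubeconfig.yaml
cat kubeconfig.yaml
```

With the real file, point kubectl at it: kubectl --kubeconfig kubeconfig.yaml get nodes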

Deploy worker nodes

This time use worker-deploy.sh and follow steps 1 & 2 above.

Make sure you use the same CLUSTER_SECRET and set MASTER_IP to the ZT IP address of the master node.

A few minutes after you authorise the new node in ZeroTier, you should see it when running kubectl get nodes
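
Before booting a worker it's worth sanity-checking the two values that must line up with the master. A small pre-flight sketch (both values are placeholders):

```shell
# The worker must reuse the master's CLUSTER_SECRET exactly and point
# MASTER_IP at the master's ZeroTier address (placeholders shown).
CLUSTER_SECRET='super-secret-cluster-key'
MASTER_IP='10.147.17.1'
for v in CLUSTER_SECRET MASTER_IP; do
  eval "val=\$$v"
  [ -n "$val" ] || { echo "missing $v" >&2; exit 1; }
done
echo "worker will register against https://$MASTER_IP:6443"
```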

master-deploy.sh:

#!/bin/bash
SSH_KEY='ssh-rsa XXX'
REGION='au'
NET_IFACE='zt0'
NETWORK_ID='zerotier-network-id'
CLUSTER_SECRET='super-secret-cluster-key'
# Vultr metadata service
V4_PRIVATE_IP=`wget -qO- http://169.254.169.254/v1/interfaces/1/ipv4/address`
V4_PUBLIC_IP=`wget -qO- http://169.254.169.254/current/meta-data/public-ipv4`
INSTANCE_ID=`wget -qO- http://169.254.169.254/current/meta-data/instance-id`
HOSTNAME=`wget -qO- http://169.254.169.254/current/meta-data/hostname`
cat > "cloud-config.yaml" <<EOF
#cloud-config
hostname: $HOSTNAME
ssh_authorized_keys:
  - $SSH_KEY
write_files:
  - path: "/opt/zerotier-one/devicemap"
    permissions: '0600'
    owner: 999:999
    content: |
      $NETWORK_ID=$NET_IFACE
  - path: "/opt/zerotier-one/networks.d/$NETWORK_ID.local.conf"
    permissions: '0600'
    owner: 999:999
    content: |
      allowManaged=1
      allowGlobal=1
      allowDefault=1
  - path: "/opt/zerotier-one/local.conf"
    permissions: '0600'
    owner: 999:999
    content: |
      {
        "settings": {
          "interfacePrefixBlacklist": [ "veth","cni","docker","flannel","virbr" ]
        }
      }
EOF
cat >> "cloud-config.yaml" <<'EOF'
  - path: "/opt/k3s/wait-for-network.sh"
    permissions: '0755'
    # owner: 999:999
    content: |
      #!/bin/sh
      # wait-for-network.sh: wait until the given interface has an IP,
      # then exec the remaining args with -IP- replaced by that IP.
      set -e
      net=$1
      shift
      cmd=$@
      until [ -n "$(ifconfig | grep -A 1 $net | tail -1 | cut -d ":" -f 2 | cut -d " " -f 1)" ]; do
        >&2 echo "Network $net IP not available yet, waiting..."
        sleep 5
      done
      IP=$(ifconfig | grep -A 1 $net | tail -1 | cut -d ":" -f 2 | cut -d " " -f 1)
      >&2 echo "Network $net IP = $IP, continue..."
      export K3S_URL="https://$IP:6443"
      CMD=$(echo $cmd | sed -e "s/-IP-/$IP/g")
      echo $CMD
      exec $CMD
EOF
cat >> "cloud-config.yaml" <<EOF
rancher:
  sysctl:
    net.bridge.bridge-nf-call-iptables: 1
    net.ipv4.ip_forward: 1
  debug: false
  cloud_init:
    datasources:
      - ec2
  state:
    formatzero: true
    fstype: auto
    dev: LABEL=RANCHER_STATE
    autoformat:
      - /dev/vda
      - /dev/sda
  network:
    dns:
      nameservers:
        - 1.1.1.1
        - 8.8.8.8
    interfaces:
      eth0:
        dhcp: true
      eth1:
        address: $V4_PRIVATE_IP/16
        mtu: 1450
        #dhcp: false
  services:
    zerotier:
      image: dwitzig/zerotier:1.2.12
      labels:
        io.rancher.os.scope: system
      volumes:
        - /opt/zerotier-one:/var/lib/zerotier-one
      restart: always
      net: host
      devices:
        - /dev/net/tun:/dev/net/tun
      cap_add:
        - NET_ADMIN
        - SYS_ADMIN
      volumes_from:
        - system-volumes
      entrypoint: /zerotier-one
    zerotier-join:
      image: dwitzig/zerotier:1.2.12
      labels:
        io.rancher.os.scope: system
      volumes:
        - /opt/zerotier-one:/var/lib/zerotier-one
      restart: on-failure
      net: host
      entrypoint: /zerotier-cli join $NETWORK_ID
      depends_on:
        - zerotier
    k3s-server:
      image: rancher/k3s:v0.7.0
      restart: always
      net: host
      entrypoint: /wait-for-network.sh
      command: $NET_IFACE /bin/k3s server --disable-agent --kube-apiserver-arg=service-node-port-range=27017-32767 --kubelet-arg='address=0.0.0.0' --node-ip -IP- --no-deploy traefik --flannel-iface $NET_IFACE --tls-san -IP- --tls-san 127.0.0.1 --bind-address 0.0.0.0
      environment:
        - K3S_CLUSTER_SECRET=$CLUSTER_SECRET
        - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
        - K3S_KUBECONFIG_MODE=666
      volumes:
        - /opt/k3s/wait-for-network.sh:/wait-for-network.sh
        - /opt/k3s:/k3s
        - /opt/k3s-server:/var/lib/rancher/k3s
        # This is just so that we get the kubeconfig file out
        - /opt/k3s-config:/output
      ports:
        - 6443:6443
    k3s-node:
      image: rancher/k3s:v0.7.0
      restart: always
      environment:
        - K3S_CLUSTER_SECRET=$CLUSTER_SECRET
      net: host
      tmpfs:
        - /run
        - /var/run
      privileged: true
      entrypoint: /wait-for-network.sh
      command: $NET_IFACE /bin/k3s agent --node-ip -IP- --flannel-iface $NET_IFACE -s https://-IP-:6443
      volumes:
        # mount any local-store paths needed by services into the k3s node
        # - /opt/<persistant-local-store>:/opt/<persistant-local-store>
        - /opt/k3s:/k3s
        - /opt/k3s/wait-for-network.sh:/wait-for-network.sh
      depends_on:
        - k3s-server
EOF
sudo ros install -d /dev/vda -c cloud-config.yaml
sudo reboot

worker-deploy.sh:

#!/bin/bash
SSH_KEY='ssh-rsa XXX'
REGION='au'
NET_IFACE='zt0'
NETWORK_ID='zerotier-network-id'
CLUSTER_SECRET='super-secret-cluster-key'
MASTER_IP='master-zt-ip'
# Vultr metadata service
V4_PRIVATE_IP=`wget -qO- http://169.254.169.254/v1/interfaces/1/ipv4/address`
V4_PUBLIC_IP=`wget -qO- http://169.254.169.254/current/meta-data/public-ipv4`
INSTANCE_ID=`wget -qO- http://169.254.169.254/current/meta-data/instance-id`
HOSTNAME=`wget -qO- http://169.254.169.254/current/meta-data/hostname`
cat > "cloud-config.yaml" <<EOF
#cloud-config
hostname: $HOSTNAME
ssh_authorized_keys:
  - $SSH_KEY
write_files:
  - path: "/opt/zerotier-one/devicemap"
    permissions: '0600'
    owner: 999:999
    content: |
      $NETWORK_ID=$NET_IFACE
  - path: "/opt/zerotier-one/networks.d/$NETWORK_ID.local.conf"
    permissions: '0600'
    owner: 999:999
    content: |
      allowManaged=1
      allowGlobal=1
      allowDefault=1
  - path: "/opt/zerotier-one/local.conf"
    permissions: '0600'
    owner: 999:999
    content: |
      {
        "settings": {
          "interfacePrefixBlacklist": [ "veth","cni","docker","flannel","virbr" ]
        }
      }
EOF
cat >> "cloud-config.yaml" <<'EOF'
  - path: "/opt/k3s/wait-for-network.sh"
    permissions: '0755'
    # owner: 999:999
    content: |
      #!/bin/sh
      # wait-for-network.sh: wait until the given interface has an IP,
      # then exec the remaining args with -IP- replaced by that IP.
      set -e
      net=$1
      shift
      cmd=$@
      until [ -n "$(ifconfig | grep -A 1 $net | tail -1 | cut -d ":" -f 2 | cut -d " " -f 1)" ]; do
        >&2 echo "Network $net IP not available yet, waiting..."
        sleep 5
      done
      IP=$(ifconfig | grep -A 1 $net | tail -1 | cut -d ":" -f 2 | cut -d " " -f 1)
      >&2 echo "Network $net IP = $IP, continue..."
      export K3S_URL="https://$IP:6443"
      CMD=$(echo $cmd | sed -e "s/-IP-/$IP/g")
      echo $CMD
      exec $CMD
EOF
cat >> "cloud-config.yaml" <<EOF
rancher:
  sysctl:
    net.bridge.bridge-nf-call-iptables: 1
    net.ipv4.ip_forward: 1
  debug: false
  cloud_init:
    datasources:
      - ec2
  state:
    formatzero: true
    fstype: auto
    dev: LABEL=RANCHER_STATE
    autoformat:
      - /dev/vda
      - /dev/sda
  network:
    dns:
      nameservers:
        - 1.1.1.1
        - 8.8.8.8
    interfaces:
      eth0:
        dhcp: true
      eth1:
        address: $V4_PRIVATE_IP/16
        mtu: 1450
        #dhcp: false
  services:
    zerotier:
      image: dwitzig/zerotier:1.2.12
      labels:
        io.rancher.os.scope: system
      volumes:
        - /opt/zerotier-one:/var/lib/zerotier-one
      restart: always
      net: host
      devices:
        - /dev/net/tun:/dev/net/tun
      cap_add:
        - NET_ADMIN
        - SYS_ADMIN
      volumes_from:
        - system-volumes
      entrypoint: /zerotier-one
    zerotier-join:
      image: dwitzig/zerotier:1.2.12
      labels:
        io.rancher.os.scope: system
      volumes:
        - /opt/zerotier-one:/var/lib/zerotier-one
      restart: on-failure
      net: host
      entrypoint: /zerotier-cli join $NETWORK_ID
      depends_on:
        - zerotier
    k3s-node:
      image: rancher/k3s:v0.7.0
      restart: always
      environment:
        - K3S_CLUSTER_SECRET=$CLUSTER_SECRET
      net: host
      tmpfs:
        - /run
        - /var/run
      privileged: true
      entrypoint: /wait-for-network.sh
      command: $NET_IFACE /bin/k3s agent --node-ip -IP- --flannel-iface $NET_IFACE -s https://$MASTER_IP:6443
      volumes:
        # mount any local-store paths needed by services into the k3s node
        # - /opt/<persistant-local-store>:/opt/<persistant-local-store>
        - /opt/k3s:/k3s
        - /opt/k3s/wait-for-network.sh:/wait-for-network.sh
EOF
sudo ros install -d /dev/vda -c cloud-config.yaml
sudo reboot