Kubernetes etcd backup and restore...
#!/usr/bin/env bash
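# (presumably the ./bin/setup-haproxy script referenced by the Vagrantfile below)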
sudo apt install haproxy -y
cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg 
frontend kubernetes
    bind 192.168.2.10:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 192.168.2.20:6443 check fall 3 rise 2
    server master-2 192.168.2.21:6443 check fall 3 rise 2
    server master-3 192.168.2.22:6443 check fall 3 rise 2
EOF
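
HAProxy usually needs a restart after the config is written; the listener can then be checked from the load balancer (a quick sanity check, not part of the original script):

 sudo systemctl restart haproxy
 sudo ss -tlnp | grep 6443   # the frontend should be listening on 192.168.2.10:6443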

#!/usr/bin/env bash
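# (presumably the ./bin/setup-hosts script referenced by the Vagrantfile below)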
cat <<EOF > /etc/hosts
192.168.2.10 loadbalancer
192.168.2.20 controller-0
192.168.2.21 controller-1
192.168.2.22 controller-2
192.168.2.30 worker-0
192.168.2.31 worker-1
EOF
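
A quick way to confirm the entries took effect on a node:

 getent hosts loadbalancer controller-0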

On a single node, run this to save a snapshot. The snapshot can then be shared among nodes during the restore:

export ETCDCTL_API=3
export X=/etc/kubernetes/pki/etcd
etcdctl snapshot save --cacert=$X/ca.crt --cert=$X/server.crt --key=$X/server.key /var/vm-shared/etcdsnappy

Note that the snapshot is written to /var/vm-shared/etcdsnappy, where /var/vm-shared is a VirtualBox shared folder, so the other nodes can read it during the restore.
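
Before restoring, the snapshot can be sanity-checked with etcdctl (same ETCDCTL_API=3 environment as above; this check is not in the original notes):

 etcdctl snapshot status /var/vm-shared/etcdsnappy --write-out=table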

Run the following on each controller node. This assumes the hostname is one of:

  • controller-0
  • controller-1
  • controller-2
export ETCDCTL_API=3
export X=/etc/kubernetes/pki/etcd
# --name must match a name in the --initial-cluster param
# --initial-cluster-token can be any old thing, but must be a different name from `etcd-default`
# --initial-advertise-peer-urls: I might have added https://127.0.0.1:2380 as well to make this work
etcdctl snapshot restore \
        --cacert=$X/ca.crt \
        --cert=$X/server.crt \
        --key=$X/server.key \
        /var/vm-shared/etcdsnappy \
        --name=$(hostname) \
        --initial-cluster-token=phoenix \
        --initial-cluster=controller-0=https://192.168.2.20:2380,controller-1=https://192.168.2.21:2380,controller-2=https://192.168.2.22:2380 \
        --initial-advertise-peer-urls=https://$(hostname -i):2380 \
        --data-dir=/root/phoenix
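
The restore writes a brand-new data directory rather than touching /var/lib/etcd; a quick check that it produced something sensible (path matches the --data-dir above, check not in the original notes):

 ls /root/phoenix/member
 # expect the usual etcd layout: snap/ and wal/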

Here are the updates I made to each node's /etc/kubernetes/manifests/etcd.yaml:

node-changes-controller-0:
 19a20
>     - --initial-cluster-token=phoenix
21c22
<     - --initial-cluster=controller-0=https://192.168.2.20:2380
---
>     - --initial-cluster=controller-0=https://192.168.2.20:2380,controller-1=https://192.168.2.21:2380,controller-2=https://192.168.2.22:2380
59c60
<       path: /var/lib/etcd
---
>       path: /root/phoenix


node-changes-controller-1:
 19a20
>     - --initial-cluster-token=phoenix
21d21
<     - --initial-cluster=controller-0=https://192.168.2.20:2380,controller-1=https://192.168.2.21:2380
23a24
>     - --initial-cluster=controller-0=https://192.168.2.20:2380,controller-1=https://192.168.2.21:2380,controller-2=https://192.168.2.22:2380
60c61
<       path: /var/lib/etcd
---
>       path: /root/phoenix  -> this is just the volume mount on the host that changes


node-changes-controller-2:
 20,21c20,21
<     - --initial-advertise-peer-urls=https://192.168.2.22:2380
<     - --initial-cluster=controller-0=https://192.168.2.20:2380,controller-1=https://192.168.2.21:2380,controller-2=https://192.168.2.22:2380
---
>     - --initial-cluster-token=phoenix
>     - --initial-cluster=controller-0=https://192.168.2.20:2380,controller-1=https://192.168.2.21:2380,controller-2=https://192.168.2.22:2380
60c60
<       path: /var/lib/etcd
---
>       path: /root/phoenix  -> this is just the volume mount on the host that changes
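
After updating all three manifests (kubelet restarts the etcd static pods on its own), the cluster is worth sanity-checking from any controller. This check is my addition, using the same cert paths as above:

 export ETCDCTL_API=3
 export X=/etc/kubernetes/pki/etcd
 etcdctl --cacert=$X/ca.crt --cert=$X/server.crt --key=$X/server.key \
         --endpoints=https://127.0.0.1:2379 member list
 etcdctl --cacert=$X/ca.crt --cert=$X/server.crt --key=$X/server.key \
         --endpoints=https://192.168.2.20:2379,https://192.168.2.21:2379,https://192.168.2.22:2379 endpoint health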

On the first controller/master, add --pod-network-cidr and make the env.IPADDR_RANGE param match it. The reason is weaveworks/weave#3363 (comment).

kubeadm init --apiserver-advertise-address=$(hostname -i) \
       --pod-network-cidr=10.32.0.0/12 \
       --upload-certs \
       --control-plane-endpoint=loadbalancer:6443 \
       -v9
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPADDR_RANGE=10.32.0.0/12"

--apiserver-advertise-address must be used if you have more than one network adapter, otherwise kubeadm keeps trying to use the wrong interface to talk to the other nodes. --control-plane-endpoint stays pointed at the load balancer so that the certs are set up correctly.
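
Once Weave is applied, a quick check (not in the original notes) that the node and the CNI pods come up:

 kubectl get nodes
 kubectl get pods -n kube-system | grep weave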

Remember to set up the hostnames in /etc/hosts

For additional master nodes, remember to add --apiserver-advertise-address again:

kubeadm join loadbalancer:6443 \
        --apiserver-advertise-address=$(hostname -i) \
        --token ik0o8a.vmuhq0nzmxmevjx5 \
        --discovery-token-ca-cert-hash sha256:5dc44bef6cfdb7f70fe04fbc90c2815c7da84c619ff8f4fa65d2feb3b26aa522 \
        --control-plane \
        --certificate-key c64b14cf1b46de2c3ee0ca6f657cb8e6e379aa76a35508daa3eaf01d95813a81
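
For the worker nodes the join is the same idea minus the control-plane flags; the token and hash come from the kubeadm init output (placeholders below, not values from these notes):

 kubeadm join loadbalancer:6443 \
         --token <token-from-kubeadm-init> \
         --discovery-token-ca-cert-hash sha256:<hash-from-kubeadm-init>

The Vagrantfile for the lab environment follows.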
Vagrant.configure("2") do |config|
  #config.vm.provider "virtualbox" do |vb|
  #  vb.memory="512"
  #end
  config.vm.box_check_update = false

  # Provision Load Balancer Node
  config.vm.define "loadbalancer" do |node|
    node.vm.box = "debian/buster64"
    node.vm.provider "virtualbox" do |vb|
      vb.name = "kubernetes-ha-lb"
      vb.memory = 512
      vb.cpus = 1
    end
    node.vm.hostname = "loadbalancer"
    node.vm.network :private_network, ip: "192.168.2.10"
    #node.vm.network "forwarded_port", guest: 22, host: 2730
    node.vm.provision "shell", path: "./bin/setup-hosts"
    node.vm.provision "shell", path: "./bin/setup-haproxy"
  end

  (0..2).each do |n|
    config.vm.define "controller-#{n}" do |controller|
      controller.vm.network :private_network, ip: "192.168.2.2#{n}"
      #controller.vm.box = "debian/buster64"
      controller.vm.box = "k8s-1.18.0"
      #controller.vm.network "forwarded_port", guest: 22, host: "#{2710 + n}"
      #node.vm.network :private_network, ip: IP_NW + "#{MASTER_IP_START + i}"
      #node.vm.network "forwarded_port", guest: 22, host: "#{2710 + i}"
      controller.vm.hostname = "controller-#{n}"
      controller.vm.provider "virtualbox" do |vb|
        vb.memory = "1024"
        vb.cpus = 2
      end
      controller.vm.synced_folder ".", "/var/vm-shared", create: true
      controller.vm.provision "shell", path: "./bin/setup-hosts"
      controller.vm.provision "shell", path: "./bin/load-images"
      controller.vm.provision "shell", path: "./bin/convenience"
    end
  end

  (0..1).each do |n|
    config.vm.define "worker-#{n}" do |worker|
      worker.vm.box = "k8s-1.18.0"
      #worker.vm.box = "debian/buster64"
      worker.vm.network :private_network, ip: "192.168.2.3#{n}"
      #worker.vm.network "forwarded_port", guest: 22, host: "#{2810 + n}"
      worker.vm.hostname = "worker-#{n}"
      worker.vm.provider "virtualbox" do |vb|
        vb.memory = "512"
        vb.cpus = 1
      end
      worker.vm.synced_folder ".", "/var/vm-shared", create: true
      worker.vm.provision "shell", path: "./bin/setup-hosts"
      worker.vm.provision "shell", path: "./bin/load-images"
      worker.vm.provision "shell", path: "./bin/convenience"
    end
  end
  #config.vm.provision "shell", path: "provision.sh"
end
# NOTE
# The k8s-1.18.0 box is just a Debian image with kubeadm, kubelet, kubectl and docker set up
# as per the kubernetes.io install docs (it also enables the extra kernel module those docs ask for).
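
For completeness, a rough sketch of how such a base box could be provisioned; the repo URL, package pins and module name below follow the 2020-era kubernetes.io install guide and are assumptions, not taken from these notes:

 #!/usr/bin/env bash
 # Sketch only: turn a plain debian/buster64 VM into a "k8s-1.18.0"-style base box
 set -euo pipefail

 # kernel module + sysctl that the install docs call for
 modprobe br_netfilter
 echo 'br_netfilter' > /etc/modules-load.d/k8s.conf
 echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
 sysctl --system

 apt-get update
 apt-get install -y docker.io apt-transport-https curl

 curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
 echo 'deb https://apt.kubernetes.io/ kubernetes-xenial main' > /etc/apt/sources.list.d/kubernetes.list
 apt-get update
 apt-get install -y kubelet=1.18.0-00 kubeadm=1.18.0-00 kubectl=1.18.0-00
 apt-mark hold kubelet kubeadm kubectl

 # afterwards, on the host: vagrant package --output k8s-1.18.0.box && vagrant box add k8s-1.18.0 k8s-1.18.0.box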