Restoring RKE cluster with incorrect or missing rkestate file

Overview

When using RKE 0.2.0 and newer, if you have restored a cluster with the incorrect rkestate file you will end up in a state where your infrastructure pods will not start. This includes all pods in the kube-system, cattle-system and ingress-nginx namespaces. Because these core pods are not running, none of your workload pods will be able to function correctly. If you find yourself in this situation, you can use the directions below to fix the cluster.
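To confirm you are in this state, list the pods in the affected namespaces; most of them will be stuck in states such as CrashLoopBackOff, Error or Terminating. This check mirrors the watch command used later in the Recovery section.

kubectl get po --all-namespaces | grep -i "cattle-system\|kube-system\|ingress-nginx"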

Recovery

  1. Delete all service-account-token secrets in the kube-system, cattle-system and ingress-nginx namespaces (a quick verification check follows the commands below).
{
kubectl get secret -n cattle-system | awk '{ if ($2 == "kubernetes.io/service-account-token") system("kubectl -n cattle-system delete secret " $1) }'
kubectl get secret -n kube-system | awk '{ if ($2 == "kubernetes.io/service-account-token") system("kubectl -n kube-system delete secret " $1) }'
kubectl get secret -n ingress-nginx | awk '{ if ($2 == "kubernetes.io/service-account-token") system("kubectl -n ingress-nginx delete secret " $1) }'
}
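Kubernetes recreates these service-account-token secrets automatically. To confirm that fresh tokens have been generated, for example in cattle-system:

kubectl get secret -n cattle-system | grep kubernetes.io/service-account-token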
  2. Restart docker on all nodes currently in the cluster (you should really only have one node in the cluster if you just restored); a sketch for handling multiple nodes follows the command below.
systemctl restart docker
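If the cluster does have more than one node, a minimal sketch for restarting docker on each of them over SSH; node1 and node2 are placeholder hostnames, so substitute your actual node addresses.

for node in node1 node2; do
  ssh "$node" 'sudo systemctl restart docker'
done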
  3. Force delete all pods stuck in CrashLoopBackOff, Terminating or Error states (an equivalent one-liner follows the commands below).
{
kubectl get po --all-namespaces | awk '{ if ($4 =="CrashLoopBackOff") system("kubectl delete po --force --grace-period=0 -n " $1 " " $2) }'
kubectl get po --all-namespaces | awk '{ if ($4 =="Terminating") system("kubectl delete po --force --grace-period=0 -n " $1 " " $2) }'
kubectl get po --all-namespaces | awk '{ if ($4 =="Error") system("kubectl delete po --force --grace-period=0 -n " $1 " " $2) }'
}
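The three commands above can also be collapsed into a single pass with an awk regex match; this is just an alternative form of the same cleanup.

kubectl get po --all-namespaces | awk '{ if ($4 ~ /^(CrashLoopBackOff|Terminating|Error)$/) system("kubectl delete po --force --grace-period=0 -n " $1 " " $2) }'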
  4. Once the force delete has finished, restart docker again to clear out any stale containers left behind by the force delete commands above.
systemctl restart docker
  5. You may have to delete the service account tokens or the pods more than once. After you have gone through the guide once, monitor pod statuses with a watch command in a separate terminal as shown below.
watch -n1 'kubectl get po --all-namespaces | grep -i  "cattle-system\|kube-system\|ingress-nginx"'

If you see any pods still in an error state, you can describe them to get an idea of what is wrong (an example command follows the output below). Most likely you'll see an error like the following, which indicates that you need to delete its service account tokens again.

  Warning  FailedMount  7m23s (x126 over 4h7m)  kubelet, 18.219.82.148  MountVolume.SetUp failed for volume "rancher-token-tksxr" : secret "rancher-token-tksxr" not found
  Warning  FailedMount  114s (x119 over 4h5m)   kubelet, 18.219.82.148  Unable to attach or mount volumes: unmounted volumes=[rancher-token-tksxr], unattached volumes=[rancher-token-tksxr]: timed out waiting for the condition
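For example, to describe every pod in cattle-system that is not Running, following the same awk pattern as the commands above:

kubectl -n cattle-system get po | awk 'NR > 1 && $3 != "Running" { system("kubectl -n cattle-system describe po " $1) }'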

I usually just delete the service account tokens again for that one namespace so that healthy pods in other namespaces are not disturbed. Once the service account tokens are deleted, run a delete pod command for just the namespace with pods still in an error state. cattle-node-agent and cattle-cluster-agent depend on the Rancher pod being online, so you can ignore those until the very end. Once the Rancher pods are stable, I usually go back and delete all of the agent pods again so that they quickly come back online, as shown in the example below.
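A sketch for bouncing the agents by pod name, assuming the default cattle-node-agent and cattle-cluster-agent pod name prefixes:

kubectl -n cattle-system get po | awk '/^cattle-node-agent|^cattle-cluster-agent/ { system("kubectl -n cattle-system delete po " $1) }'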
