Taint-based GKE upgrade

GKE Upgrades

  1. Upgrade the masters. In Terraform this is done by bumping the google_container_cluster.min_master_version field to the desired version; the masters update one at a time. (A gcloud sketch of steps 1, 2, and 4 follows the list.)
  2. Create new node pools using the new version.
  3. Taint the old nodes. Pods will get scheduled onto the new nodes.
$ kubectl get nodes --no-headers -l cloud.google.com/gke-nodepool=<node-pool-name> | awk '{print $1}' | xargs -I {} kubectl taint nodes {} legacy=true:NoExecute
  4. Leave the old node pools around for a couple of weeks, but scale them to 0. They'll be available if we need to go back.
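
A minimal gcloud sketch of steps 1, 2, and 4, assuming you aren't driving the cluster through Terraform. The cluster, pool, and version names (my-cluster, new-pool, old-pool, 1.16.8-gke.15) are placeholders, not from the original notes:

# 1. Upgrade the masters to the target version (they update one at a time).
$ gcloud container clusters upgrade my-cluster --master --cluster-version 1.16.8-gke.15

# 2. Create a replacement node pool on the new version.
$ gcloud container node-pools create new-pool --cluster my-cluster --node-version 1.16.8-gke.15 --num-nodes 3

# 4. After pods have migrated, scale the old pool to 0 but keep it around.
$ gcloud container clusters resize my-cluster --node-pool old-pool --num-nodes 0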

What if you need to roll back?

Increase the node count on the old, legacy-versioned node pool, then taint the new node pools. The pods will get scheduled back onto the old nodes. Note that nodes created by scaling the old pool up from 0 come up fresh, without the legacy taint; any old nodes that are still running need the taint removed by hand.
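
A sketch of that rollback, reusing the placeholder names from the upgrade sketch above:

# Scale the old pool back up; nodes created from 0 come up without the legacy taint.
$ gcloud container clusters resize my-cluster --node-pool old-pool --num-nodes 3

# Remove the legacy taint from any old nodes that were never deleted.
$ kubectl get nodes --no-headers -l cloud.google.com/gke-nodepool=old-pool | awk '{print $1}' | xargs -I {} kubectl taint nodes {} legacy-

# Taint the new pool's nodes so pods get evicted back onto the old nodes.
$ kubectl get nodes --no-headers -l cloud.google.com/gke-nodepool=new-pool | awk '{print $1}' | xargs -I {} kubectl taint nodes {} legacy=true:NoExecute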

I want to test this on nonprod workloads first.

Prod workloads should be on their own node pools anyway so they're isolated. If they are, you can run this whole procedure against the nonprod pools first without touching prod.
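
One way to keep a prod workload pinned to its own pool is a nodeSelector on GKE's built-in pool label; the deployment and pool names here are made-up examples:

$ kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"nodeSelector":{"cloud.google.com/gke-nodepool":"prod-pool"}}}}}'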

Advantages

  • no redeploying applications into a new cluster
  • no updating DNS records
  • cluster name does not change every time an upgrade is performed

What if I want more control over the eviction process?

Then switch to a NoSchedule taint and drain each node. Unlike NoExecute, which evicts pods immediately, draining respects pod disruption budgets.

$ kubectl get nodes --no-headers -l cloud.google.com/gke-nodepool=<node-pool-name> | awk '{print $1}' | xargs -I {} kubectl taint nodes {} legacy=true:NoSchedule

$ kubectl get nodes --no-headers -l cloud.google.com/gke-nodepool=<node-pool-name> | awk '{print $1}' | xargs -I {} kubectl drain {} --ignore-daemonsets --delete-local-data --force