Kubernetes Master Nodes Backup for Kops on AWS - A step-by-step Guide

Those who have been using kops for a while will know that the upgrade from 1.11 to 1.12 carries extra risk, as it also upgrades etcd2 to etcd3.

This upgrade is disruptive to the control plane (master nodes). Although the disruption is brief, we take it very seriously because nearly all of Buffer's production services run on this single cluster. We felt we needed a more thorough backup process than our existing Heptio Velero setup.

To my surprise, my Google searches didn't yield any useful results on how to carry out the backup steps. To be fair, there are a few articles on backing up master nodes created by kubeadm, but nothing concrete for kops specifically. We knew we had to figure things out on our own.

We would love to share our experience with the community, and to hear what others have had to do for this upgrade. Now, let's jump in!

Creating backups

Locate the master nodes and note the attached devices

Yeah, let's do some backups. But where? We have found the easiest way to back up master nodes is to back up their EBS volumes. This should be easy, right? But as with everything in tech, there are always smaller bits and pieces to watch out for, and something as complex as Kubernetes + kops + AWS is unsurprisingly no exception. To locate the right EBS volumes, let's look at the screenshot below.

http://hi.buffer.com/0f811b62bd5a/%255B63bc1e9c969b5ef5b874583982a24569%255D_Image%2525202019-07-24%252520at%2525202.02.08%252520PM.png
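
The same lookup can be sketched from the AWS CLI. This dry-run script (not from the original walkthrough) only echoes the commands; the cluster name is the article's example, and the tag keys are the ones kops applies to the etcd volumes.

```shell
#!/usr/bin/env bash
# Dry-run sketch: list the etcd EBS volumes kops tagged for this cluster.
# CLUSTER is the article's example name; adjust for your own cluster.
CLUSTER="steven.buffer-k8s.com"

# Each master has two volumes: one tagged for etcd-main, one for etcd-events.
cmds=()
for store in main events; do
  cmds+=("aws ec2 describe-volumes --filters Name=tag:KubernetesCluster,Values=${CLUSTER} Name=tag-key,Values=k8s.io/etcd/${store} --query 'Volumes[].[VolumeId,AvailabilityZone]' --output text")
done

# Echo instead of executing; remove the printf and run each command for real.
printf '%s\n' "${cmds[@]}"
```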

It's important to note that there are two block devices for each master node, and both matter: one is for etcd-main and the other is for etcd-events.

Creating a snapshot from each volume (3 masters x 2 devices each = 6 snapshots)

Now, let's create a snapshot of each volume. Since our Kubernetes cluster runs 3 master nodes for high availability, we will need to do this 6 times! From the screenshot you should see all the tags assigned to each volume. They are important because kops relies on them to attach volumes back to the master nodes. For now, let's just acknowledge this; I will provide more details very soon.

http://hi.buffer.com/60e0275f9ccb/%255Bb4551d0bb80c620caa27153e2e22ebac%255D_Image%2525202019-07-24%252520at%2525202.36.14%252520PM.png

Rinse and repeat six times, and we should have six snapshots ready.

http://hi.buffer.com/38c2ff4c1bab/%255B35297af8e3c15339622bb114365aff93%255D_Image%2525202019-07-24%252520at%2525202.49.29%252520PM.png
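
If you prefer the CLI over clicking through the console six times, a loop like the following sketch works. The volume IDs are placeholders (substitute the six IDs from your console), and the commands are echoed as a dry run.

```shell
#!/usr/bin/env bash
# Dry-run sketch: snapshot all six etcd volumes (3 masters x 2 volumes each).
CLUSTER="steven.buffer-k8s.com"
# Placeholder volume IDs; replace with the real IDs of your etcd volumes.
volumes="vol-0aaa vol-0bbb vol-0ccc vol-0ddd vol-0eee vol-0fff"

snap_count=0
for vol in $volumes; do
  # Drop the echo to actually create each snapshot.
  echo aws ec2 create-snapshot --volume-id "$vol" \
    --description "etcd backup of ${vol} for ${CLUSTER}"
  snap_count=$((snap_count + 1))
done
```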

Now, let's pause and talk about the tags I mentioned earlier. It's important that each volume has the right tags. Here is a table that will come in handy when creating the volumes. Yes, that means 30 tags (6 volumes x 5 tags each) for a 3-master-node setup.

Key                                           Value
KubernetesCluster                             steven.buffer-k8s.com
Name                                          b.etcd-(main/events).steven.buffer-k8s.com
k8s.io/etcd/(main/events)                     b/b,c,d
k8s.io/role/master                            1
kubernetes.io/cluster/steven.buffer-k8s.com   owned
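
As a sketch, the five tags for one volume could also be applied with aws ec2 create-tags. The volume ID is a placeholder, and the master letter (b) and zone list (b,c,d) mirror the example values in the table; note the extra quoting around the value that contains commas, since commas otherwise separate fields in the CLI shorthand.

```shell
#!/usr/bin/env bash
# Dry-run sketch: apply the five tags from the table above to one backup volume.
CLUSTER="steven.buffer-k8s.com"
VOLUME_ID="vol-0aaa"   # placeholder; use your backup volume's ID
STORE="main"           # or "events" for the etcd-events volume
MASTER="b"             # this master's availability-zone letter

# The inner quotes around "b/b,c,d" protect the commas inside the tag value.
tag_cmd="aws ec2 create-tags --resources ${VOLUME_ID} --tags \
Key=KubernetesCluster,Value=${CLUSTER} \
Key=Name,Value=${MASTER}.etcd-${STORE}.${CLUSTER} \
'Key=k8s.io/etcd/${STORE},Value=\"${MASTER}/b,c,d\"' \
Key=k8s.io/role/master,Value=1 \
Key=kubernetes.io/cluster/${CLUSTER},Value=owned"

# Echoed as a dry run; eval or paste the command to run it for real.
echo "$tag_cmd"
```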

Creating volumes

As the screenshot shows, it's important to make sure each volume is created in the same Availability Zone as the intended master node. Otherwise, they won't be able to find each other. For now, we will leave one tag value as a stub, since the existing volumes are still attached.

http://hi.buffer.com/1fee9608e841/%255B3a774379989091d111e1937a1fdd8fdb%255D_Image%2525202019-07-24%252520at%2525203.02.10%252520PM.png

After this is done, we should have six backup volumes ready to go as soon as we swap the stub value for steven.buffer-k8s.com. That concludes our backups.

http://hi.buffer.com/945319abf4c5/%255Bc58bc210f162c7ef3822c2267da1b8ca%255D_Image%2525202019-07-24%252520at%2525203.26.19%252520PM.png
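
The volume-creation step can be sketched from the CLI as well. The snapshot IDs and Availability Zones below are placeholders mirroring a 3-master, 2-volume layout; the key point is that each new volume must be created in its master's AZ.

```shell
#!/usr/bin/env bash
# Dry-run sketch: create one backup volume from each snapshot, in the master's AZ.
# Placeholder snapshot:az pairs; substitute your snapshot IDs and real AZs.
pairs="snap-0a:us-east-1a snap-0b:us-east-1a \
snap-0c:us-east-1b snap-0d:us-east-1b \
snap-0e:us-east-1c snap-0f:us-east-1c"

vol_count=0
for p in $pairs; do
  snap="${p%%:*}"   # part before the colon: snapshot ID
  az="${p##*:}"     # part after the colon: Availability Zone
  # Drop the echo to actually create the volume.
  echo aws ec2 create-volume --snapshot-id "$snap" \
    --availability-zone "$az" --volume-type gp2
  vol_count=$((vol_count + 1))
done
```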

[Optional] Upgrading the cluster

This step is totally optional. Its only purpose is to demonstrate how to revert a bad cluster upgrade (1.11 to 1.12) using the backup/restore strategy described in this article.

Note that the master nodes are now on 1.12, and our intention is to roll everything back to 1.11. Let's see if we can do that.

http://hi.buffer.com/c139e03f9a75/%255Bcc3ae4d16c9642e26d1951f1f990b097%255D_Image%2525202019-07-24%252520at%2525203.39.52%252520PM.png

Restoring from backups

Detach the existing volumes and delete them. This will break all master nodes for now

It should be obvious by now that in order to restore from backups, we first need to remove all existing attached volumes. The screenshots below show where this is done.

http://hi.buffer.com/da22a4960323/%255B5d33ddcbef3677c45f5340b0127c5ef3%255D_Image%2525202019-07-24%252520at%2525203.45.39%252520PM.png

http://hi.buffer.com/db3aea1d1c1b/%255B9f43149c4cbdcef97645a5737e10c171%255D_Image%2525202019-07-24%252520at%2525203.49.32%252520PM.png
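
A CLI sketch of the same cleanup, with placeholder volume IDs. The wait step matters: a volume cannot be deleted until it has fully detached, so the script waits for each volume to report available first. Commands are echoed as a dry run.

```shell
#!/usr/bin/env bash
# Dry-run sketch: detach and delete the six currently attached etcd volumes.
# Placeholder IDs; replace with the IDs of the old volumes on your masters.
old_volumes="vol-0aaa vol-0bbb vol-0ccc vol-0ddd vol-0eee vol-0fff"

del_count=0
for vol in $old_volumes; do
  echo aws ec2 detach-volume --volume-id "$vol"
  # Deletion fails on an attached volume, so wait for the detach to finish.
  echo aws ec2 wait volume-available --volume-ids "$vol"
  echo aws ec2 delete-volume --volume-id "$vol"
  del_count=$((del_count + 1))
done
```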

Add the missing tag value to the backup volumes

With the old volumes detached and deleted, the backup volumes created earlier are ready to be attached to the master nodes. But first, we need to add back the right tag value, as the screenshot shows.

http://hi.buffer.com/f7124e636224/%255B51a0f1eca889489e702c80de9b753f78%255D_Image%2525202019-07-24%252520at%2525203.52.51%252520PM.png

We are now on our final step. Just hang in there! For the master nodes to pick up the backup volumes, we need to recreate all of them. This is as simple as terminating the nodes, because kops will automatically spin up new nodes and attach the volumes we created.

http://hi.buffer.com/df841676c627/%255B66932bd93e72d25a9e51c988ffd111fe%255D_Image%2525202019-07-24%252520at%2525205.46.54%252520PM.png
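
As a final sketch, the masters can be found via the role tag kops applies and then terminated from the CLI; their Auto Scaling groups bring up replacements that pick up the tagged backup volumes. The instance IDs below are placeholders, and everything is echoed as a dry run.

```shell
#!/usr/bin/env bash
# Dry-run sketch: find and terminate the master instances so kops replaces them.
CLUSTER="steven.buffer-k8s.com"

# Look up the running masters by the tags kops applies.
echo aws ec2 describe-instances \
  --filters "Name=tag:k8s.io/role/master,Values=1" \
            "Name=tag:KubernetesCluster,Values=${CLUSTER}" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].InstanceId' --output text

# Placeholder instance IDs from the query above.
term_cmd="aws ec2 terminate-instances --instance-ids i-0aaa i-0bbb i-0ccc"
echo "$term_cmd"
```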

Profit! All nodes are back on 1.11.

http://hi.buffer.com/972a8a78a368/%255Bb93cae36a9cfa200d5acc8005a06283f%255D_Image%2525202019-07-24%252520at%2525205.49.21%252520PM.png

Closing words

Thanks for bearing with me through this long post and its many steps. I believe we are at a very interesting stage of Kubernetes adoption. While it has made amazing progress in the last few years, the ecosystem is still catching up. For the longest time, CI/CD on Kubernetes was a challenge; then we faced the issue of observability. Fortunately, vendors like Datadog et al. are continuously rolling out new offerings to address all kinds of challenges. Buffer, as an early adopter of Kubernetes, is truly blessed to be in a position to witness these transitions and to contribute what we can to the community.

If you have any thoughts or questions, feel free to hit me up on Twitter. Until then, I hope you have fun with Kubernetes!
