Kubernetes clusters using preemptible instances

Motivation

People experimenting with Kubernetes clusters on GKE don't necessarily have the money to keep a full cluster running at all times. GKE clusters can be resized easily, but you still pay the full instance cost whenever the cluster is up.

Google has added preemptible instances, which are ideal for applications that see usage spikes or for simply experimenting with scaling. Preemptible instances last 24 hours at most and can be terminated by Google with 30 seconds' advance notice.

Howto

Here's how I've set up a Kubernetes cluster using a mix of normal and preemptible instances. It should be possible to have a preemptible-only cluster, but I haven't tried it.

Creating a preemptible pool

Create an additional pool

The first step is to create a preemptible pool and make it so that the instances belonging to it join an existing Kubernetes cluster.

gcloud container node-pools create preemptible-reserve --machine-type=n1-standard-1 --num-nodes=1

Unfortunately we need to create at least one VM here, as 0 is not accepted.
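
If your gcloud configuration doesn't already default to the right cluster and zone, you can pass them explicitly; CLUSTER and us-east1-b here are placeholders matching the examples later in this guide:

gcloud container node-pools create preemptible-reserve --cluster=CLUSTER --zone=us-east1-b --machine-type=n1-standard-1 --num-nodes=1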

kubectl get node -o wide
NAME                                          STATUS    AGE
gke-CLUSTER-default-pool-0d56be95-f0eo         Ready     7d
gke-CLUSTER-default-pool-0d56be95-xt05         Ready     7d
gke-CLUSTER-preemptible-reserve-d5a5e03b-3kjt   Ready     42s

At this point we have two pools.
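
You can also confirm the pools from the gcloud side (CLUSTER again is a placeholder for your cluster name):

gcloud container node-pools list --cluster=CLUSTER --zone=us-east1-b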

Make the pool preemptible

Make the template preemptible

In the dashboard, go to the instance templates page. Select and copy the template whose name contains preemptible-reserve. We need to make a copy because we're not allowed to edit the original. This may be because the template is in use; I haven't verified.

While on the copy page, open the Management/Disk/Network section. The default tab (Management) has an Availability policy / Preemptibility setting. Switch it to On.

Click on Create. The new template's name will be OLD_NAME-1.

If you go back to the templates page, you'll see that the old template is in use by an instance group, while the new one is unused.
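
If you prefer the command line, you can check that the copy really has preemptibility turned on (OLD_NAME-1 stands for the name of your copied template):

gcloud compute instance-templates describe OLD_NAME-1 --format="value(properties.scheduling.preemptible)"

It should print True for the new template.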

Make the pool use the new template

Click on the instance group that uses the old template, then click Edit at the top. In the template selector, pick the '-1' version.

IMPORTANT: the page will warn you that this change only applies to newly created VMs, and it will link to a gcloud command that recreates existing VMs with the new setting. Remember that we have one lone VM from when we created the new pool. You have two options:

  • destroy the machine (see later). New machines will be preemptible.
  • execute the command given on this page to convert the existing VM to preemptible. The command looks like:
gcloud compute --project "PROJECT-NAME" instance-groups managed recreate-instances "GROUP" --zone "us-east1-b" --instance "INSTANCE-NAME"

so even if you didn't write it down, you should be able to reproduce it.

Save the edited group

At this point you can convert the existing instance as explained above, or resize the preemptible-reserve pool to 0 and then back to 1 (I haven't tested this, but it should work).
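
The resize route would look roughly like this (untested sketch, using the same CLUSTER placeholder and pool name as above):

gcloud container clusters resize CLUSTER --node-pool preemptible-reserve --size 0
gcloud container clusters resize CLUSTER --node-pool preemptible-reserve --size 1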

Either way, if you click on the new instance, you'll see the warning:

This instance is preemptible and will live at most 24 hours. Learn more
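
You can verify the same thing from the command line (INSTANCE-NAME and the zone are placeholders):

gcloud compute instances describe "INSTANCE-NAME" --zone "us-east1-b" --format="value(scheduling.preemptible)"

It prints True for a preemptible instance.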

Manage the preemptible pool

Once you have a preemptible pool you can resize it from 0 to anything up to your regional quota.

resize reserve pool to 0

gcloud container clusters resize CLUSTER --node-pool preemptible-reserve --size 0

resize reserve pool to 2

gcloud container clusters resize CLUSTER --node-pool preemptible-reserve --size 2

This is also how you "restart" the reserve if you previously resized it to 0.
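
After resizing back up, the new preemptible nodes should register with the cluster after a short while; the same command as before shows them joining:

kubectl get node -o wide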
