Kubernetes / Google Cloud Platform (GCP) configurations

[[TOC]]

Google Container Engine Docs

https://cloud.google.com/container-engine/docs/

When we have more containers running on Kubernetes, we should also use multi-zone clustering: https://cloud.google.com/container-engine/docs/multi-zone-clusters
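
As a sketch of what that could look like, assuming the --additional-zones flag and illustrative zone names (not a cluster we run today):

  gcloud container clusters create kube-cluster \
    --zone europe-west1-d \
    --additional-zones europe-west1-b,europe-west1-c \
    --num-nodes=3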

https://acotten.com/post/1year-kubernetes

Size of master and master components

On GCE/GKE and AWS, kube-up automatically configures the proper VM size for your master depending on the number of nodes in your cluster. On other providers, you will need to configure it manually. For reference, the sizes we use on GCE are:

  1-5 nodes: n1-standard-1
  6-10 nodes: n1-standard-2
  11-100 nodes: n1-standard-4
  101-250 nodes: n1-standard-8
  251-500 nodes: n1-standard-16
  more than 500 nodes: n1-standard-32
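
For the "other providers" case, a minimal sketch of setting the size by hand before running kube-up; MASTER_SIZE is the variable the GCE/AWS kube-up scripts read, so treat it as an assumption for anything else:

  # assumption: the provider's kube-up scripts honor MASTER_SIZE
  export MASTER_SIZE=n1-standard-4
  cluster/kube-up.sh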

https://cloud.google.com/container-engine/docs/cluster-autoscaler
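
A hedged example of enabling the autoscaler on an existing node pool (cluster name, pool, and bounds are illustrative):

  gcloud container clusters update kube-cluster \
    --zone europe-west1-d \
    --node-pool default-pool \
    --enable-autoscaling --min-nodes=1 --max-nodes=10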

Node pools

Create a node pool for builds (it needs the cloud-platform scope, otherwise Meteor builds do not work):

  gcloud container node-pools create preemptible24h --cluster=kube-cluster --disk-size=100 --image-type=cos --machine-type=n1-standard-4 --num-nodes=3 --scopes=default,cloud-platform -z europe-west1-d --preemptible

Create with proper scopes

  gcloud container node-pools create preemptible \
   --cluster=kube-cluster --disk-size=100 --image-type=cos \
   --disk-type=pd-ssd \
   --machine-type=n1-standard-16 --num-nodes=2 \
   --scopes=default,cloud-platform,pubsub -z europe-west1-d --preemptible

Adjust scope

  gcloud container node-pools create adjust-scope \
     --cluster kube7 --zone europe-west1-d \
     --machine-type n1-standard-1 \
     --num-nodes 3 \
     --scopes https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/pubsub

Delete

  gcloud container node-pools delete default-pool \
     --cluster=kube7 --zone europe-west1-d

Ingress / Egress

https://cloud.google.com/compute/docs/vpc/firewalls#sources_or_destinations_for_the_rule
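
As a sketch of the kind of rule that page describes (rule name, port, ranges, and tags are placeholders, not rules we actually use):

  gcloud compute firewall-rules create allow-ingress-https \
    --allow=tcp:443 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=gke-kube-cluster-node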

Published: Thu 30 March 2017 in Cloud

In this post, I'll describe how to remove a particular node from a Kubernetes cluster on GKE. Why would you want to do that? In my case, I'm running jupyterhub and I need to do that as part of implementing cluster scaling. That's probably a rare need, but it helped me understand more about the GCE structures behind a Kubernetes cluster.

So let's start. The first thing you need to do is:

Drain your node

Let's look at my nodes (kl here is a shell alias for kubectl):

  $ kl get nodes
  NAME                                      STATUS    AGE
  gke-jcluster-default-pool-9cc4e660-rx9p   Ready     1d
  gke-jcluster-default-pool-9cc4e660-xr4z   Ready     2d

I want to remove rx9p. I'll first drain it:

  $ kl drain gke-jcluster-default-pool-9cc4e660-rx9p --force
  node "gke-jcluster-default-pool-9cc4e660-rx9p" cordoned
  error: pods with local storage (use --delete-local-data to override): jupyter-petko-1

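The drain stops on a pod with local storage. A follow-up sketch, re-running the same command with the --delete-local-data flag the error message itself suggests:

  $ kl drain gke-jcluster-default-pool-9cc4e660-rx9p --force --delete-local-data
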
With that, the node is drained. Next is:

Removing the GCE VM

Your Kubernetes cluster runs in an instance group. We'll need to know what this group is. Here's how to do it from the command line.

  $ export GROUP_ID=$(gcloud container clusters describe jcluster --format json | jq  --raw-output '.instanceGroupUrls[0]' | rev | cut -d'/' -f 1 | rev)
  $ echo $GROUP_ID
  gke-jcluster-default-pool-9cc4e660-grp

Let's check my instances:

  $ gcloud compute instances list
  NAME                                     ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
  gke-jcluster-default-pool-9cc4e660-rx9p  us-central1-b  n1-standard-1               10.128.0.2   104.198.174.222  RUNNING
  gke-jcluster-default-pool-9cc4e660-xr4z  us-central1-b  n1-standard-1               10.128.0.4   104.197.237.135  RUNNING

If I just run gcloud compute instances delete, that won't work! That's because the instance group has a target size of 2, so if I delete one of the machines directly, GCE will simply start a new one to replace it. Instead, I have to use the gcloud compute instance-groups managed delete-instances command, followed by gcloud compute instance-groups managed wait-until-stable if I want to wait until the job is done.

Let's see how that looks:

  $ gcloud compute instance-groups managed delete-instances $GROUP_ID \
      --instances=gke-jcluster-default-pool-9cc4e660-rx9p
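
To wait until the group has settled after the deletion, the wait-until-stable step mentioned above can follow; a minimal sketch, reusing $GROUP_ID and the us-central1-b zone from the instance listing:

  $ gcloud compute instance-groups managed wait-until-stable $GROUP_ID --zone us-central1-b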
