Kubernetes Cheatsheet
  • gcloud config list: List the current gcloud configuration.

E.g. output:

[compute]
zone = asia-east1-a

[core]
account = vish@example.com
disable_usage_reporting = False
project = gcloud-testing-

Your active configuration is: [example]
  • gcloud config set [ARGS]: Set a configuration value. Use --help to list the configuration values you can set.
  • E.g. gcloud config set compute/zone asia-east1-a sets the zone property in the compute section to asia-east1-a.
  • gcloud compute zones list: List all compute zones. The zone set above must be one of the zones in this command's output.
  • gcloud container clusters create [cluster]: Create a Google Cloud Container cluster for use with Kubernetes.
  • gcloud container clusters list: List Google Cloud Container Clusters.

E.g. output:

NAME       LOCATION      MASTER_VERSION  MASTER_IP  MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
petclinic  asia-east1-a  1.7.8-gke.0     35.1.1.73  n1-standard-1  1.7.8-gke.0   3          RUNNING

Google Cloud Quickstart for Kubernetes

reference

Create a Google Cloud Cluster

# Set project name, email etc.
gcloud init 

# Go to the console and add a credit card.
# Enable billing.

# Set compute zone
gcloud config set compute/zone asia-east1-a

# create cluster named petclinic
gcloud container clusters create petclinic 

# Generate a kubeconfig entry in your environment (use the cluster name created above).
gcloud container clusters get-credentials petclinic
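
To confirm that the kubeconfig entry works and kubectl now points at the new cluster, you can run a quick check (not part of the original quickstart, just a sanity check; assumes the cluster above was named petclinic):

# Show which context kubectl is using, then list the cluster's nodes.
kubectl config current-context
kubectl get nodes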

Create and run a Kubernetes 'Deployment' for a Dockerized application

Make sure kubectl can see the cluster. The output below should point to the cluster's IP address.

kubectl cluster-info

E.g. output:

Kubernetes master is running at https://35.185.138.73
GLBCDefaultBackend is running at https://35.185.138.73/api/v1/namespaces/kube-system/services/default-http-backend/proxy
Heapster is running at https://35.185.138.73/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://35.185.138.73/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://35.185.138.73/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

Run a Dockerized application in a pod.

kubectl run tomcat-petclinic --image=docker.io/savishy/tomcat-petclinic:latest --port 8080

E.g. output:

deployment "tomcat-petclinic" created

Wait until the deployment becomes available. Keep re-running the following commands:

kubectl get deployments
kubectl get pods

You will see output like this while the deployment is not yet ready:

$ kubectl get pods
NAME                               READY     STATUS              RESTARTS   AGE
tomcat-petclinic-375607832-gw6gp   0/1       ContainerCreating   0          1m

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tomcat-petclinic   1         1         1            0           1m

Once the deployment is ready you will see:


$ kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
tomcat-petclinic-375607832-gw6gp   1/1       Running   0          2m

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tomcat-petclinic   1         1         1            1           2m

Access the application via a Load Balancer

Create a load-balancer for the deployment tomcat-petclinic:

kubectl expose deployment tomcat-petclinic --type="LoadBalancer"

Wait for an external IP to show up in the output of the command below:

kubectl get service tomcat-petclinic

E.g.

$ kubectl get service tomcat-petclinic
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
tomcat-petclinic   LoadBalancer   10.39.252.121   35.194.224.53   8080:31394/TCP   1m

Now access the application on http://35.194.224.53:8080!
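
By default, kubectl expose reuses the container port defined on the deployment. If you would rather have the service listen on a different port (for example plain HTTP on 80) and forward to the container's 8080, a hedged variant is:

# Illustrative only: serve on port 80 externally, forward to the container's 8080.
kubectl expose deployment tomcat-petclinic --type=LoadBalancer --port=80 --target-port=8080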

Pause/Resume the deployment

reference

Check the current rollout status of the deployment:

kubectl rollout status deployment/tomcat-petclinic

Update CPU and Memory limits for each pod:

kubectl set resources deployment tomcat-petclinic --limits=cpu=200m,memory=300Mi

You will see that this triggers another rollout:

$ kubectl rollout status deployment/tomcat-petclinic

Waiting for rollout to finish: 0 of 1 updated replicas are available...
deployment "tomcat-petclinic" successfully rolled out

Instead, you can pause a deployment, modify resources, then resume the deployment:

$ kubectl rollout pause deployment/tomcat-petclinic
deployment "tomcat-petclinic" paused

$ kubectl set resources deployment tomcat-petclinic --limits=cpu=200m,memory=200Mi
deployment "tomcat-petclinic" resource requirements updated

$ kubectl rollout resume deployment/tomcat-petclinic
deployment "tomcat-petclinic" resumed

❗ Make sure you set valid values for the CPU and memory resources. Values that are too low can cause your pods to be killed and restarted automatically. Monitor what is happening with kubectl get pods and kubectl get events.
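
You can also set requests alongside limits in the same command; the values below are illustrative, not recommendations:

# Give each container an explicit request and limit for CPU and memory.
kubectl set resources deployment tomcat-petclinic --requests=cpu=100m,memory=128Mi --limits=cpu=200m,memory=300Mi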

Create an autoscaler

Auto-scale the deployment when CPU usage reaches 50% of the CPU resources specified above (50% of 200m, i.e. 100m).

kubectl autoscale deployment tomcat-petclinic --max=3 --cpu-percent=50

Get the status of the autoscaler. Right now you will see only one replica.

kubectl get hpa
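
Instead of re-running the command, you can leave it watching for changes (a standard kubectl flag):

# Stream HPA status updates until you press Ctrl+C.
kubectl get hpa --watch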

Generate Load

reference

Open a different command prompt. Type:

$ kubectl run -i --tty load-generator --image=busybox /bin/sh
Hit enter for command prompt
$ while true; do wget -q -O- http://ADDRESS_FOR_PETCLINIC; done

The above command starts generating load on the URL ADDRESS_FOR_PETCLINIC.
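
For example, using the external IP and port of the tomcat-petclinic service from earlier (your IP will differ):

# Inside the busybox shell of the load-generator pod:
while true; do wget -q -O- http://35.194.224.53:8080; done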

Start monitoring the AutoScaler, the Deployment, and the Pods.

hpa: before

$ kubectl get hpa
NAME               REFERENCE                     TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
tomcat-petclinic   Deployment/tomcat-petclinic   0% / 50%   1         3         1          2m

hpa: 2 replicas created after 7 minutes.

$ kubectl get hpa
NAME               REFERENCE                     TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
tomcat-petclinic   Deployment/tomcat-petclinic   66% / 50%   1         3         2          7m

hpa: 3 replicas created after 13 minutes.

$ kubectl get hpa
NAME               REFERENCE                     TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
tomcat-petclinic   Deployment/tomcat-petclinic   36% / 50%   1         3         3          13m

deployments: before

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
load-generator     1         1         1            1           1m
tomcat-petclinic   1         1         1            1           53m

deployments: after

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
load-generator     1         1         1            1           4m
tomcat-petclinic   2         2         2            2           56m

pods: before

$ kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
load-generator-3044827360-xvxz5     1/1       Running   0          3m
tomcat-petclinic-1800253800-55kdt   1/1       Running   0          21s

pods: after

$ kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
load-generator-3044827360-xvxz5     1/1       Running   0          3m
tomcat-petclinic-1800253800-55kdt   1/1       Running   0          21s
tomcat-petclinic-1800253800-p8wg3   1/1       Running   0          27m

You have successfully autoscaled the application!
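
To wind the experiment down (a sketch, not part of the original walkthrough): stop the wget loop with Ctrl+C, exit the busybox shell, delete the load generator, and watch the autoscaler scale the deployment back down after a few minutes.

# Remove the load generator and observe the HPA and deployment shrinking again.
kubectl delete deployment load-generator
kubectl get hpa
kubectl get deployments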

Troubleshooting, Notes, Gotchas

The relevant APIs must be enabled on the Google Cloud project for some commands to run.

reference

If you try to execute a command such as gcloud compute zones list without the API enabled, you may be prompted to enable it (example output below).

API [compute.googleapis.com] not enabled on project [50875613725].
Would you like to enable and retry?  (Y/n)?  Y
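
You can also enable the required APIs up front instead of waiting for the prompt; with current gcloud releases, a sketch would be:

# Enable the Compute Engine and Kubernetes Engine APIs on the active project.
gcloud services enable compute.googleapis.com container.googleapis.com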

Billing must be enabled for several commands to run.

reference

Otherwise you might receive an error like:

Enabling service compute.googleapis.com on project 50875613725...
ERROR: (gcloud.compute.zones.list) FAILED_PRECONDITION: Operation does not satisfy the following requirements: billing-enabled {Billing must be enabled for activation of service '' in project 'gcloud-testing-vish' to proceed., https://console.developers.google.com/project/gcloud-testing-vish/settings}

Minikube Setup

minikube start:

  • Downloads a VirtualBox VM ISO for Minikube
  • Starts a VM
  • Runs a local Kubernetes cluster using this VM.
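
A few related minikube lifecycle commands you will likely also use (quick sketch):

minikube status    # check whether the VM and cluster components are running
minikube stop      # stop the VM without deleting the cluster
minikube delete    # throw away the VM and cluster entirely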

Initial Setup

kubectl cluster-info: Check that kubectl is properly configured.

Kubernetes master is running at https://192.168.99.100:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080: Runs a Dockerized hello-world service with 2 replicas.
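
As a follow-up (mirroring the service-access tutorial referenced at the end, so treat it as a sketch), expose the hello-world deployment so it is reachable from outside the Minikube VM:

# Expose the deployment as a NodePort service and print its URL.
kubectl expose deployment hello-world --type=NodePort --name=example-service
minikube service example-service --url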

Getting Information

  • kubectl get: List all get commands.
  • kubectl describe: List all describe commands.
  • kubectl get services: Get the list of services running in the cluster.
  • kubectl get deployments: Get a summary of the current and desired replica state of deployments.
  • kubectl describe deployments [DEPLOYMENT_NAME]: Get details of all deployments, or optionally only the deployment named DEPLOYMENT_NAME.
  • kubectl describe hpa: Describe the HorizontalPodAutoscaler objects.

Scaling

  • kubectl scale --replicas=2 deployments/hello-minikube: Scale a deployment named hello-minikube to 2 replicas.
  • kubectl autoscale deployment foo --min=2 --max=5 --cpu-percent=80: Create a Horizontal Pod Autoscaler that automatically scales the deployment between 2 and 5 replicas when CPU utilization exceeds 80% (see the sketch below).
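
A quick sketch of checking the results of the two commands above (hello-minikube and foo are the example deployment names from the bullets):

# Confirm the new replica count, then inspect the autoscaler object.
kubectl get deployment hello-minikube
kubectl get hpa foo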

Heapster (and other Minikube Addons)

  • minikube addons enable heapster: Enable Heapster within Minikube (it is disabled by default).
  • minikube addons open dashboard: Open the Kubernetes Dashboard (opens in a browser window).
  • minikube addons open heapster: Open the Heapster Dashboard (opens Grafana in a browser window).

About kubeconfig and Sharing kubeconfigs

When you create a Google Cloud Container cluster with gcloud container clusters create, it also generates a kubeconfig entry. See the output below:

Creating cluster petclinic...done.
Created [https://container.googleapis.com/v1/projects/gcloud-testing-vish/zones/asia-east1-a/clusters/petclinic].
kubeconfig entry generated for petclinic.
NAME       LOCATION      MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
petclinic  asia-east1-a  1.7.8-gke.0     35.185.169.133  n1-standard-1  1.7.8-gke.0   3          RUNNING

The kubeconfig contains, among other things, the information kubectl needs to access the cluster.

Kubeconfig Location

  1. Windows: %USERPROFILE%\.kube\config (a plain text file)
  2. Linux: $HOME/.kube/config

Kubeconfig Load Options

  1. By default, the $HOME/.kube/config file is loaded.
  2. The config can also be placed in the current working directory (wherever the kubectl command is run from).
  3. Lastly, the kubeconfig location can be specified via the KUBECONFIG environment variable, e.g. export KUBECONFIG=/path/to/.kube/config (see the sketch below; the --kubeconfig flag can also be used per command).
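
For example (the paths below are illustrative placeholders, not real files in this walkthrough):

# Use a specific kubeconfig for a single command...
kubectl --kubeconfig=/path/to/other/config get nodes

# ...or for the rest of the shell session.
export KUBECONFIG=/path/to/other/config
kubectl cluster-info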

Migrating KubeConfig

Copy the $HOME/.kube/config file to the machine that needs access to the cluster.
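
For example, one way to copy it over SSH (host and user are placeholders; make sure the ~/.kube directory exists on the target first):

scp $HOME/.kube/config user@remote-host:~/.kube/config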

References

  1. More about KubeConfig - https://kubernetes-v1-4.github.io/docs/user-guide/kubeconfig-file/
  2. Sharing Cluster access - https://kubernetes-v1-4.github.io/docs/user-guide/sharing-clusters/

References

  1. https://kubernetes.io/docs/getting-started-guides/minikube/
  2. https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
  3. https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/
  4. https://kubernetes.io/docs/concepts/workloads/pods/pod/
Jargon

  • kubectl: Command-line tool used to interact with Kubernetes.
  • minikube: A flavor of Kubernetes that lets you play around with Kubernetes on your local machine. It does not require a cloud (e.g. Google Cloud) account.
  • gcloud (Google Cloud SDK): Lets you interact with Google Cloud (similar to the AWS CLI). Can additionally be used to manage Kubernetes clusters on Google Cloud.
  • Kubernetes Object: All of K8S is an object-oriented model. Objects are the units in K8S that you interact with.
  • pod: A container, or group of containers, that represents a single copy of an application. Pods run within nodes and are ephemeral: when they die, they are not resurrected.
  • rollout: The process of rolling out changes to resources such as deployments; managed with the kubectl rollout commands.
  • replication controller: Controls replication. Ensures that N replicas of a pod are always available.
  • replica: An instance of a pod managed by the ReplicationController.
  • replicaset: The next-generation replication controller; ensures that a specified number of pod replicas are running at any time.
  • service: Exposes a set of pods behind a stable network endpoint, e.g. to the outside world.
  • node: An individual worker machine. May be a VM or a physical machine. Formerly called a minion.
  • deployment: Short for "deployment controller". A Kubernetes Object that lets you specify the desired state of a pod. Kubernetes recommends that you do not manage pods directly; instead use a controller such as a deployment.
  • Heapster: Tool for cluster resource monitoring. Provides a UI driven by Grafana.
  • kompose: A tool that helps convert Docker Compose stacks for use in Kubernetes.