k8s Metrics and Horizontal Pod Scaling

Create a Kubernetes Metrics Server

  1. To clone the GitHub repository of metrics-server, run the following commands:
git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server/
  2. To install Metrics Server from the root of the Metrics Server directory, run the following command:
kubectl create -f deploy/1.8+/
  3. To confirm that Metrics Server is running, run the following command:
kubectl get pods -n kube-system

The output should look similar to the following:

$ kubectl get pods -n kube-system | grep metrics-server
metrics-server-85cc795fbf-79d72   1/1     Running   0          22s
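
As an extra check, once the Metrics Server pod has been running for a minute or so you can query the resource metrics API directly with kubectl top; the node name and values below are only illustrative:

$ kubectl top nodes
NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
docker-for-desktop   250m         12%    1024Mi          26%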

Error handling with Metrics Server

  • error: metrics not available yet:
    • Check the pod logs
    kubectl logs -f <metrics-server-pod-name> -n kube-system
    
    • If you see the following error:
      unable to fetch metrics from Kubelet docker-for-desktop (docker-for-desktop): Get https://docker-for-desktop:10250/stats/summary/: x509: certificate signed by unknown authority
      
    • Navigate to metrics-server/deploy/1.8+/ in the metrics-server repo.
    • Open the file metrics-server-deployment.yaml and replace its contents with the following:
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: metrics-server
        namespace: kube-system
      ---
      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        name: metrics-server
        namespace: kube-system
        labels:
          k8s-app: metrics-server
      spec:
        selector:
          matchLabels:
            k8s-app: metrics-server
        template:
          metadata:
            name: metrics-server
            labels:
              k8s-app: metrics-server
          spec:
            serviceAccountName: metrics-server
            volumes:
            # mount in tmp so we can safely use from-scratch images and/or read-only containers
            - name: tmp-dir
              emptyDir: {}
            containers:
            - name: metrics-server
              image: k8s.gcr.io/metrics-server-amd64:v0.3.1
              imagePullPolicy: Always
              command:                  # Override the command so metrics-server skips TLS verification when scraping the kubelet
              - /metrics-server
              - --kubelet-insecure-tls
              volumeMounts:
              - name: tmp-dir
                mountPath: /tmp
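
After saving the file, re-apply the manifest so the Deployment picks up the new flag, then confirm the pod restarts cleanly (the path assumes you are still in the root of the metrics-server repo):

kubectl apply -f deploy/1.8+/metrics-server-deployment.yaml
kubectl get pods -n kube-system | grep metrics-server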
      

Test Horizontal Pod Scaling

For details, see:

  1. Horizontal Pod Autoscaler
  2. Horizontal Pod Autoscaler Walkthrough

Create a php-apache deployment and a service

  1. To create a php-apache deployment, run the following command:
kubectl create deployment php-apache --image=k8s.gcr.io/hpa-example
  2. To set the CPU requests, run the following command:
kubectl patch deployment php-apache -p='{"spec":{"template":{"spec":{"containers":[{"name":"hpa-example","resources":{"requests":{"cpu":"200m"}}}]}}}}'

Important: If you don't set the cpu request correctly, the CPU utilization metric for the pod won't be defined and the HPA can't scale. You can verify the request as shown below.
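
To read the request back (a quick sanity check, not part of the original steps), you can use a JSONPath query; the container name hpa-example comes from the image used above:

$ kubectl get deployment php-apache -o jsonpath='{.spec.template.spec.containers[0].resources.requests.cpu}'
200m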

  3. To expose the deployment as a service, run the following command:
kubectl create service clusterip php-apache --tcp=80
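If you want to confirm that the service exists and listens on port 80, kubectl get service prints something along these lines (the cluster IP and age are illustrative):

$ kubectl get service php-apache
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
php-apache   ClusterIP   10.98.123.45   <none>        80/TCP    10s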
  4. To create an HPA, run the following command:
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
  5. To confirm that the HPA was created, run the following command:
kubectl get hpa
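Right after creation the HPA may show <unknown> as its target until the first metrics scrape completes; output similar to the following (values illustrative) means it was created:

$ kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/50%   1         10        1          20s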
  6. To create a pod to connect to the deployment that you created earlier, run the following command:
kubectl run --generator=run-pod/v1 -i --tty load-generator --image=busybox /bin/sh
  7. To generate load against the php-apache service, run the following loop from the load-generator pod's shell:
while true; do wget -q -O- http://php-apache; done
  8. To see how the HPA scales the deployment's pods based on CPU utilization metrics, run the following command (preferably from another terminal window):
kubectl get hpa -w
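While the load loop runs, the watched output should show the CPU target climb above 50% and the replica count rise to match; the exact numbers below are only illustrative:

NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%     1         10        1          2m
php-apache   Deployment/php-apache   305%/50%   1         10        1          3m
php-apache   Deployment/php-apache   305%/50%   1         10        7          3m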
The Metrics Server is now up and running, and you can use it to get resource-based metrics.
  9. To clean up the resources used for testing the HPA, run the following commands:
kubectl delete hpa,service,deployment php-apache
kubectl delete pod load-generator