@abhioncbr
Last active April 23, 2021 03:51

In the last three posts, we covered a handful of concepts about knowing, conceptualizing, and understanding pod eviction. As I said in my first post, it's a complicated topic, so if you are reading this post first, I suggest reading all the posts listed below before continuing with this one.

Although I highly recommend going through the earlier posts, for the sake of time, or as a refresher, we are going to revisit the concepts quickly; feel free to skip that section if you have already read the posts or have a fair understanding of eviction, QoS classes, eviction policies, pod priority, and the pod disruption budget. After the refresher, we will cover some tips and tricks for avoiding pod eviction and for node management. This is the last post on the pod eviction topic. Okay, let's get started.

Revisiting the pod eviction monster

Executing a pod on a K8s cluster consumes some of the node's computing resources; some of these, like CPU, are shareable, while others, like memory or storage, are not. Since the compute resources of any machine are finite by nature, improper or unplanned usage can lead to resource scarcity.

What should we do in case of node resource scarcity? There are two approaches to the problem. The first is straightforward: do nothing, which may lead to failure of the node's Kubelet process (and an unstable cluster). The second is to take pre-emptive steps so that resources never become scarce. The Kubelet follows the second approach by periodically checking for resource pressure (memory or storage pressure); when it detects pressure, it tries to free some of the scarce resource, which eventually results in pod eviction.
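The pressure checks above are driven by eviction thresholds configured on the Kubelet. As a sketch, a `KubeletConfiguration` fragment might look like the following; the threshold values here are illustrative assumptions, not recommendations:

```yaml
# Illustrative kubelet configuration fragment (kubelet.config.k8s.io/v1beta1).
# All threshold values below are example assumptions.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"    # evict immediately when free memory drops below 200Mi
  nodefs.available: "10%"      # evict when the node filesystem falls below 10% free
evictionSoft:
  memory.available: "500Mi"    # softer threshold, tolerated for a grace period
evictionSoftGracePeriod:
  memory.available: "1m30s"    # soft pressure must persist this long before eviction
```

Hard thresholds trigger eviction immediately; soft thresholds tolerate the pressure for the configured grace period first.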

How does the Kubelet decide which pod to evict? It decides based on which compute resource is under pressure, the priority of the pod, and the pod's QoS class. The QoS class can be one of three types: Best Effort, Burstable, or Guaranteed. While selecting a candidate pod for eviction, the Kubelet also considers the pod disruption budget. Re-quoting from the last post:

The Kubelet ranks pods for eviction first by whether or not their usage of the starved resource exceeds requests, then by priority, and then by the consumption of the starved compute resource relative to the pods scheduling requests.
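Since the QoS class feeds into this ranking, it is worth recalling how it is derived. The hypothetical pod below (the name and image are placeholders for illustration) gets the Guaranteed class because every container sets requests equal to limits for both CPU and memory:

```yaml
# Hypothetical pod illustrating the Guaranteed QoS class.
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo            # placeholder name
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "500m"       # requests == limits for all resources -> Guaranteed
          memory: "256Mi"
```

If requests were lower than limits, the pod would be Burstable; with no requests or limits at all, it would be Best Effort, which is the first class considered for eviction.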

Tips and tricks to avoid pod eviction

As with all distributed systems, in a Kubernetes cluster, resource-utilization planning and division of workloads according to business needs and priorities are of the utmost necessity. Determining the correct size of any distributed-systems cluster is a tedious task and is very specific to a company's business needs and workload nature; however, proper division of resources is standard practice across companies. We are going to cover some of the Kubernetes constructs related to the logical separation of computing resources.

Resource Quota

ResourceQuota is an important Kubernetes object for constraining resource utilization on a per-namespace basis. It is a significant construct, especially when several different teams or business units share the same Kubernetes cluster. Apart from constraining resource utilization at the namespace level, it also helps in limiting the number of objects of a particular type in a namespace. A ResourceQuota object is scoped to a namespace.

  • Compute Resource Quota: constrains the total compute resources that can be requested in a namespace, e.g. requests.cpu, requests.memory, limits.cpu, and limits.memory
  • Storage Resource Quota: constrains the total storage resources in a namespace, e.g. requests.storage and persistentvolumeclaims, optionally per storage class
  • Object Count Quota: from Kubernetes 1.9, we can limit all standard namespaced resource types. Some common examples are count/services, count/configmaps, count/replicationcontrollers, and count/deployments.apps
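Putting the three quota types together, a single ResourceQuota manifest for a namespace might look like the sketch below; the namespace name and all the numbers are assumptions for illustration:

```yaml
# Illustrative ResourceQuota combining compute, storage, and object-count limits.
# The namespace and values are placeholder assumptions.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"              # compute quota: total CPU requested
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.storage: 100Gi        # storage quota: total PVC storage requested
    persistentvolumeclaims: "10"
    count/deployments.apps: "20"   # object-count quota
    count/configmaps: "30"
```

Once this is applied, any pod created in the namespace whose quota-tracked resources are not specified will be rejected by the admission controller.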

How can ResourceQuota help in pod eviction?

  • A minimal object-count ResourceQuota on the default namespace can nudge teams to onboard their K8s objects into designated namespaces that carry a proper ResourceQuota.
  • With a compute ResourceQuota in place, each pod has to define resource requests & limits.
  • A pod's procurement of resources is limited by the namespace's quota, so over-provisioning of resources by pods results in evictions confined to that namespace.

Limit Ranges

Through the ResourceQuota construct, we are able to limit computing resources at the namespace level, but we also need a construct to manage resources within a namespace; otherwise, one pod or container could monopolize the whole namespace. The solution to this problem is LimitRange. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace.
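As a sketch, a LimitRange that caps individual containers and supplies defaults might look like this; the namespace, name, and all values are placeholder assumptions:

```yaml
# Illustrative LimitRange constraining per-container allocations in a namespace.
# The name, namespace, and values are placeholder assumptions.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: team-a
spec:
  limits:
    - type: Container
      defaultRequest:      # requests applied when a container defines none
        cpu: "250m"
        memory: 128Mi
      default:             # limits applied when a container defines none
        cpu: "500m"
        memory: 256Mi
      min:                 # smallest allocation a container may request
        cpu: "100m"
        memory: 64Mi
      max:                 # largest allocation a container may claim
        cpu: "2"
        memory: 1Gi
```

A handy side effect: because the defaults fill in missing requests and limits, containers in this namespace are never admitted without them, keeping pods out of the Best Effort QoS class that is evicted first.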
