@karampok
Last active November 24, 2021 10:11

Question

If hugepages are enabled on the node and you run `oc adm top node`, the output appears to count hugepages as used memory, i.e. it shows (memory used + hugepages). Is that expected?

Answer

The output of `oc adm top node` does not include hugepages in the calculation. Here is a small analysis.

The test environment is a single-node v4.8 OCP cluster with the Performance Addon Operator (PAO), 32Gi of memory, and 4 hugepages of size 1Gi.

oc adm top node snonode.testcluster.vtelco-5g.lab -v6
... loader.go:372] Config loaded from file:  /home/kka/.kube/testclusterconfig
... top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
... round_trippers.go:454] GET https://api.testcluster.vtelco-5g.lab:6443/api?timeout=32s 200 OK in 467 milliseconds
... round_trippers.go:454] GET https://api.testcluster.vtelco-5g.lab:6443/apis?timeout=32s 200 OK in 110 milliseconds
... round_trippers.go:454] GET https://api.testcluster.vtelco-5g.lab:6443/apis/metrics.k8s.io/v1beta1/nodes/snonode.testcluster.vtelco-5g.lab 200 OK in 133 milliseconds
... round_trippers.go:454] GET https://api.testcluster.vtelco-5g.lab:6443/api/v1/nodes/snonode.testcluster.vtelco-5g.lab 200 OK in 115 milliseconds
NAME                          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
snonode.testcluster.vtelco-5g.lab   2463m        20%    13265Mi         49%

As the verbose output shows, under the hood the command makes a few API calls:

# Current usage is taken from here
oc get --raw /apis/metrics.k8s.io/v1beta1/nodes/snonode.testcluster.vtelco-5g.lab | jq .usage
{
  "cpu": "2060m",
  "memory": "13678392Ki"
}

# Total allocatable memory (the denominator) is taken from here
oc get --raw /api/v1/nodes/snonode.testcluster.vtelco-5g.lab | jq .status.allocatable.memory
"27601104Ki"

# Result is 13678392 / 27601104 ~= 0.49
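As a quick sanity check (plain arithmetic, not part of the gist), the MEMORY% column is simply usage divided by allocatable, using the two values returned by the API calls above:

```python
# How `oc adm top node` derives MEMORY%: usage / allocatable, both in Ki.
usage_ki = 13678392        # .usage.memory from metrics.k8s.io
allocatable_ki = 27601104  # .status.allocatable.memory

percent = 100 * usage_ki / allocatable_ki
print(f"{percent:.1f}%")   # roughly 49-50%, matching the MEMORY% column
```

Note that the CPU and memory figures in the table come from a slightly different metrics sample than the raw call shown above, which is why the absolute numbers differ while the percentage matches.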

The total allocatable value is calculated as follows:

oc get --raw /api/v1/nodes/snonode.testcluster.vtelco-5g.lab | jq .status.capacity
{
  "cpu": "16",
  "ephemeral-storage": "125293548Ki",
  "hugepages-1Gi": "4Gi",  <--- this
  "hugepages-2Mi": "0",
  "management.workload.openshift.io/cores": "16k",
  "memory": "32921808Ki", <--- this
  "pods": "250"
}

 oc get kubeletconfig performance-performance-sno -o json | jq .spec.kubeletConfig.kubeReserved.memory
"500Mi"
oc get kubeletconfig performance-performance-sno -o json | jq .spec.kubeletConfig.systemReserved.memory
"500Mi"
oc get kubeletconfig performance-performance-sno -o json | jq .spec.kubeletConfig.evictionHard
{
  "memory.available": "100Mi"
}

32921808Ki - 4Gi - 500Mi - 500Mi - 100Mi = 27601104Ki
# Capacity - hugepages - kubeReserved - systemReserved - evictionHard = total allocatable memory
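The subtraction can be verified by converting everything to Ki (1Gi = 1048576Ki, 1Mi = 1024Ki):

```python
# Verify the allocatable-memory formula, all values converted to Ki.
capacity        = 32921808      # .status.capacity.memory (Ki)
hugepages_1gi   = 4 * 1048576   # 4Gi reserved as hugepages
kube_reserved   = 500 * 1024    # kubeReserved.memory = 500Mi
system_reserved = 500 * 1024    # systemReserved.memory = 500Mi
eviction_hard   = 100 * 1024    # evictionHard "memory.available" = 100Mi

allocatable = (capacity - hugepages_1gi - kube_reserved
               - system_reserved - eviction_hard)
print(allocatable)  # 27601104, matching .status.allocatable.memory
```

So the hugepages are already subtracted from the denominator: they reduce allocatable memory rather than being counted as usage.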

https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/

As for hugepages, the only per-node information you get is whether they are allocated or not.

oc describe node  snonode.testcluster.vtelco-5g.lab

...
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                                Requests           Limits
  --------                                --------           ------
  cpu                                     1 (8%)             1 (8%)
  memory                                  13922650624 (49%)  4758096384 (16%)
  ephemeral-storage                       0 (0%)             0 (0%)
  hugepages-1Gi                           4Gi (100%)         4Gi (100%) <---
  hugepages-2Mi                           0 (0%)             0 (0%)
  management.workload.openshift.io/cores  2477               2477
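As a final sanity check (again plain arithmetic, not from the gist), the percentages that `oc describe node` prints are also computed against allocatable memory, which already excludes the hugepages:

```python
# The Requests/Limits percentages in `oc describe node` use allocatable
# memory (hugepages already excluded) as the denominator.
allocatable_bytes = 27601104 * 1024  # .status.allocatable.memory in bytes
requests_bytes    = 13922650624      # memory Requests from `oc describe node`
limits_bytes      = 4758096384       # memory Limits

print(int(100 * requests_bytes / allocatable_bytes))  # 49, matches "49%"
print(int(100 * limits_bytes / allocatable_bytes))    # 16, matches "16%"
```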