@rajula96reddy
Last active June 12, 2018 12:16

The total host memory available for allocation is 3906 MiB and 2 CPUs are available, so I tried two different allocations across the master and worker nodes.

Note: Although I have set the memory to the maximum extent possible, free -m does not show all of it (see the reserved-memory note after the outputs below). Cluster configuration: Controller Manager, Scheduler, and API Server on the master node; Kubelet and Kube-proxy on the worker node. The container runtime is Docker and the pod network is Flannel.
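To confirm this component placement on a live cluster, a minimal check (assuming a kubeadm-style setup where the control-plane components run as pods in the kube-system namespace) is:

# List the kube-system pods together with the node each one is scheduled on
kubectl get pods -n kube-system -o wide
# Show the nodes with their roles and addresses for cross-reference
kubectl get nodes -o wide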

Master Node - 2604 MiB & Worker Node - 1302 MiB (2604 + 1302 = 3906 MiB, a 2:1 split of the full budget)

Refer to figures master1.png and worker1.png for configuration details.
free -m
Master

root@k8s-master:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           2344         399         325           0        1619        1915
Swap:             0           0           0

Worker

root@k8s-worker:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           1064         195         478           0         391         853
Swap:             0           0           0
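
The gap between the allocated memory and the total that free reports is roughly constant: 2604 - 2344 = 260 MiB on the master and 1302 - 1064 = 238 MiB on the worker. This is consistent with memory reserved by the kernel at boot rather than a failed allocation.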

Analysis
Try 1

root@k8s-master:~# time kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-worker   Ready     <none>    8h        v1.9.8

real    2m38.878s
user    0m2.920s
sys     0m0.802s

root@k8s-master:~# time kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

real    0m21.307s
user    0m0.808s
sys     0m0.371s

Try 2

root@k8s-master:~# time kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-worker   Ready     <none>    8h        v1.9.8

real    0m30.438s
user    0m1.488s
sys     0m0.467s
root@k8s-master:~# time kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

real    0m34.570s
user    0m2.274s
sys     0m0.478s

Try 3

root@k8s-master:~# time kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-worker   Ready     <none>    8h        v1.9.8

real    1m22.643s
user    0m1.356s
sys     0m0.526s
root@k8s-master:~# time kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

real    0m29.020s
user    0m1.631s
sys     0m0.406s
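
Since the wall-clock times vary a lot between tries, averaging several runs gives a steadier picture. A minimal bash sketch for this (not part of the original runs; the run count of 5 is an arbitrary choice):

# Repeat the kubectl timing a few times; SECONDS is a bash builtin
# counter of elapsed seconds, so no external time binary is needed.
for i in 1 2 3 4 5; do
    start=$SECONDS
    kubectl get nodes > /dev/null
    echo "run $i: $((SECONDS - start))s"
done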

Master Node - 1952 MiB & Worker Node - 1952 MiB (an even split totalling 3904 MiB)

Refer to figures master2.png and worker2.png for configuration details.

free -m
Master

root@k8s-master:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           1705         120        1308           0         276        1562
Swap:             0           0           0

Worker

root@k8s-worker:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           1705         147        1290           0         268        1535
Swap:             0           0           0

Analysis
Try 1

root@k8s-master:~# time kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-worker   Ready     <none>    8h        v1.9.8

real    1m4.394s
user    0m1.432s
sys     0m0.606s
root@k8s-master:~# time kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

real    0m15.081s
user    0m1.217s
sys     0m0.329s

Try 2

root@k8s-master:~# time kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-worker   Ready     <none>    8h        v1.9.8

real    0m21.384s
user    0m1.487s
sys     0m0.315s
root@k8s-master:~# time kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

real    0m20.229s
user    0m1.602s
sys     0m0.335s

Try 3

root@k8s-master:~# time kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-worker   Ready     <none>    9h        v1.9.8

real    0m23.189s
user    0m1.537s
sys     0m0.314s
root@k8s-master:~# time kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

real    0m21.695s
user    0m1.579s
sys     0m0.340s
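
Averaging the real times of the three tries in each configuration (computed from the runs above):

Command                        2604/1302 split    1952/1952 split
kubectl get nodes              ~90.7 s            ~36.3 s
kubectl get componentstatus    ~28.3 s            ~19.0 s

On average, the even split is faster for both commands, with the largest single delay (2m38.878s for kubectl get nodes) occurring under the 2604/1302 split.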
@mfriesenegger

Thank you for this info. There is a mentor meeting today where I have raised the question of requesting more memory for your and Asish's L1CC guests. I will let you know ASAP!
