@fgimenez
Created May 18, 2021 17:01
$ kubectl describe node 10.240.128.17
Name:               10.240.128.17
Roles:              <none>
Labels:             arch=amd64
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=mx2.4x32
                    beta.kubernetes.io/os=linux
                    dedicated=ghproxy
                    failure-domain.beta.kubernetes.io/region=us-south
                    failure-domain.beta.kubernetes.io/zone=us-south-3
                    ibm-cloud.kubernetes.io/ha-worker=true
                    ibm-cloud.kubernetes.io/iaas-provider=g2
                    ibm-cloud.kubernetes.io/instance-id=0737_1fc3797b-846c-4014-8abc-966f3c46f02e
                    ibm-cloud.kubernetes.io/internal-ip=10.240.128.17
                    ibm-cloud.kubernetes.io/machine-type=mx2.4x32
                    ibm-cloud.kubernetes.io/os=UBUNTU_18_64
                    ibm-cloud.kubernetes.io/region=us-south
                    ibm-cloud.kubernetes.io/sgx-enabled=false
                    ibm-cloud.kubernetes.io/subnet-id=0737-cba49896-e154-4090-88c1-fdab6fbb79bc
                    ibm-cloud.kubernetes.io/worker-id=kube-bubc2gcd002mlnbc5fpg-kubevirtstg-default-0000073b
                    ibm-cloud.kubernetes.io/worker-pool-id=bubc2gcd002mlnbc5fpg-f421c23
                    ibm-cloud.kubernetes.io/worker-pool-name=default
                    ibm-cloud.kubernetes.io/worker-version=1.20.5_1535
                    ibm-cloud.kubernetes.io/zone=us-south-3
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=10.240.128.17
                    kubernetes.io/os=linux
                    node.kubernetes.io/instance-type=mx2.4x32
                    topology.kubernetes.io/region=us-south
                    topology.kubernetes.io/zone=us-south-3
                    type=vm
                    zone=ci
Annotations:        csi.volume.kubernetes.io/nodeid: {"vpc.block.csi.ibm.io":"kube-bubc2gcd002mlnbc5fpg-kubevirtstg-default-0000073b"}
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 10.240.128.17/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 172.17.121.1
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 17 Apr 2021 10:37:58 +0200
Taints:             node.kubernetes.io/disk-pressure:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  10.240.128.17
  AcquireTime:     <unset>
  RenewTime:       Tue, 18 May 2021 19:00:44 +0200
Conditions:
  Type                Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----                ------  -----------------                ------------------               ------                      -------
  NetworkUnavailable  False   Tue, 18 May 2021 14:28:48 +0200  Tue, 18 May 2021 14:28:48 +0200  CalicoIsUp                  Calico is running on this node
  MemoryPressure      False   Tue, 18 May 2021 19:00:21 +0200  Sun, 02 May 2021 11:04:54 +0200  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure        True    Tue, 18 May 2021 19:00:21 +0200  Tue, 18 May 2021 18:51:17 +0200  KubeletHasDiskPressure      kubelet has disk pressure
  PIDPressure         False   Tue, 18 May 2021 19:00:21 +0200  Sun, 02 May 2021 11:04:54 +0200  KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready               True    Tue, 18 May 2021 19:00:21 +0200  Sun, 02 May 2021 11:04:54 +0200  KubeletReady                kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.240.128.17
  ExternalIP:  10.240.128.17
  Hostname:    10.240.128.17
Capacity:
  cpu:                4
  ephemeral-storage:  102821812Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32942180Ki
  pods:               110
Allocatable:
  cpu:                3910m
  ephemeral-storage:  94234134186
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             29144164Ki
  pods:               110
System Info:
  Machine ID:                 3b8c92e6d7624d6a9923a5d4cded9814
  System UUID:                3B8C92E6-D762-4D6A-9923-A5D4CDED9814
  Boot ID:                    50ed1b28-ebfb-441a-8896-768fba592ce9
  Kernel Version:             4.15.0-140-generic
  OS Image:                   Ubuntu 18.04.5 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.4
  Kubelet Version:            v1.20.5+IKS
  Kube-Proxy Version:         v1.20.5+IKS
ProviderID:  ibm://2be0cd841378412882ec2fb4a99951e2///bubc2gcd002mlnbc5fpg/kube-bubc2gcd002mlnbc5fpg-kubevirtstg-default-0000073b
Non-terminated Pods:  (29 in total)
  Namespace      Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------      ----                                                  ------------  ----------  ---------------  -------------  ---
  ci-search      search-0                                              100m (2%)     0 (0%)      3Gi (10%)        8Gi (28%)      7h44m
  ci-search      search-1                                              100m (2%)     0 (0%)      3Gi (10%)        8Gi (28%)      90m
  ibm-observe    logdna-agent-xg8tb                                    20m (0%)      0 (0%)      500Mi (1%)       500Mi (1%)     8h
  ibm-observe    sysdig-agent-lk5r4                                    600m (15%)    2 (51%)     512Mi (1%)       1536Mi (5%)    8h
  ibm-system     catalog-operator-7dc6898b7c-pf777                     10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         7h38m
  kube-system    calico-node-5j292                                     250m (6%)     0 (0%)      80Mi (0%)        0 (0%)         31d
  kube-system    calico-typha-6d758555cf-pml5h                         250m (6%)     0 (0%)      80Mi (0%)        0 (0%)         31d
  kube-system    coredns-86b8b69649-5sskt                              100m (2%)     0 (0%)      70Mi (0%)        400Mi (1%)     31d
  kube-system    ibm-master-proxy-static-10.240.128.17                 25m (0%)      300m (7%)   32M (0%)         512M (1%)      31d
  kube-system    ibm-vpc-block-csi-controller-0                        75m (1%)      750m (19%)  150Mi (0%)       750Mi (2%)     7h42m
  kube-system    ibm-vpc-block-csi-node-6tnq9                          35m (0%)      350m (8%)   80Mi (0%)        400Mi (1%)     31d
  kube-system    ibmcloud-iks-debug-daemonset-sphg2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8h
  kube-system    metrics-server-67b85b6c9-vllsl                        121m (3%)     216m (5%)   186Mi (0%)       436Mi (1%)     19d
  kube-system    public-crbubc2gcd002mlnbc5fpg-alb2-7f7ccbccdd-fbk9x   10m (0%)      0 (0%)      100Mi (0%)       0 (0%)         7h38m
  kube-system    public-ingress-migrator-8449b665fc-zm55z              10m (0%)      0 (0%)      100Mi (0%)       0 (0%)         31d
  kube-system    vpn-6f9fd8f888-mxn2f                                  5m (0%)       0 (0%)      5Mi (0%)         0 (0%)         19d
  kubevirt-prow  crier-6f74f7ccc-8q482                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7h44m
  kubevirt-prow  docker-mirror-proxy-5f747948-zfsbj                    0 (0%)        0 (0%)      3Gi (10%)        3Gi (10%)      7h42m
  kubevirt-prow  gcsweb-795cc9bc69-nxfbv                               100m (2%)     100m (2%)   128Mi (0%)       128Mi (0%)     7h41m
  kubevirt-prow  ghproxy-699684c75d-w2dh4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d7h
  kubevirt-prow  greenhouse-5485c77c94-5tbzt                           0 (0%)        0 (0%)      3Gi (10%)        3Gi (10%)      99m
  kubevirt-prow  pushgateway-proxy-59bcd95455-dpmzk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         31d
  kubevirt-prow  rehearse-6d88d998bb-zn9vj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7h40m
  kubevirt-prow  release-blocker-7c574f8fb4-mr5sl                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         25d
  kubevirt-prow  statusreconciler-7b48cd87f6-qfzt6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         32h
  monitoring     node-exporter-prometheus-node-exporter-m26s4          100m (2%)     200m (5%)   60Mi (0%)        100Mi (0%)     31d
  monitoring     prometheus-prometheus-stack-kube-prom-prometheus-0    600m (15%)    800m (20%)  2098Mi (7%)      3122Mi (10%)   31d
  monitoring     prometheus-stack-kube-prom-operator-7dff676db4-gbfp5  100m (2%)     200m (5%)   100Mi (0%)       200Mi (0%)     31d
  sippy          sippy-0                                               0 (0%)        0 (0%)      1000Mi (3%)      2000Mi (7%)    7h40m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests          Limits
  --------           --------          ------
  cpu                2611m (66%)       4916m (125%)
  memory             18071058Ki (62%)  33370400Ki (114%)
  ephemeral-storage  0 (0%)            0 (0%)
Events:
  Type     Reason                 Age                   From                    Message
  ----     ------                 ----                  ----                    -------
  Warning  FreeDiskSpaceFailed    36m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 9382762905 bytes, but freed 5948492312 bytes
  Warning  FreeDiskSpaceFailed    31m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 7134988697 bytes, but freed 0 bytes
  Warning  ImageGCFailed          31m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 7134988697 bytes, but freed 0 bytes
  Warning  ImageGCFailed          26m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 8110049689 bytes, but freed 317164 bytes
  Warning  FreeDiskSpaceFailed    26m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 8110049689 bytes, but freed 317164 bytes
  Warning  ImageGCFailed          21m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 14458665369 bytes, but freed 0 bytes
  Warning  FreeDiskSpaceFailed    21m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 14458665369 bytes, but freed 0 bytes
  Warning  FreeDiskSpaceFailed    16m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 8232741273 bytes, but freed 0 bytes
  Warning  ImageGCFailed          16m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 8232741273 bytes, but freed 0 bytes
  Normal   NodeHasNoDiskPressure  15m (x2 over 33m)     kubelet, 10.240.128.17  Node 10.240.128.17 status is now: NodeHasNoDiskPressure
  Warning  ImageGCFailed          10m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 13992118681 bytes, but freed 17248513 bytes
  Warning  FreeDiskSpaceFailed    10m                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 13992118681 bytes, but freed 17248513 bytes
  Normal   NodeHasDiskPressure    9m32s (x3 over 41m)   kubelet, 10.240.128.17  Node 10.240.128.17 status is now: NodeHasDiskPressure
  Warning  FreeDiskSpaceFailed    5m50s                 kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 18610280857 bytes, but freed 71605281 bytes
  Warning  ImageGCFailed          5m50s                 kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 18610280857 bytes, but freed 71605281 bytes
  Warning  EvictionThresholdMet   4m5s (x6 over 7m35s)  kubelet, 10.240.128.17  Attempting to reclaim ephemeral-storage
  Warning  FreeDiskSpaceFailed    39s                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 14296660377 bytes, but freed 1017229877 bytes
  Warning  ImageGCFailed          39s                   kubelet, 10.240.128.17  failed to garbage collect required amount of images. Wanted to free 14296660377 bytes, but freed 1017229877 bytes
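
The node is tainted node.kubernetes.io/disk-pressure:NoSchedule, and the repeating ImageGCFailed / FreeDiskSpaceFailed events show the kubelet's image garbage collector failing to reclaim the ephemeral storage it wants. A minimal sketch of follow-up commands for a node in this state (assumptions: SSH or debug access to the worker; /var/lib/containerd and /var/lib/kubelet are the stock paths and may differ on IKS workers; crictl rmi --prune requires a recent crictl):

$ kubectl get events --all-namespaces --field-selector involvedObject.name=10.240.128.17   # full event stream for this node
$ df -h /var/lib/containerd /var/lib/kubelet   # on the node: where ephemeral storage is being consumed
$ crictl images                                # on the node: images the kubelet GC is trying to prune
$ crictl rmi --prune                           # on the node: manually remove unused images if GC keeps failing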