@askb
Created December 19, 2021 04:36
Debugging ODL K8s cluster nodes notReady
kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default sdnc-opendaylight-0 0/1 Pending 0 37m <none> <none> <none> <none>
kube-system calico-kube-controllers-7b67cb9dd4-vnfm8 0/1 Pending 0 44m <none> <none> <none> <none>
kube-system calico-node-9kwq4 1/1 Running 0 40m 10.0.0.50 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 <none> <none>
kube-system calico-node-jvpgh 1/1 Running 0 40m 10.0.0.59 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 <none> <none>
kube-system calico-node-wp484 1/1 Running 0 44m 10.0.0.243 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
kube-system coredns-57995474d5-6qbjh 1/1 Running 0 44m 10.100.45.130 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
kube-system coredns-57995474d5-xtcxb 1/1 Running 0 44m 10.100.45.129 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
kube-system csi-cinder-controllerplugin-0 5/5 Running 0 43m 10.100.45.128 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
kube-system dashboard-metrics-scraper-7674b9d54f-dnqsl 0/1 Pending 0 44m <none> <none> <none> <none>
kube-system k8s-keystone-auth-8cjmz 1/1 Running 0 43m 10.0.0.243 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
kube-system kube-dns-autoscaler-7967dcdbd7-m5ct6 0/1 Pending 0 44m <none> <none> <none> <none>
kube-system kubernetes-dashboard-bfc6ccfdf-t4jl2 0/1 Pending 0 44m <none> <none> <none> <none>
kube-system openstack-cloud-controller-manager-vh77v 0/1 ImagePullBackOff 0 44m 10.0.0.243 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
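The openstack-cloud-controller-manager pod stuck in ImagePullBackOff is the most suspicious entry: until it runs, the cloud provider never initializes the worker nodes. A quick way to see the actual pull error (a sketch; the pod name is taken from the listing above):
# Show recent events for the failing cloud controller manager pod
kubectl -n kube-system describe pod openstack-cloud-controller-manager-vh77v | grep -A 10 Events
# Or query the related events directly
kubectl -n kube-system get events --field-selector involvedObject.name=openstack-cloud-controller-manager-vh77v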
kubectl get events --sort-by='.metadata.creationTimestamp'
LAST SEEN TYPE REASON OBJECT MESSAGE
41m Normal Starting node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 Starting kube-proxy.
41m Normal RegisteredNode node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 event: Registered Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 in Controller
41m Normal NodeReady node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 status is now: NodeReady
38m Normal NodeAllocatableEnforced node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Updated Node Allocatable limit across pods
38m Normal NodeHasSufficientPID node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 status is now: NodeHasSufficientPID
38m Normal NodeHasNoDiskPressure node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 status is now: NodeHasNoDiskPressure
38m Normal NodeHasSufficientMemory node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 status is now: NodeHasSufficientMemory
38m Normal Starting node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Starting kubelet.
38m Normal Starting node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Starting kubelet.
38m Normal RegisteredNode node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 event: Registered Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 in Controller
38m Normal NodeAllocatableEnforced node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Updated Node Allocatable limit across pods
38m Normal NodeHasSufficientPID node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 status is now: NodeHasSufficientPID
38m Normal NodeHasNoDiskPressure node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 status is now: NodeHasNoDiskPressure
38m Normal NodeHasSufficientMemory node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 status is now: NodeHasSufficientMemory
38m Normal Starting node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Starting kube-proxy.
38m Normal Starting node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Starting kube-proxy.
37m Normal RegisteredNode node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 event: Registered Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 in Controller
37m Normal NodeReady node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 status is now: NodeReady
37m Normal NodeReady node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 status is now: NodeReady
34m Warning FailedScheduling pod/sdnc-opendaylight-0 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
34m Normal SuccessfulCreate statefulset/sdnc-opendaylight create Pod sdnc-opendaylight-0 in StatefulSet sdnc-opendaylight successful
12m Warning FailedScheduling pod/sdnc-opendaylight-0 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
4m38s Warning FailedGetScale horizontalpodautoscaler/sdnc-opendaylight deployments/scale.apps "sdnc-opendaylight" not found
kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 Ready master 39m v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0,kubernetes.io/os=linux,magnum.openstack.org/nodegroup=default-master,magnum.openstack.org/role=master,node-role.kubernetes.io/master=
sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Ready <none> 35m v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0,kubernetes.io/os=linux,magnum.openstack.org/nodegroup=default-worker,magnum.openstack.org/role=worker
sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Ready <none> 35m v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1,kubernetes.io/os=linux,magnum.openstack.org/nodegroup=default-worker,magnum.openstack.org/role=worker
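All three nodes report Ready, so the FailedScheduling events above point at taints rather than node health. Listing the taints per node (a sketch using standard kubectl output) shows which ones block the pod:
# List each node with the keys of its taints
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'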
kubectl describe pods
Name: sdnc-opendaylight-0
Namespace: default
Priority: 0
Node: <none>
Labels: app.kubernetes.io/instance=sdnc
app.kubernetes.io/name=opendaylight
controller-revision-hash=sdnc-opendaylight-5c9c85f8c4
statefulset.kubernetes.io/pod-name=sdnc-opendaylight-0
Annotations: kubernetes.io/psp: magnum.privileged
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/sdnc-opendaylight
Init Containers:
updatevolperm:
Image: busybox
Port: <none>
Host Port: <none>
Command:
chown
8181
/data
Environment: <none>
Mounts:
/data from odlvol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-st6bd (ro)
Containers:
opendaylight:
Image: nexus3.opendaylight.org:10001/opendaylight/opendaylight:14.2.0
Port: 8181/TCP
Host Port: 0/TCP
Command:
bash
-c
bash -x /scripts/startodl.sh
Readiness: tcp-socket :8181 delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
FEATURES: odl-restconf,odl-restconf-all
JAVA_HOME: /opt/openjdk-11/
JAVA_OPTS: -Xms512m -Xmx2048m
EXTRA_JAVA_OPTS: -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:ParallelGCThreads=3 -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication
Mounts:
/data from odlvol (rw)
/scripts from scripts (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-st6bd (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
scripts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: sdnc-opendaylight
Optional: false
odlvol:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-st6bd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 30m default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
Warning FailedScheduling 30m default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
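Root cause: the worker nodes still carry the node.cloudprovider.kubernetes.io/uninitialized=true taint, which the cloud controller manager normally removes after initializing each node. Because openstack-cloud-controller-manager is stuck in ImagePullBackOff, the taint never clears and sdnc-opendaylight-0 stays Pending. The proper fix is to restore access to the controller manager image so the pod can start. As a temporary unblock only (a sketch; this skips cloud-provider initialization, so Cinder volumes and load balancers may not work until the controller is healthy), the taint can be cleared by hand:
# Temporary workaround: remove the uninitialized taint from all nodes
kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized-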
askb commented Dec 23, 2021

kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
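$POD_NAME and $CONTAINER_PORT above are assumed to be set beforehand; one way to derive them from the labels shown in the describe output (a sketch, assuming the default namespace):
POD_NAME=$(kubectl get pods -n default -l "app.kubernetes.io/name=opendaylight,app.kubernetes.io/instance=sdnc" -o jsonpath='{.items[0].metadata.name}')
CONTAINER_PORT=$(kubectl get pod -n default "$POD_NAME" -o jsonpath='{.spec.containers[0].ports[0].containerPort}')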
# Get all worker nodes
kubectl get node --selector='!node-role.kubernetes.io/master'
# Get all namespaces
kubectl get pods --all-namespaces
# Get all events by timestamps
kubectl get events --sort-by=.metadata.creationTimestamp
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

# kubectl get nodes -o jsonpath='{range .items[*]} {.metadata.name} {" "} {.status.conditions[?(@.type=="Ready")].status} {" "} {.spec.taints} {"\n"} {end}'
#  sandbox-packaging-k8s-odl-depl-laucrtjp4kdp-master-0   True   [{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"}]
#   sandbox-packaging-k8s-odl-depl-laucrtjp4kdp-node-0   True   [{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"}]
#   sandbox-packaging-k8s-odl-depl-laucrtjp4kdp-node-1   True   [{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"}]
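All three nodes in this later run still show the uninitialized taint. Once the cloud controller manager pod is healthy it should clear the taint on its own; a quick check before retrying the ODL pod (sketch):
# Confirm the cloud controller manager is running and the taint has been removed
kubectl -n kube-system get pods -o wide | grep openstack-cloud-controller-manager
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints[*].key}{"\n"}{end}'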

# Wait for the pod to become ready before testing (-n takes the namespace, not the pod name)
#kubectl wait -n default --for=condition=ready pod/${POD_NAME}
