Debugging ODL K8s cluster nodes notReady
> kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default sdnc-opendaylight-0 0/1 Pending 0 37m <none> <none> <none> <none>
kube-system calico-kube-controllers-7b67cb9dd4-vnfm8 0/1 Pending 0 44m <none> <none> <none> <none>
kube-system calico-node-9kwq4 1/1 Running 0 40m 10.0.0.50 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 <none> <none>
kube-system calico-node-jvpgh 1/1 Running 0 40m 10.0.0.59 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 <none> <none>
kube-system calico-node-wp484 1/1 Running 0 44m 10.0.0.243 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
kube-system coredns-57995474d5-6qbjh 1/1 Running 0 44m 10.100.45.130 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
kube-system coredns-57995474d5-xtcxb 1/1 Running 0 44m 10.100.45.129 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
kube-system csi-cinder-controllerplugin-0 5/5 Running 0 43m 10.100.45.128 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
kube-system dashboard-metrics-scraper-7674b9d54f-dnqsl 0/1 Pending 0 44m <none> <none> <none> <none>
kube-system k8s-keystone-auth-8cjmz 1/1 Running 0 43m 10.0.0.243 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
kube-system kube-dns-autoscaler-7967dcdbd7-m5ct6 0/1 Pending 0 44m <none> <none> <none> <none>
kube-system kubernetes-dashboard-bfc6ccfdf-t4jl2 0/1 Pending 0 44m <none> <none> <none> <none>
kube-system openstack-cloud-controller-manager-vh77v 0/1 ImagePullBackOff 0 44m 10.0.0.243 sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 <none> <none>
> kubectl get events --sort-by='.metadata.creationTimestamp'
LAST SEEN TYPE REASON OBJECT MESSAGE
41m Normal Starting node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 Starting kube-proxy.
41m Normal RegisteredNode node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 event: Registered Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 in Controller
41m Normal NodeReady node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 status is now: NodeReady
38m Normal NodeAllocatableEnforced node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Updated Node Allocatable limit across pods
38m Normal NodeHasSufficientPID node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 status is now: NodeHasSufficientPID
38m Normal NodeHasNoDiskPressure node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 status is now: NodeHasNoDiskPressure
38m Normal NodeHasSufficientMemory node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 status is now: NodeHasSufficientMemory
38m Normal Starting node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Starting kubelet.
38m Normal Starting node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Starting kubelet.
38m Normal RegisteredNode node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 event: Registered Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 in Controller
38m Normal NodeAllocatableEnforced node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Updated Node Allocatable limit across pods
38m Normal NodeHasSufficientPID node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 status is now: NodeHasSufficientPID
38m Normal NodeHasNoDiskPressure node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 status is now: NodeHasNoDiskPressure
38m Normal NodeHasSufficientMemory node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 status is now: NodeHasSufficientMemory
38m Normal Starting node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Starting kube-proxy.
38m Normal Starting node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Starting kube-proxy.
37m Normal RegisteredNode node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 event: Registered Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 in Controller
37m Normal NodeReady node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 status is now: NodeReady
37m Normal NodeReady node/sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Node sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 status is now: NodeReady
34m Warning FailedScheduling pod/sdnc-opendaylight-0 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
34m Normal SuccessfulCreate statefulset/sdnc-opendaylight create Pod sdnc-opendaylight-0 in StatefulSet sdnc-opendaylight successful
12m Warning FailedScheduling pod/sdnc-opendaylight-0 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
4m38s Warning FailedGetScale horizontalpodautoscaler/sdnc-opendaylight deployments/scale.apps "sdnc-opendaylight" not found
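
The FailedScheduling messages name two taints: node-role.kubernetes.io/master on the control-plane node and node.cloudprovider.kubernetes.io/uninitialized on both workers. The uninitialized taint is normally cleared by the OpenStack cloud controller manager once it has initialized each node, so the Pending pods and the openstack-cloud-controller-manager ImagePullBackOff above are very likely the same underlying problem. As a quick confirmation (a minimal sketch, not part of the captured output), the taints still present on each node can be listed directly:

> kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
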
> kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0 Ready master 39m v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-master-0,kubernetes.io/os=linux,magnum.openstack.org/nodegroup=default-master,magnum.openstack.org/role=master,node-role.kubernetes.io/master=
sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0 Ready <none> 35m v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-0,kubernetes.io/os=linux,magnum.openstack.org/nodegroup=default-worker,magnum.openstack.org/role=worker
sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1 Ready <none> 35m v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=sandbox-packaging-k8s-odl-depl-ycser6ofq7ic-node-1,kubernetes.io/os=linux,magnum.openstack.org/nodegroup=default-worker,magnum.openstack.org/role=worker
> kubectl describe pods
Name:           sdnc-opendaylight-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=sdnc
                app.kubernetes.io/name=opendaylight
                controller-revision-hash=sdnc-opendaylight-5c9c85f8c4
                statefulset.kubernetes.io/pod-name=sdnc-opendaylight-0
Annotations:    kubernetes.io/psp: magnum.privileged
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/sdnc-opendaylight
Init Containers:
  updatevolperm:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      chown
      8181
      /data
    Environment:  <none>
    Mounts:
      /data from odlvol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-st6bd (ro)
Containers:
  opendaylight:
    Image:      nexus3.opendaylight.org:10001/opendaylight/opendaylight:14.2.0
    Port:       8181/TCP
    Host Port:  0/TCP
    Command:
      bash
      -c
      bash -x /scripts/startodl.sh
    Readiness:  tcp-socket :8181 delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      FEATURES:         odl-restconf,odl-restconf-all
      JAVA_HOME:        /opt/openjdk-11/
      JAVA_OPTS:        -Xms512m -Xmx2048m
      EXTRA_JAVA_OPTS:  -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:ParallelGCThreads=3 -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication
    Mounts:
      /data from odlvol (rw)
      /scripts from scripts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-st6bd (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      sdnc-opendaylight
    Optional:  false
  odlvol:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-st6bd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  30m   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  30m   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.

> kubectl get deployment -n kube-system
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers     0/1     1            0           19m
coredns                     2/2     2            2           19m
dashboard-metrics-scraper   0/1     1            0           19m
kube-dns-autoscaler         0/1     1            0           19m
kubernetes-dashboard        0/1     1            0           19m

> kubectl get pods -n kube-system
NAME                                         READY   STATUS             RESTARTS   AGE
calico-kube-controllers-7b67cb9dd4-p2zmv     0/1     Pending            0          19m
calico-node-2nsq4                            1/1     Running            0          17m
calico-node-72zd5                            1/1     Running            0          19m
calico-node-89xx6                            1/1     Running            0          17m
coredns-57995474d5-7bsnr                     1/1     Running            0          19m
coredns-57995474d5-p9px9                     1/1     Running            0          19m
csi-cinder-controllerplugin-0                5/5     Running            0          19m
dashboard-metrics-scraper-7674b9d54f-5mk8h   0/1     Pending            0          19m
k8s-keystone-auth-jvtrg                      1/1     Running            0          19m
kube-dns-autoscaler-7967dcdbd7-xwvhv         0/1     Pending            0          19m
kubernetes-dashboard-bfc6ccfdf-ndq7n         0/1     Pending            0          19m
openstack-cloud-controller-manager-mq6t6     0/1     ImagePullBackOff   0          19m

> kubectl describe pod openstack-cloud-controller-manager-mq6t6 -n kube-system
Name:         openstack-cloud-controller-manager-mq6t6
Namespace:    kube-system
Priority:     0
Node:         sandbox-packaging-k8s-odl-depl-ccxetovfc52m-master-0/10.0.0.74
Start Time:   Tue, 21 Dec 2021 22:47:58 +0000
Labels:       controller-revision-hash=f68cffd6
              k8s-app=openstack-cloud-controller-manager
              pod-template-generation=1
Annotations:  kubernetes.io/psp: magnum.privileged
Status:       Pending
IP:           10.0.0.74
IPs:
  IP:           10.0.0.74
Controlled By:  DaemonSet/openstack-cloud-controller-manager
Containers:
  openstack-cloud-controller-manager:
    Container ID:  
    Image:         registry.public.yul1.vexxhost.net/magnum/openstack-cloud-controller-manager:v.1.18.0
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/openstack-cloud-controller-manager
      --v=2
      --cloud-config=/etc/kubernetes/cloud-config-occm
      --cloud-provider=openstack
      --cluster-name=3f49ab9f-6ef8-49e2-9df4-c69b88008fb5
      --use-service-account-credentials=true
      --bind-address=127.0.0.1
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/kubernetes from cloudconfig (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-smfq8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  cloudconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes
    HostPathType:  
  kube-api-access-smfq8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              node-role.kubernetes.io/master=
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  20m                 default-scheduler  Successfully assigned kube-system/openstack-cloud-controller-manager-mq6t6 to sandbox-packaging-k8s-odl-depl-ccxetovfc52m-master-0
  Normal   Pulling    19m (x4 over 20m)   kubelet            Pulling image "registry.public.yul1.vexxhost.net/magnum/openstack-cloud-controller-manager:v.1.18.0"
  Warning  Failed     19m (x4 over 20m)   kubelet            Failed to pull image "registry.public.yul1.vexxhost.net/magnum/openstack-cloud-controller-manager:v.1.18.0": rpc error: code = Unknown desc = Error response from daemon: manifest for registry.public.yul1.vexxhost.net/magnum/openstack-cloud-controller-manager:v.1.18.0 not found: manifest unknown: manifest unknown
  Warning  Failed     19m (x4 over 20m)   kubelet            Error: ErrImagePull
  Warning  Failed     18m (x6 over 20m)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    19s (x86 over 20m)  kubelet            Back-off pulling image "registry.public.yul1.vexxhost.net/magnum/openstack-cloud-controller-manager:v.1.18.0"
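
The pull fails because the tag v.1.18.0 does not exist in the registry (note the stray dot after the "v", which the next steps correct). Before changing anything, the image reference currently configured in the DaemonSet can be printed to confirm where the bad tag comes from (a sketch, not part of the captured output):

> kubectl -n kube-system get ds openstack-cloud-controller-manager -o jsonpath='{.spec.template.spec.containers[0].image}'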

// Verify the image pull manually with docker from one of the cluster nodes

> docker pull registry.public.yul1.vexxhost.net/magnum/openstack-cloud-controller-manager:v.1.18.0
Error response from daemon: manifest for registry.public.yul1.vexxhost.net/magnum/openstack-cloud-controller-manager:v.1.18.0 not found: manifest unknown: manifest unknown

// Edit the incorrect image tag v.1.18.0 to v1.18.0 (drop the stray dot after "v")

> kubectl edit ds openstack-cloud-controller-manager -n kube-system
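
As an alternative to editing the DaemonSet interactively, the tag could also be patched in one command with kubectl set image (a sketch, assuming the corrected tag is v1.18.0):

> kubectl -n kube-system set image daemonset/openstack-cloud-controller-manager openstack-cloud-controller-manager=registry.public.yul1.vexxhost.net/magnum/openstack-cloud-controller-manager:v1.18.0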

// After fixing the image tag, the DaemonSet recreates the pod:

[jenkins@snd-centos7-helm-4c-4g-276 ~]> kubectl get pods -n kube-system
NAME                                         READY   STATUS              RESTARTS   AGE
calico-kube-controllers-7b67cb9dd4-p2zmv     0/1     Pending             0          22m
calico-node-2nsq4                            1/1     Running             0          20m
calico-node-72zd5                            1/1     Running             0          22m
calico-node-89xx6                            1/1     Running             0          20m
coredns-57995474d5-7bsnr                     1/1     Running             0          22m
coredns-57995474d5-p9px9                     1/1     Running             0          22m
csi-cinder-controllerplugin-0                5/5     Running             0          22m
dashboard-metrics-scraper-7674b9d54f-5mk8h   0/1     Pending             0          22m
k8s-keystone-auth-jvtrg                      1/1     Running             0          22m
kube-dns-autoscaler-7967dcdbd7-xwvhv         0/1     Pending             0          22m
kubernetes-dashboard-bfc6ccfdf-ndq7n         0/1     Pending             0          22m
openstack-cloud-controller-manager-pcrhm     0/1     ContainerCreating   0          2s

[jenkins@snd-centos7-helm-4c-4g-276 ~]> kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7b67cb9dd4-p2zmv     1/1     Running   0          24m
calico-node-2nsq4                            1/1     Running   0          21m
calico-node-72zd5                            1/1     Running   0          24m
calico-node-89xx6                            1/1     Running   0          21m
coredns-57995474d5-7bsnr                     1/1     Running   0          24m
coredns-57995474d5-p9px9                     1/1     Running   0          24m
csi-cinder-controllerplugin-0                5/5     Running   0          24m
csi-cinder-nodeplugin-kftm2                  2/2     Running   0          97s
csi-cinder-nodeplugin-lw67x                  2/2     Running   0          99s
dashboard-metrics-scraper-7674b9d54f-5mk8h   1/1     Running   0          24m
k8s-keystone-auth-jvtrg                      1/1     Running   0          24m
kube-dns-autoscaler-7967dcdbd7-xwvhv         1/1     Running   0          24m
kubernetes-dashboard-bfc6ccfdf-ndq7n         1/1     Running   0          24m
npd-2mxqn                                    1/1     Running   0          99s
npd-668qs                                    1/1     Running   0          97s
openstack-cloud-controller-manager-pcrhm     1/1     Running   0          107s

> kubectl get pods -A
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
default       sdnc-opendaylight-0                          1/1     Running   0          18m
kube-system   calico-kube-controllers-7b67cb9dd4-p2zmv     1/1     Running   0          24m
kube-system   calico-node-2nsq4                            1/1     Running   0          22m
kube-system   calico-node-72zd5                            1/1     Running   0          24m
kube-system   calico-node-89xx6                            1/1     Running   0          22m
kube-system   coredns-57995474d5-7bsnr                     1/1     Running   0          24m
kube-system   coredns-57995474d5-p9px9                     1/1     Running   0          24m
kube-system   csi-cinder-controllerplugin-0                5/5     Running   0          24m
kube-system   csi-cinder-nodeplugin-kftm2                  2/2     Running   0          118s
kube-system   csi-cinder-nodeplugin-lw67x                  2/2     Running   0          2m
kube-system   dashboard-metrics-scraper-7674b9d54f-5mk8h   1/1     Running   0          24m
kube-system   k8s-keystone-auth-jvtrg                      1/1     Running   0          24m
kube-system   kube-dns-autoscaler-7967dcdbd7-xwvhv         1/1     Running   0          24m
kube-system   kubernetes-dashboard-bfc6ccfdf-ndq7n         1/1     Running   0          24m
kube-system   npd-2mxqn                                    1/1     Running   0          2m
kube-system   npd-668qs                                    1/1     Running   0          118s
kube-system   openstack-cloud-controller-manager-pcrhm     1/1     Running   0          2m8s
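
With the cloud controller manager running, it should have initialized the worker nodes and removed the node.cloudprovider.kubernetes.io/uninitialized taint, which is what finally allows sdnc-opendaylight-0 to be scheduled. If in doubt, the remaining taints can be re-checked (a sketch):

> kubectl describe nodes | grep Taints: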


// Start a port-forward to the SDNC (OpenDaylight) pod:

> kubectl get svc
NAME                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes          ClusterIP   10.254.0.1   <none>        443/TCP    34m
sdnc-opendaylight   ClusterIP   None         <none>        8181/TCP   27m

[jenkins@snd-centos7-helm-4c-4g-276 ~]> export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=opendaylight,app.kubernetes.io/instance=sdnc" -o jsonpath="{.items[0].metadata.name}")
[jenkins@snd-centos7-helm-4c-4g-276 ~]> export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
[jenkins@snd-centos7-helm-4c-4g-276 ~]> kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
Forwarding from 127.0.0.1:8080 -> 8181
Forwarding from [::1]:8080 -> 8181
Handling connection for 8080


// Test the setup: connect to the SDNC and list the RESTCONF modules

> curl -p -u admin:admin --request GET 'http://127.0.0.1:8080/restconf/modules' | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4259  100  4259    0     0   166k      0 --:--:-- --:--:-- --:--:--  173k
{
  "modules": {
    "module": [
      {
        "name": "prefix-shard-configuration",
        "revision": "2017-01-10",
        "namespace": "urn:opendaylight:params:xml:ns:yang:controller:md:sal:clustering:prefix-shard-configuration"
      },
      {
        "name": "odl-general-entity",
        "revision": "2015-09-30",
        "namespace": "urn:opendaylight:params:xml:ns:yang:mdsal:core:general-entity"
      },
      {
        "name": "aaa-password-service-config",
        "revision": "2017-06-19",
        "namespace": "urn:opendaylight:aaa:password:service:config"
      },
      {
        "name": "cluster-admin",
        "revision": "2015-10-13",
        "namespace": "urn:opendaylight:params:xml:ns:yang:controller:md:sal:cluster:admin"
      },
      {
        "name": "ietf-restconf",
        "revision": "2017-01-26",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-restconf"
      },
      {
        "name": "ietf-netconf",
        "revision": "2011-06-01",
        "namespace": "urn:ietf:params:xml:ns:netconf:base:1.0",
        "feature": [
          "confirmed-commit",
          "startup",
          "rollback-on-error",
          "validate",
          "url",
          "writable-running",
          "xpath",
          "candidate"
        ]
      },
      {
        "name": "distributed-datastore-provider",
        "revision": "2014-06-12",
        "namespace": "urn:opendaylight:params:xml:ns:yang:controller:config:distributed-datastore-provider"
      },
      {
        "name": "nc-notifications",
        "revision": "2008-07-14",
        "namespace": "urn:ietf:params:xml:ns:netmod:notification"
      },
      {
        "name": "aaa-cert",
        "revision": "2015-11-26",
        "namespace": "urn:opendaylight:yang:aaa:cert"
      },
      {
        "name": "sal-remote-augment",
        "revision": "2014-07-08",
        "namespace": "urn:sal:restconf:event:subscription"
      },
      {
        "name": "ietf-netconf-monitoring",
        "revision": "2010-10-04",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring"
      },
      {
        "name": "ietf-netconf-nmda",
        "revision": "2019-01-07",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-netconf-nmda",
        "feature": [
          "origin",
          "with-defaults"
        ]
      },
      {
        "name": "ietf-yang-metadata",
        "revision": "2016-08-05",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-yang-metadata"
      },
      {
        "name": "ietf-netconf-with-defaults",
        "revision": "2011-06-01",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-netconf-with-defaults"
      },
      {
        "name": "ietf-inet-types",
        "revision": "2013-07-15",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-inet-types"
      },
      {
        "name": "ietf-yang-library",
        "revision": "2019-01-04",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-yang-library"
      },
      {
        "name": "yang-ext",
        "revision": "2013-07-09",
        "namespace": "urn:opendaylight:yang:extension:yang-ext"
      },
      {
        "name": "ietf-yang-types",
        "revision": "2013-07-15",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-yang-types"
      },
      {
        "name": "aaa-encrypt-service-config",
        "revision": "2016-09-15",
        "namespace": "config:aaa:authn:encrypt:service:config"
      },
      {
        "name": "subscribe-to-notification",
        "revision": "2016-10-28",
        "namespace": "subscribe:to:notification"
      },
      {
        "name": "ietf-netconf-notifications",
        "revision": "2012-02-06",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-netconf-notifications"
      },
      {
        "name": "aaa-app-config",
        "revision": "2017-06-19",
        "namespace": "urn:opendaylight:aaa:app:config"
      },
      {
        "name": "odl-controller-cds-types",
        "revision": "2019-10-24",
        "namespace": "urn:opendaylight:params:xml:ns:yang:controller:cds:types"
      },
      {
        "name": "aaa",
        "revision": "2016-12-14",
        "namespace": "urn:opendaylight:params:xml:ns:yang:aaa"
      },
      {
        "name": "notifications",
        "revision": "2008-07-14",
        "namespace": "urn:ietf:params:xml:ns:netconf:notification:1.0"
      },
      {
        "name": "aaa-cert-mdsal",
        "revision": "2016-03-21",
        "namespace": "urn:opendaylight:yang:aaa:cert:mdsal"
      },
      {
        "name": "aaa-cert-rpc",
        "revision": "2015-12-15",
        "namespace": "urn:opendaylight:yang:aaa:cert:rpc"
      },
      {
        "name": "ietf-netconf-monitoring-extension",
        "revision": "2013-12-10",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring-extension"
      },
      {
        "name": "ietf-origin",
        "revision": "2018-02-14",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-origin"
      },
      {
        "name": "ietf-restconf",
        "revision": "2013-10-19",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-restconf"
      },
      {
        "name": "instance-identifier-patch-module",
        "revision": "2015-11-21",
        "namespace": "instance:identifier:patch:module"
      },
      {
        "name": "sal-remote",
        "revision": "2014-01-14",
        "namespace": "urn:opendaylight:params:xml:ns:yang:controller:md:sal:remote"
      },
      {
        "name": "ietf-datastores",
        "revision": "2018-02-14",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-datastores"
      },
      {
        "name": "entity-owners",
        "revision": "2015-08-04",
        "namespace": "urn:opendaylight:params:xml:ns:yang:controller:md:sal:clustering:entity-owners"
      },
      {
        "name": "ietf-restconf-monitoring",
        "revision": "2017-01-26",
        "namespace": "urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring"
      }
    ]
  }
}

# Port-forward to the ODL pod (as above)
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
# Get all worker nodes
kubectl get node --selector='!node-role.kubernetes.io/master'
# Get pods in all namespaces
kubectl get pods --all-namespaces
# Get all events sorted by creation timestamp
kubectl get events --sort-by=.metadata.creationTimestamp
# List nodes whose Ready condition is True
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
  && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

# kubectl get nodes -o jsonpath='{range .items[*]} {.metadata.name} {" "} {.status.conditions[?(@.type=="Ready")].status} {" "} {.spec.taints} {"\n"} {end}'
#  sandbox-packaging-k8s-odl-depl-laucrtjp4kdp-master-0   True   [{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"}]
#   sandbox-packaging-k8s-odl-depl-laucrtjp4kdp-node-0   True   [{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"}]
#   sandbox-packaging-k8s-odl-depl-laucrtjp4kdp-node-1   True   [{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"}]

# Wait for the pod to become ready before testing
#kubectl wait --namespace default --for=condition=ready pod/${POD_NAME}
