Cilium 1.14.1 CoreDNS On-restart Dysfunction Example Using Kind
[jspaleta@msi ~]$ cat kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
- role: worker
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
- role: worker
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
- role: worker
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
[jspaleta@msi ~]$ kind create cluster --config kind-cluster.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.27.3) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 😊
[jspaleta@msi ~]$ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-5d78c9869d-h89hm                     1/1     Running   0          2m26s
kube-system          coredns-5d78c9869d-mcj9r                     1/1     Running   0          2m26s
kube-system          etcd-kind-control-plane                      1/1     Running   0          2m36s
kube-system          kindnet-7bg2n                                1/1     Running   0          2m26s
kube-system          kindnet-bdzn5                                1/1     Running   0          2m17s
kube-system          kindnet-f57gg                                1/1     Running   0          2m18s
kube-system          kindnet-grf4q                                1/1     Running   0          2m17s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          2m36s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          2m36s
kube-system          kube-proxy-dbg8t                             1/1     Running   0          2m17s
kube-system          kube-proxy-hk5v9                             1/1     Running   0          2m17s
kube-system          kube-proxy-lnvzb                             1/1     Running   0          2m26s
kube-system          kube-proxy-wlqh8                             1/1     Running   0          2m18s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          2m36s
local-path-storage   local-path-provisioner-6bc4bddd6b-q2g8l      1/1     Running   0          2m26s
[jspaleta@msi ~]$ docker restart kind-control-plane kind-worker kind-worker2 kind-worker3
kind-control-plane
kind-worker
kind-worker2
kind-worker3
[jspaleta@msi ~]$ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS      AGE
kube-system          coredns-5d78c9869d-h89hm                     1/1     Running   1 (69s ago)   4m7s
kube-system          coredns-5d78c9869d-mcj9r                     1/1     Running   1 (69s ago)   4m7s
kube-system          etcd-kind-control-plane                      1/1     Running   1 (69s ago)   4m17s
kube-system          kindnet-7bg2n                                1/1     Running   1 (69s ago)   4m7s
kube-system          kindnet-bdzn5                                1/1     Running   1 (67s ago)   3m58s
kube-system          kindnet-f57gg                                1/1     Running   1 (65s ago)   3m59s
kube-system          kindnet-grf4q                                1/1     Running   1 (66s ago)   3m58s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   1 (69s ago)   4m17s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   1 (69s ago)   4m17s
kube-system          kube-proxy-dbg8t                             1/1     Running   1 (67s ago)   3m58s
kube-system          kube-proxy-hk5v9                             1/1     Running   1 (66s ago)   3m58s
kube-system          kube-proxy-lnvzb                             1/1     Running   1 (69s ago)   4m7s
kube-system          kube-proxy-wlqh8                             1/1     Running   1 (65s ago)   3m59s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   1 (69s ago)   4m17s
local-path-storage   local-path-provisioner-6bc4bddd6b-q2g8l      1/1     Running   2 (20s ago)   4m7s
[jspaleta@msi ~]$ echo "CoreDNS pods recover after kind node container restarts using k8s 1.27.3"
CoreDNS pods recover after kind node container restarts using k8s 1.27.3
[jspaleta@msi ~]$ cat kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
- role: worker
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
- role: worker
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
- role: worker
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
networking:
  disableDefaultCNI: true
  kubeProxyMode: none
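Since the only difference between the recovering and non-recovering runs is this `networking:` stanza, it can be worth sanity-checking the config before each `kind create cluster`. A minimal sketch (the `check_cfg` helper name is mine; the file name and match strings come from this transcript):

```shell
# check_cfg: verify a kind config disables the default CNI and kube-proxy.
# A naive grep-based check, assuming the exact key/value spellings shown above.
check_cfg() {
  grep -q 'disableDefaultCNI: true' "$1" && grep -q 'kubeProxyMode: none' "$1"
}

# Example: check_cfg kind-cluster.yaml && kind create cluster --config kind-cluster.yaml
```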
[jspaleta@msi ~]$ kind create cluster --config kind-cluster.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.27.3) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 😊
[jspaleta@msi ~]$ cilium install
🔮 Auto-detected Kubernetes kind: kind
✨ Running "kind" validation checks
✅ Detected kind version "0.20.0"
ℹ️ Using Cilium version 1.14.1
🔮 Auto-detected cluster name: kind-kind
ℹ️ Detecting real Kubernetes API server addr and port on Kind
🔮 Auto-detected kube-proxy has not been installed
ℹ️ Cilium will fully replace all functionalities of kube-proxy
[jspaleta@msi ~]$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 4, Ready: 4/4, Available: 4/4
Containers:            cilium             Running: 4
                       cilium-operator    Running: 1
Cluster Pods:          3/3 managed by Cilium
Helm chart version:    1.14.1
Image versions         cilium             quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72: 4
                       cilium-operator    quay.io/cilium/operator-generic:v1.14.1@sha256:e061de0a930534c7e3f8feda8330976367971238ccafff42659f104effd4b5f7: 1
[jspaleta@msi ~]$ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          cilium-bxzlj                                 1/1     Running   0          3m10s
kube-system          cilium-d5q4w                                 1/1     Running   0          3m10s
kube-system          cilium-nrplx                                 1/1     Running   0          3m10s
kube-system          cilium-operator-58c75d7894-mtdnz             1/1     Running   0          3m11s
kube-system          cilium-zdjlb                                 1/1     Running   0          3m10s
kube-system          coredns-5d78c9869d-tml2j                     1/1     Running   0          5m30s
kube-system          coredns-5d78c9869d-vcvkc                     1/1     Running   0          5m30s
kube-system          etcd-kind-control-plane                      1/1     Running   0          5m40s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          5m40s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          5m40s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          5m40s
local-path-storage   local-path-provisioner-6bc4bddd6b-jnszr      1/1     Running   0          5m30s
[jspaleta@msi ~]$ docker restart kind-control-plane kind-worker kind-worker2 kind-worker3
kind-control-plane
kind-worker
kind-worker2
kind-worker3
[jspaleta@msi ~]$ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS      AGE
kube-system          cilium-bxzlj                                 1/1     Running   1 (62s ago)   5m6s
kube-system          cilium-d5q4w                                 1/1     Running   1 (61s ago)   5m6s
kube-system          cilium-nrplx                                 1/1     Running   1 (63s ago)   5m6s
kube-system          cilium-operator-58c75d7894-mtdnz             1/1     Running   2 (50s ago)   5m7s
kube-system          cilium-zdjlb                                 1/1     Running   1 (50s ago)   5m6s
kube-system          coredns-5d78c9869d-tml2j                     0/1     Unknown   0             7m26s
kube-system          coredns-5d78c9869d-vcvkc                     0/1     Unknown   0             7m26s
kube-system          etcd-kind-control-plane                      1/1     Running   1 (63s ago)   7m36s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   1 (63s ago)   7m36s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   1 (63s ago)   7m36s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   1 (63s ago)   7m36s
local-path-storage   local-path-provisioner-6bc4bddd6b-jnszr      0/1     Unknown   0             7m26s
[jspaleta@msi ~]$ echo "CoreDNS pods do not recover after kind node container restarts using k8s 1.27.3"
CoreDNS pods do not recover after kind node container restarts using k8s 1.27.3
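Eyeballing the READY column across these long listings gets tedious. A small helper that filters `kubectl get pods -A` output down to the pods that did not come back is one way to spot the stuck ones quickly (the `not_ready` name is mine; it assumes the default kubectl column layout shown above):

```shell
# not_ready: read `kubectl get pods -A` output on stdin and print any pod
# whose READY count is short or whose STATUS is not Running.
not_ready() {
  awk 'NR > 1 {
    split($3, r, "/")                      # READY column, e.g. "0/1"
    if (r[1] != r[2] || $4 != "Running")   # short on ready containers, or bad status
      print $1 "/" $2 " (" $3 " " $4 ")"
  }'
}

# Example: kubectl get pods -A | not_ready
```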
[jspaleta@msi ~]$ cat kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31
- role: worker
  image: kindest/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31
- role: worker
  image: kindest/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31
- role: worker
  image: kindest/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31
networking:
  disableDefaultCNI: true
  kubeProxyMode: none
[jspaleta@msi ~]$ kind create cluster --config kind-cluster.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.28.0) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
[jspaleta@msi ~]$ kubectl get nodes
NAME                 STATUS     ROLES           AGE     VERSION
kind-control-plane   NotReady   control-plane   6m15s   v1.28.0
kind-worker          NotReady   <none>          5m54s   v1.28.0
kind-worker2         NotReady   <none>          5m54s   v1.28.0
kind-worker3         NotReady   <none>          5m58s   v1.28.0
[jspaleta@msi ~]$ cilium install --version v1.13.6
🔮 Auto-detected Kubernetes kind: kind
✨ Running "kind" validation checks
✅ Detected kind version "0.20.0"
ℹ️ Using Cilium version 1.13.6
🔮 Auto-detected cluster name: kind-kind
ℹ️ Detecting real Kubernetes API server addr and port on Kind
🔮 Auto-detected kube-proxy has not been installed
ℹ️ Cilium will fully replace all functionalities of kube-proxy
[jspaleta@msi ~]$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 4, Ready: 4/4, Available: 4/4
Containers:            cilium             Running: 4
                       cilium-operator    Running: 1
Cluster Pods:          3/3 managed by Cilium
Helm chart version:    1.13.6
Image versions         cilium             quay.io/cilium/cilium:v1.13.6@sha256:994b8b3b26d8a1ef74b51a163daa1ac02aceb9b16f794f8120f15a12011739dc: 4
                       cilium-operator    quay.io/cilium/operator-generic:v1.13.6@sha256:753c1d0549032da83ec45333feec6f4b283331618a1f7fed2f7e2d36efbd4bc9: 1
[jspaleta@msi ~]$ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          cilium-hj89p                                 1/1     Running   0          6m35s
kube-system          cilium-j66bq                                 1/1     Running   0          6m35s
kube-system          cilium-operator-5ddcc4b8f-mkqsw              1/1     Running   0          6m35s
kube-system          cilium-ppm6p                                 1/1     Running   0          6m35s
kube-system          cilium-sck8p                                 1/1     Running   0          6m35s
kube-system          coredns-5dd5756b68-b6q22                     1/1     Running   0          13m
kube-system          coredns-5dd5756b68-mdjt9                     1/1     Running   0          13m
kube-system          etcd-kind-control-plane                      1/1     Running   0          13m
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          13m
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          13m
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          13m
local-path-storage   local-path-provisioner-6f8956fb48-2wh2n      1/1     Running   0          13m
[jspaleta@msi ~]$ docker restart kind-control-plane kind-worker kind-worker2 kind-worker3
kind-control-plane
kind-worker
kind-worker2
kind-worker3
[jspaleta@msi ~]$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 4, Ready: 4/4, Available: 4/4
Containers:            cilium             Running: 4
                       cilium-operator    Running: 1
Cluster Pods:          3/3 managed by Cilium
Helm chart version:    1.13.6
Image versions         cilium             quay.io/cilium/cilium:v1.13.6@sha256:994b8b3b26d8a1ef74b51a163daa1ac02aceb9b16f794f8120f15a12011739dc: 4
                       cilium-operator    quay.io/cilium/operator-generic:v1.13.6@sha256:753c1d0549032da83ec45333feec6f4b283331618a1f7fed2f7e2d36efbd4bc9: 1
[jspaleta@msi ~]$ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS             RESTARTS        AGE
kube-system          cilium-hj89p                                 1/1     Running            1 (7m56s ago)   15m
kube-system          cilium-j66bq                                 1/1     Running            1 (7m58s ago)   15m
kube-system          cilium-operator-5ddcc4b8f-mkqsw              1/1     Running            1 (7m58s ago)   15m
kube-system          cilium-ppm6p                                 1/1     Running            1 (8m8s ago)    15m
kube-system          cilium-sck8p                                 1/1     Running            1 (7m43s ago)   15m
kube-system          coredns-5dd5756b68-b6q22                     0/1     Running            1 (7m43s ago)   22m
kube-system          coredns-5dd5756b68-mdjt9                     0/1     Running            1 (7m44s ago)   22m
kube-system          etcd-kind-control-plane                      1/1     Running            1 (8m8s ago)    22m
kube-system          kube-apiserver-kind-control-plane            1/1     Running            1 (8m8s ago)    22m
kube-system          kube-controller-manager-kind-control-plane   1/1     Running            1 (8m8s ago)    22m
kube-system          kube-scheduler-kind-control-plane            1/1     Running            1 (8m8s ago)    22m
local-path-storage   local-path-provisioner-6f8956fb48-2wh2n      0/1     CrashLoopBackOff   5 (22s ago)     22m
[jspaleta@msi ~]$ echo "CoreDNS pods do not recover after kind node container restarts using k8s 1.28.0"
CoreDNS pods do not recover after kind node container restarts using k8s 1.28.0
[jspaleta@msi ~]$ kubectl logs coredns-5dd5756b68-b6q22 -n kube-system
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
[jspaleta@msi ~]$ echo "The local-path-provisioner also has a problem but I can kick that and get it unstuck"
The local-path-provisioner also has a problem but I can kick that and get it unstuck
[jspaleta@msi ~]$ kubectl logs local-path-provisioner-6f8956fb48-2wh2n -n local-path-storage
time="2023-09-07T19:53:24Z" level=fatal msg="Error starting daemon: invalid empty flag helper-pod-file and it also does not exist at ConfigMap local-path-storage/local-path-config with err: Get \"https://10.96.0.1:443/api/v1/namespaces/local-path-storage/configmaps/local-path-config\": dial tcp 10.96.0.1:443: i/o timeout"
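Both logs show the same symptom: the pods can no longer reach the API server Service VIP (10.96.0.1:443). "Kicking" a stuck workload means deleting its pods so the Deployment recreates them with fresh networking. A sketch of that, with a dry-run mode for safety (the `kick_pods` helper name is mine; `k8s-app=kube-dns` and `app=local-path-provisioner` are the usual labels for these Deployments, but verify with `--show-labels` on your cluster):

```shell
# kick_pods NAMESPACE SELECTOR: delete pods matching a label selector so
# their Deployment recreates them. Set DRY_RUN=1 to print the command
# instead of running it (useful when kubectl isn't available).
kick_pods() {
  ns=$1 selector=$2
  cmd="kubectl -n $ns delete pod -l $selector"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}

# Examples:
#   kick_pods kube-system k8s-app=kube-dns
#   kick_pods local-path-storage app=local-path-provisioner
```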
[jspaleta@msi ~]$ cat kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31
- role: worker
  image: kindest/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31
- role: worker
  image: kindest/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31
- role: worker
  image: kindest/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31
networking:
  disableDefaultCNI: true
  kubeProxyMode: none
[jspaleta@msi ~]$ kind create cluster --config kind-cluster.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.28.0) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
[jspaleta@msi ~]$ kubectl get nodes
NAME                 STATUS     ROLES           AGE     VERSION
kind-control-plane   NotReady   control-plane   4m53s   v1.28.0
kind-worker          NotReady   <none>          4m29s   v1.28.0
kind-worker2         NotReady   <none>          4m30s   v1.28.0
kind-worker3         NotReady   <none>          4m27s   v1.28.0
[jspaleta@msi ~]$ cilium install
🔮 Auto-detected Kubernetes kind: kind
✨ Running "kind" validation checks
✅ Detected kind version "0.20.0"
ℹ️ Using Cilium version 1.14.1
🔮 Auto-detected cluster name: kind-kind
ℹ️ Detecting real Kubernetes API server addr and port on Kind
🔮 Auto-detected kube-proxy has not been installed
ℹ️ Cilium will fully replace all functionalities of kube-proxy
[jspaleta@msi ~]$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 4, Ready: 4/4, Available: 4/4
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 4
                       cilium-operator    Running: 1
Cluster Pods:          3/3 managed by Cilium
Helm chart version:    1.14.1
Image versions         cilium             quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72: 4
                       cilium-operator    quay.io/cilium/operator-generic:v1.14.1@sha256:e061de0a930534c7e3f8feda8330976367971238ccafff42659f104effd4b5f7: 1
[jspaleta@msi ~]$ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          cilium-cgkp7                                 1/1     Running   0          3m56s
kube-system          cilium-operator-7d684fb764-wt8gk             1/1     Running   0          3m57s
kube-system          cilium-psb4s                                 1/1     Running   0          3m57s
kube-system          cilium-wrngf                                 1/1     Running   0          3m56s
kube-system          cilium-zm6lv                                 1/1     Running   0          3m56s
kube-system          coredns-5dd5756b68-nrhbc                     1/1     Running   0          8m47s
kube-system          coredns-5dd5756b68-q2lsg                     1/1     Running   0          8m47s
kube-system          etcd-kind-control-plane                      1/1     Running   0          9m4s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          9m1s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          9m
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          9m
local-path-storage   local-path-provisioner-6f8956fb48-t4xwd      1/1     Running   0          8m47s
[jspaleta@msi ~]$ docker restart kind-control-plane kind-worker kind-worker2 kind-worker3
kind-control-plane
kind-worker
kind-worker2
kind-worker3
[jspaleta@msi ~]$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 4, Ready: 4/4, Available: 4/4
Containers:            cilium             Running: 4
                       cilium-operator    Running: 1
Cluster Pods:          3/3 managed by Cilium
Helm chart version:    1.14.1
Image versions         cilium             quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72: 4
                       cilium-operator    quay.io/cilium/operator-generic:v1.14.1@sha256:e061de0a930534c7e3f8feda8330976367971238ccafff42659f104effd4b5f7: 1
[jspaleta@msi ~]$ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS      AGE
kube-system          cilium-cgkp7                                 1/1     Running   1 (69s ago)   6m14s
kube-system          cilium-operator-7d684fb764-wt8gk             1/1     Running   2 (70s ago)   6m15s
kube-system          cilium-psb4s                                 1/1     Running   1 (87s ago)   6m15s
kube-system          cilium-wrngf                                 1/1     Running   1 (70s ago)   6m14s
kube-system          cilium-zm6lv                                 1/1     Running   1 (76s ago)   6m14s
kube-system          coredns-5dd5756b68-nrhbc                     0/1     Running   1 (76s ago)   11m
kube-system          coredns-5dd5756b68-q2lsg                     0/1     Running   1 (76s ago)   11m
kube-system          etcd-kind-control-plane                      1/1     Running   1 (87s ago)   11m
kube-system          kube-apiserver-kind-control-plane            1/1     Running   1 (87s ago)   11m
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   1 (87s ago)   11m
kube-system          kube-scheduler-kind-control-plane            1/1     Running   1 (87s ago)   11m
local-path-storage   local-path-provisioner-6f8956fb48-t4xwd      1/1     Running   1 (76s ago)   11m
[jspaleta@msi ~]$ echo "CoreDNS pods do not recover after kind node container restarts using k8s 1.28.0"
CoreDNS pods do not recover after kind node container restarts using k8s 1.28.0