This file has been truncated.
Feb 09 02:29:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:19.823087 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:29:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:19.340121 13938 prober.go:114] "Probe failed" probeType="Liveness" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a containerName="cinder-csi-plugin" probeResult=failure output="HTTP probe failed with statuscode: 500"
Feb 09 02:29:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:18.656509 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:29:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:18.656279 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:29:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:15.668428 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:29:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:15.667998 13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235"
Feb 09 02:29:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:15.667992 13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02"
Feb 09 02:29:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:15.667986 13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85"
Feb 09 02:29:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:15.667981 13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3"
Feb 09 02:29:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:15.667974 13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f"
Feb 09 02:29:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:15.667958 13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028"
Feb 09 02:29:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:15.314947 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:29:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:14.667900 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:29:14 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:14.667765 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:29:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:14.641183 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:29:14 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:14.640674 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729"
Feb 09 02:29:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:13.642115 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:13.641906 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:13.641683 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:13.641476 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:13.641283 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:13.641097 13938 status_manager.go:667] "Failed to get status for pod" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f pod="kube-system/snapshot-controller-7d445c66c9-v9z66" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/snapshot-controller-7d445c66c9-v9z66\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:13.640867 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:13.640601 13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:13.173075 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nodelocaldns-kn4r6.17b20f9ccc783977", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3651", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nodelocaldns-kn4r6", UID:"c62fa1bd-8aeb-471c-8b6e-3e6847025b23", APIVersion:"v1", ResourceVersion:"828", FieldPath:"spec.containers{node-cache}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 14, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 10, 279229050, time.Local), Count:74, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nodelocaldns-kn4r6.17b20f9ccc783977": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:29:12 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:12.821791 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:29:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:10.668081 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:29:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:10.667925 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3"
Feb 09 02:29:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:10.553893 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:29:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:10.553876 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:10.553648 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:10.553411 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:10.553193 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:10.552915 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:05.821298 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:29:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:05.663442 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:29:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:05.663239 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9"
Feb 09 02:29:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:05.305441 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:29:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:04.644390 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:29:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:04.643870 13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235"
Feb 09 02:29:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:04.643865 13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02"
Feb 09 02:29:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:04.643859 13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85"
Feb 09 02:29:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:04.643854 13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3"
Feb 09 02:29:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:04.643847 13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f"
Feb 09 02:29:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:04.643829 13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:03.667952 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.667773 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.642426 13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.642221 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.642018 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.641790 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.641522 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.641262 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.641037 13938 status_manager.go:667] "Failed to get status for pod" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f pod="kube-system/snapshot-controller-7d445c66c9-v9z66" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/snapshot-controller-7d445c66c9-v9z66\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.640620 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.609660 13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:03.251124 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:03.250831 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729"
Feb 09 02:29:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:03.172121 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nodelocaldns-kn4r6.17b20f9ccc783977", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3651", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nodelocaldns-kn4r6", UID:"c62fa1bd-8aeb-471c-8b6e-3e6847025b23", APIVersion:"v1", ResourceVersion:"828", FieldPath:"spec.containers{node-cache}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 14, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 10, 279229050, time.Local), Count:74, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nodelocaldns-kn4r6.17b20f9ccc783977": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:29:02 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:02.249473 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:29:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:02.249397 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:02.249150 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729"
Feb 09 02:29:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:02.248986 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:7d02e4fc23e07d65dc4b3c032f01eede1d428c9f80819c36c5482d221dbfca79}
Feb 09 02:29:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:02.248960 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:72c07675979b2b7e6b491bf2a0fa51bad9e1a2014ae4deb05d23ac4b66839c1c}
Feb 09 02:29:01 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:01.771910 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:29:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:01.640805 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:29:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:01.640797 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:29:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:29:01.640777 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729"
Feb 09 02:29:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:00.267460 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:29:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:00.267445 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:00.267272 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:00.267064 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:00.266831 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:29:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:29:00.266603 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:59.667840 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:28:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:59.667705 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3"
Feb 09 02:28:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:59.640851 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:28:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:59.640674 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:28:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:58.820248 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:28:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:55.296274 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:28:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:53.642012 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:53.641849 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:53.641696 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:53.641515 13938 status_manager.go:667] "Failed to get status for pod" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f pod="kube-system/snapshot-controller-7d445c66c9-v9z66" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/snapshot-controller-7d445c66c9-v9z66\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:53.641333 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:53.641120 13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:53.640725 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:53.640342 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:53 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:53.171364 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nodelocaldns-kn4r6.17b20f9ccc783977", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3651", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nodelocaldns-kn4r6", UID:"c62fa1bd-8aeb-471c-8b6e-3e6847025b23", APIVersion:"v1", ResourceVersion:"828", FieldPath:"spec.containers{node-cache}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 14, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 10, 279229050, time.Local), Count:74, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nodelocaldns-kn4r6.17b20f9ccc783977": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:28:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:52.640572 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:28:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:52.640385 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:28:51 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:51.819009 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:28:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:50.655811 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:28:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:50.655602 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:49.935059 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:49.935044 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:49.934850 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:49.934612 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:49.934336 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:49.933919 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:49.641387 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:49.640843 13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:49.640834 13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:49.640826 13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:49.640817 13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:49.640805 13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f"
Feb 09 02:28:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:49.640779 13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028"
Feb 09 02:28:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:48.641069 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:28:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:48.640905 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3"
Feb 09 02:28:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:46.671656 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:28:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:46.671251 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:28:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:46.671239 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:28:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:46.671210 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729"
Feb 09 02:28:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:45.641485 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:28:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:45.641190 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:28:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:45.288898 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:28:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:44.818394 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:28:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:43.642621 13938 status_manager.go:667] "Failed to get status for pod" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f pod="kube-system/snapshot-controller-7d445c66c9-v9z66" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/snapshot-controller-7d445c66c9-v9z66\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:43.642456 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:43.642294 13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:43.642122 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:43.641948 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:43.641756 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:43.641482 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:43.641229 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:43.170942 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nodelocaldns-kn4r6.17b20f9ccc783977", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3651", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nodelocaldns-kn4r6", UID:"c62fa1bd-8aeb-471c-8b6e-3e6847025b23", APIVersion:"v1", ResourceVersion:"828", FieldPath:"spec.containers{node-cache}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 14, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 10, 279229050, time.Local), Count:74, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nodelocaldns-kn4r6.17b20f9ccc783977": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:28:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:39.689507 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:28:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:39.689492 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:39.689306 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:39.689123 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:39.688914 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:39.688668 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:37.816778 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:28:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:37.641027 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:28:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:37.640815 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:28:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:35.663661 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:28:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:35.663485 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9"
Feb 09 02:28:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:35.641543 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:28:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:35.641020 13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235"
Feb 09 02:28:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:35.641014 13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02"
Feb 09 02:28:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:35.641007 13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85"
Feb 09 02:28:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:35.641000 13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3"
Feb 09 02:28:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:35.640992 13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f"
Feb 09 02:28:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:35.640971 13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028"
Feb 09 02:28:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:35.281671 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:28:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:34.641350 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:28:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:34.641048 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:28:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:34.641041 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:28:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:34.641025 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729"
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:33.671254 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:33.671095 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3"
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:33.643899 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:33.643281 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:33.643127 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:33.642959 13938 status_manager.go:667] "Failed to get status for pod" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f pod="kube-system/snapshot-controller-7d445c66c9-v9z66" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/snapshot-controller-7d445c66c9-v9z66\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:33.642788 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:33.642570 13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:33.641492 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:33.641087 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:33.169563 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nodelocaldns-kn4r6.17b20f9ccc783977", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3651", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nodelocaldns-kn4r6", UID:"c62fa1bd-8aeb-471c-8b6e-3e6847025b23", APIVersion:"v1", ResourceVersion:"828", FieldPath:"spec.containers{node-cache}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 14, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 10, 279229050, time.Local), Count:74, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nodelocaldns-kn4r6.17b20f9ccc783977": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:28:32 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:32.640681 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:28:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:32.640513 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:28:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:30.815400 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:28:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:29.414067 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:28:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:29.414051 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:29.413842 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:29.413593 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:29.413178 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:29.412851 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:25.273949 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:23.814494 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:23.671767 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.671591 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.642585 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.642389 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.642186 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641989 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:23.641799 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641783 13938 status_manager.go:667] "Failed to get status for pod" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f pod="kube-system/snapshot-controller-7d445c66c9-v9z66" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/snapshot-controller-7d445c66c9-v9z66\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641588 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641327 13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641327 13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641322 13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641317 13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641312 13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641305 13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641293 13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:23.641132 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:23.168281 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nodelocaldns-kn4r6.17b20f9ccc783977", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3651", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nodelocaldns-kn4r6", UID:"c62fa1bd-8aeb-471c-8b6e-3e6847025b23", APIVersion:"v1", ResourceVersion:"828", FieldPath:"spec.containers{node-cache}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 14, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 10, 279229050, time.Local), Count:74, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nodelocaldns-kn4r6.17b20f9ccc783977": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:28:22 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:22.641312 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:28:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:22.640938 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:28:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:22.640925 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:28:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:22.640893 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729" | |
Feb 09 02:28:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:21.664255 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:28:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:21.664121 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3" | |
Feb 09 02:28:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:21.641004 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:28:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:21.640842 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:28:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:21.640838 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" | |
Feb 09 02:28:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:21.640626 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" | |
Feb 09 02:28:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:19.373313 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:28:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:19.373297 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:19.373103 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:19.372901 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:19.372687 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:19.372438 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:16.813474 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:28:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:15.266584 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:28:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:13.642360 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:13.642185 13938 status_manager.go:667] "Failed to get status for pod" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f pod="kube-system/snapshot-controller-7d445c66c9-v9z66" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/snapshot-controller-7d445c66c9-v9z66\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:13.642001 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:13.641832 13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:13.641651 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:13.641416 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:13.641182 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:13.640923 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:13.167242 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nodelocaldns-kn4r6.17b20f9ccc783977", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3651", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nodelocaldns-kn4r6", UID:"c62fa1bd-8aeb-471c-8b6e-3e6847025b23", APIVersion:"v1", ResourceVersion:"828", FieldPath:"spec.containers{node-cache}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 14, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 10, 279229050, time.Local), Count:74, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nodelocaldns-kn4r6.17b20f9ccc783977": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:28:12 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:12.663715 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:28:12 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:12.663536 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807" | |
Feb 09 02:28:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:11.642789 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:28:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:11.641988 13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235" | |
Feb 09 02:28:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:11.641981 13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02" | |
Feb 09 02:28:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:11.641974 13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85" | |
Feb 09 02:28:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:11.641968 13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3" | |
Feb 09 02:28:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:11.641960 13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f" | |
Feb 09 02:28:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:11.641939 13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028" | |
Feb 09 02:28:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:11.172574 13938 status_manager.go:667] "Failed to get status for pod" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f pod="kube-system/snapshot-controller-7d445c66c9-v9z66" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/snapshot-controller-7d445c66c9-v9z66\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:11.172076 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" event=&{ID:4d4fcb97-a65e-43e4-a2c8-ac710c48704f Type:ContainerStarted Data:41f0abb3fa18be741de9cfb410b43019df22d12b4d50d992ae7654e34509c305} | |
Feb 09 02:28:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:10.679050 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:28:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:10.663450 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:28:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:10.663265 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3" | |
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:09.812613 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:09.679908 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:09.679744 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" | |
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:09.678815 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:09.678588 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" | |
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:09.284293 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:09.284282 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:09.284039 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:09.283803   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:09.283505   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:09.283242   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:07 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:07.643453   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:28:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:07.643137   13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:28:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:07.643131   13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:28:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:07.643114   13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729"
Feb 09 02:28:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:05.258939   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:28:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:03.641798   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:03.641637   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:03.641330   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:03.641038   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:03.640628   13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:03.640348   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:03.640142   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:28:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:03.609339   13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running
Feb 09 02:28:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:03.166108   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nodelocaldns-kn4r6.17b20f9ccc783977", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3651", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nodelocaldns-kn4r6", UID:"c62fa1bd-8aeb-471c-8b6e-3e6847025b23", APIVersion:"v1", ResourceVersion:"828", FieldPath:"spec.containers{node-cache}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 14, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 10, 279229050, time.Local), Count:74, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nodelocaldns-kn4r6.17b20f9ccc783977": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:28:02 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:02.812188   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:28:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:00.663635   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:28:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:00.663467   13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:28:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:28:00.584471   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:28:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:00.584048   13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235"
Feb 09 02:28:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:00.584042   13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02"
Feb 09 02:28:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:00.584037   13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85"
Feb 09 02:28:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:00.584032   13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3"
Feb 09 02:28:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:00.584025   13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f"
Feb 09 02:28:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:00.584010   13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028"
Feb 09 02:28:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:28:00.566900   13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v"
Feb 09 02:27:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:59.018123   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:27:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:59.018103   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:59.017896   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:59.017737   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:59.017517   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:59.017282   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:58.663394   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:27:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:58.663213   13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9"
Feb 09 02:27:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:58.641317   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:27:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:58.640832   13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235"
Feb 09 02:27:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:58.640825   13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02"
Feb 09 02:27:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:58.640819   13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85"
Feb 09 02:27:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:58.640812   13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3"
Feb 09 02:27:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:58.640805   13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f"
Feb 09 02:27:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:58.640786   13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028"
Feb 09 02:27:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:57.663010   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nodelocaldns-kn4r6.17b20f9ccc783977", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3651", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nodelocaldns-kn4r6", UID:"c62fa1bd-8aeb-471c-8b6e-3e6847025b23", APIVersion:"v1", ResourceVersion:"828", FieldPath:"spec.containers{node-cache}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 14, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 10, 279229050, time.Local), Count:74, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nodelocaldns-kn4r6.17b20f9ccc783977": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:27:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:57.662755   13938 event.go:221] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b210b2fd23fb4b", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}' (retry limit exceeded!)
Feb 09 02:27:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:57.662699   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:27:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:57.640514   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:27:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:57.640382   13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:27:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:56.659990   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:27:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:56.659851   13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3"
Feb 09 02:27:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:55.810934   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:27:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:55.656218   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:27:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:55.656069   13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:27:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:55.252177   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:53.672065   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:53.671740   13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:53.671734   13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:53.671716   13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729"
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:53.642268   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:53.642092   13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:53.641916   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:53.641744   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:53.641518   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:53.641273   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:53.640954   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:50.139489   13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:50.139079   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerStarted Data:db54a9a6f70c897d11eff1b97e437829783ed1ab61889eb60a8ed3ad2c29b325}
Feb 09 02:27:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:49.640784   13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:27:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:48.992913   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:27:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:48.992904   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:48.992777   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:48.992649   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:48.992515   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:48.992372   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:48.809790   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:27:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:47.667349   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:27:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:47.667188   13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:27:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:47.661910   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:27:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:47.641768   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:27:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:47.641162   13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235"
Feb 09 02:27:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:47.641153   13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02"
Feb 09 02:27:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:47.641144   13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85"
Feb 09 02:27:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:47.641135   13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3"
Feb 09 02:27:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:47.641125   13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f"
Feb 09 02:27:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:47.641100   13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028"
Feb 09 02:27:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:45.640279   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:27:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:45.640164   13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:27:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:45.244547   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:27:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:44.671780 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:27:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:44.671638 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3" | |
Feb 09 02:27:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:43.680259 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:27:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:43.680085 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" | |
Feb 09 02:27:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:43.679042 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:27:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:43.678673 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" | |
Feb 09 02:27:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:43.642522 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:43.642335 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:43.642166 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:43.641965 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:43.641702 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:43.641254 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:41 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:41.808793 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:27:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:38.934390 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:27:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:38.934371 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:38.934172 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:38.933989 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:38.933803 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:38.933394 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:38.641001 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:27:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:38.640700 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:27:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:38.640693 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:27:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:38.640677 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729" | |
Feb 09 02:27:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:37.660990 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:27:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:35.664398 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:27:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:35.663939 13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235" | |
Feb 09 02:27:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:35.663933 13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02" | |
Feb 09 02:27:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:35.663928 13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85" | |
Feb 09 02:27:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:35.663923 13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3" | |
Feb 09 02:27:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:35.663917 13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f" | |
Feb 09 02:27:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:35.663900 13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028" | |
Feb 09 02:27:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:35.238155 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:27:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:34.808246 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:27:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:34.641308 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:27:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:34.641118 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec" | |
Feb 09 02:27:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:33.671332 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:27:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:33.671155 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807" | |
Feb 09 02:27:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:33.642122 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:33.641941 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:33.641740 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:33.641441 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:33.641096 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:33.640738 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:32 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:32.664192 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:27:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:32.664058 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3" | |
Feb 09 02:27:32 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:32.641317 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:27:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:32.641153 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" | |
Feb 09 02:27:31 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:31.675384 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:27:31 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:31.675197 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" | |
Feb 09 02:27:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:30.641051 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:27:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:30.640927 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:27:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:28.815348 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:27:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:28.815329 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:28.815144 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:28.814974 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:28.814744 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:28.814459 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:27.807203 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:27:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:27.660109 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:27:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:25.232378 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:24.640775 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:24.640465 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:24.640459 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:24.640442 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729" | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:24.099332 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:24.098897 13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235" | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:24.098892 13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02" | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:24.098886 13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85" | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:24.098881 13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3" | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:24.098873 13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f" | |
Feb 09 02:27:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:24.098853   13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.642143   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.641920   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.641700   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.641455   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.641195   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.640915   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:23.124734   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.124283   13938 scope.go:115] "RemoveContainer" containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.124277   13938 scope.go:115] "RemoveContainer" containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.124272   13938 scope.go:115] "RemoveContainer" containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.124266   13938 scope.go:115] "RemoveContainer" containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.124259   13938 scope.go:115] "RemoveContainer" containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.124240   13938 scope.go:115] "RemoveContainer" containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.096836   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:23.096440   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:6d92288eaf55eadf0d754818e3b88a1603428f688c3d7e4e9eeda77a5f8fb3fe}
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:22.664525   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.664327   13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:22.219938   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.154932   13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.143359   13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.129544   13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.113908   13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.104816   13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.093048   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092776   13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v"
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092664   13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092645   13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="dc393a39999347962a6f302821aff146e801bb98498b122733ecb0d753f5425d"
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092637   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:dc393a39999347962a6f302821aff146e801bb98498b122733ecb0d753f5425d}
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092629   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f}
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092619   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028}
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092610   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85}
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092600   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02}
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092587   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3}
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092557   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235}
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092540   13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f" exitCode=2
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092532   13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028" exitCode=2
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092524   13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85" exitCode=2
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092515   13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02" exitCode=2
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092507   13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3" exitCode=1
Feb 09 02:27:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:22.092488   13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235" exitCode=2
Feb 09 02:27:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:21.675581   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:27:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:21.675412   13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3"
Feb 09 02:27:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:21.640867   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-provisioner" containerID="containerd://555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f" gracePeriod=30
Feb 09 02:27:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:21.640842   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-snapshotter" containerID="containerd://18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3" gracePeriod=30
Feb 09 02:27:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:21.640834   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:27:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:21.640819   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-resizer" containerID="containerd://cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85" gracePeriod=30
Feb 09 02:27:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:21.640793   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="liveness-probe" containerID="containerd://c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02" gracePeriod=30
Feb 09 02:27:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:21.640763   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="cinder-csi-plugin" containerID="containerd://cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235" gracePeriod=30
Feb 09 02:27:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:21.640696   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-attacher" containerID="containerd://c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028" gracePeriod=30
Feb 09 02:27:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:21.640646   13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:27:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:20.805785   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:27:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:18.679661   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:27:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:18.679491   13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:27:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:18.678516   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:27:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:18.678312   13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9"
Feb 09 02:27:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:18.488262   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:27:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:18.488245   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:18.488037   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:18.487837   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:18.487587   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:18.487274   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:17.664079   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:27:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:17.663954   13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:27:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:17.659265   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:27:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:15.226211   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:27:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:13.804931   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:27:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:13.642589   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:13.642398   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:13.642205   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:13.641959   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:13.641721   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:13.641388   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:11.672192   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:27:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:11.672024   13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:27:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:10.664216   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:27:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:10.663899   13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:27:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:10.663892   13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:27:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:10.663874   13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729"
Feb 09 02:27:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:08.407185   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:27:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:08.407170   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:08.407003   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:08.406753   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:08.406456   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:08.406052   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:27:07 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:07.671441   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:27:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:07.671263   13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:27:07 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:07.658445   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:27:07 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:07.641429   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:27:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:07.641199   13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3"
Feb 09 02:27:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:06.804509   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:27:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:05.656313   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:27:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:05.655938   13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:27:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:05.215564   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:27:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:04.643178   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:27:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:04.643032   13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:27:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:27:03.659495   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:27:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:03.659298 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" | |
Feb 09 02:27:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:03.641424 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:03.641232 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:03.641049 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:03.640858 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:03.640684 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:03.640498 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:27:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:03.608390 13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running | |
Feb 09 02:27:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:27:01.567380 13938 prober.go:114] "Probe failed" probeType="Liveness" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="cinder-csi-plugin" probeResult=failure output="HTTP probe failed with statuscode: 500" | |
Feb 09 02:26:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:59.803716 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:26:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:58.226073 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:26:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:58.226062 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:58.225888 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:58.225738 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:58.225466 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:58.225084 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:58.080228 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:26:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:58.079906 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:26:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:58.079899 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:26:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:58.079879 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729" | |
Feb 09 02:26:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:57.667595 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:26:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:57.667432 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec" | |
Feb 09 02:26:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:57.657486 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:26:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:57.052568 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:26:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:57.052495 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:57.052267 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:26:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:57.052261 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:26:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:57.052247 13938 scope.go:115] "RemoveContainer" containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729" | |
Feb 09 02:26:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:57.052078 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:eef917cb7e822e66b7a3b6558c87a7eb5b4923237b64f6578ceb7b5f3f053b6f} | |
Feb 09 02:26:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:56.107673 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:26:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:56.049369 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:56.049097 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/csi-cinder-nodeplugin-tccts" | |
Feb 09 02:26:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:56.048943 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:26:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:56.048932 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a700b0e757c6dea93d3c8fc7dd28a7d283fabf1e34cecec69361c042ecfb9a42" | |
Feb 09 02:26:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:56.048923 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:a700b0e757c6dea93d3c8fc7dd28a7d283fabf1e34cecec69361c042ecfb9a42} | |
Feb 09 02:26:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:56.048904 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729} | |
Feb 09 02:26:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:56.048870 13938 generic.go:296] "Generic (PLEG): container finished" podID=a21982eb-0681-4eb6-822e-4123e7074f2a containerID="c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729" exitCode=2 | |
Feb 09 02:26:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:55.824958 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:26:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:55.824707 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:26:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:55.824689 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:26:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:55.640903 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a containerName="node-driver-registrar" containerID="containerd://c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729" gracePeriod=30 | |
Feb 09 02:26:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:55.204584 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:26:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:54.663564 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:26:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:54.663348 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807" | |
Feb 09 02:26:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:53.641944 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:53.641639 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:53.641413 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:53.641216 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:53.641013 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:53.640720 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:52.802966 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:26:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:52.679420 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:26:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:52.679282 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3" | |
Feb 09 02:26:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:52.664311 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:26:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:52.664191 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:26:51 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:51.641263 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:26:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:51.641090 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" | |
Feb 09 02:26:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:48.664400 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:26:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:48.664210 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" | |
Feb 09 02:26:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:47.926924 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:26:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:47.926909 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:47.926735 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:47.926531 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:47.926278 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:47.925832 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:47.656357 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:26:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:45.801743 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:26:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:45.192172 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:26:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:45.028965 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:45.028721 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:26:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:45.028514 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:26:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:45.028505 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:26:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:45.028261 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:c2accfd0e38d45f10704b45ef1032f12b0ba3903c91b5365bede90e5cff82729} | |
Feb 09 02:26:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:44.731567   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:26:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:44.663912   13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:26:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:44.663906   13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:26:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:44.663886   13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:26:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:43.642138   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:43.641877   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:43.641466   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:43.641035   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:43.640555   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:42 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:42.665980   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:26:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:42.665807   13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:26:42 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:42.664798   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:26:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:42.664626   13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:26:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:40.640956   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:26:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:40.640744   13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:26:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:39.663719   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:26:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:39.663587   13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3"
Feb 09 02:26:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:39.640969   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:26:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:39.640822   13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:26:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:38.800609   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:26:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:37.667672   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:26:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:37.667385   13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9"
Feb 09 02:26:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:37.655742   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:26:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:37.532496   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:26:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:37.532481   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:37.532216   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:37.531916   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:37.531567   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:37.531271   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:35.178814   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:26:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:33.642276   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:33.641675   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:33.641483   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:33.641321   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:33.641063   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:31 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:31.799961   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:26:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:30.656678   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:26:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:30.656293   13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:26:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:30.656287   13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:26:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:30.656269   13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:26:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:30.640600   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:26:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:30.640418   13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:26:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:28.691670   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:26:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:28.691336   13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:26:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:28.664131   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:26:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:28.663903   13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:26:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:27.654839   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:26:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:27.154322   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:26:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:27.154309   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:27.154031   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:27.153810   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:27.153271   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:27.152693   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:25.640476   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:26:25 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:25.640280   13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:26:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:25.165239   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:26:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:24.799003   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:26:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:24.663877   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:26:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:24.663716   13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3"
Feb 09 02:26:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:23.641824   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:23.641469   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:23.641179   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:23.640893   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:23.640642   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:22 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:22.656595   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:26:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:22.656378   13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9"
Feb 09 02:26:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:19.664479   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:26:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:19.664157   13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:26:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:19.664151   13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:26:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:19.664133   13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:26:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:17.797934   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:26:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:17.654174   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:26:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:17.082505   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:26:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:17.082489   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:17.082269   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:17.082098   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:17.081851   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:17.081512   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:15.152449   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:26:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:14.691476   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:26:14 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:14.691283   13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:26:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:14.671565   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:26:14 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:14.671416   13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:26:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:13.671786   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:26:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:13.671654   13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:26:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:13.641902   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:13.641690   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:13.641391   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:13.641176   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:13.641028   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:26:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:13.640912   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:26:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:13.640812   13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:26:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:10.797235   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:26:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:10.279257   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:26:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:10.279061 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" | |
Feb 09 02:26:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:10.252367 13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/nodelocaldns-kn4r6" | |
Feb 09 02:26:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:09.664426 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-hq9nf.17b20f9c59647467", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"3653", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-hq9nf", UID:"58d10739-95b9-4783-bef2-7f61e6690f70", APIVersion:"v1", ResourceVersion:"511", FieldPath:"spec.containers{kube-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 12, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 26, 9, 664056139, time.Local), Count:64, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/kube-proxy-hq9nf.17b20f9c59647467": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:26:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:09.664084 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:26:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:09.663940 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3" | |
Feb 09 02:26:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:08.331658 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:26:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:08.331465 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" | |
Feb 09 02:26:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:08.306820 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/nodelocaldns-kn4r6" | |
Feb 09 02:26:07 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:07.643860 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:26:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:07.643532 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:26:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:07.643526 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:26:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:07.643511 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:26:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:06.886405 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:26:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:06.886384 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:06.885807 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:06.885565 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:06.885340 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:06.885139 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:05.252830 13938 prober.go:114] "Probe failed" probeType="Liveness" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 containerName="node-cache" probeResult=failure output="Get \"http://169.254.25.10:9254/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" | |
Feb 09 02:26:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:05.141041 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:03.960414 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:03.960237 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:03.796700 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:03.642483 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:03.642293 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:03.641921 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:03.641621 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:03.641497 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:03.641226 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:03.641213 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807" | |
Feb 09 02:26:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:03.607486 13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running | |
Feb 09 02:26:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:02.958996 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:02 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:02.958975 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:26:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:02.958790 13938 scope.go:115] "RemoveContainer" containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" | |
Feb 09 02:26:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:02.958605 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerStarted Data:e174e85810644097bc3e2d1ffb20dc8df14f50e7b6a1fb2c7ab47b9242177922} | |
Feb 09 02:26:02 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:02.640783 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:26:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:02.640610 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" | |
Feb 09 02:26:02 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:02.036814 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:26:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:01.956545 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:26:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:01.956284 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/nodelocaldns-kn4r6" | |
Feb 09 02:26:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:01.956186 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:26:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:01.956175 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b8c73ad85c95e99229cced82750e052f3a394ee171eb3dc5e0213939869af7f5" | |
Feb 09 02:26:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:01.956167 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerDied Data:b8c73ad85c95e99229cced82750e052f3a394ee171eb3dc5e0213939869af7f5} | |
Feb 09 02:26:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:01.956147 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerDied Data:1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9} | |
Feb 09 02:26:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:01.956120 13938 generic.go:296] "Generic (PLEG): container finished" podID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 containerID="1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" exitCode=137 | |
Feb 09 02:26:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:26:00.664440 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:26:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:26:00.664269 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec" | |
Feb 09 02:25:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:59.641297 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:25:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:59.641132 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:25:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:58.950891 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:58.950472 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 containerName="node-cache" containerID="containerd://1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9" gracePeriod=2 | |
Feb 09 02:25:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:58.950358 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:c8d2b10c1762dea123d298314594d82995a1867ba574f79269961b678c709d02} | |
Feb 09 02:25:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:58.950330 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:cc0d0ad1d76ff40a5447e9e12c3d5d3bb2cda4dc4b6c79dc75d0973870a87235} | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:57.968071 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.967899 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.946689 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.946388 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/nodelocaldns-kn4r6" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.946142 13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.946032 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/nodelocaldns-kn4r6" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.945370 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerStarted Data:1e4a34477afa20b5e05f5720dad269c223c70d6967e39c258493b781e1fbe4c9} | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.943179 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:18d6a31cd40bf4aacd24609cab6d651de23ff8a956033777711d5558fb1c12b3} | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.943170 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:555d25e3633b0069771dfda68134e19e5531402ea5d889eb5aee4d4e5d48a93f} | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.943160 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:c49b4e1cbd1bd4bde772d0d096f64f8fa8f52cac2436ce385dd772f58524c028} | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.943133 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:cebe89c41b4641f0151645b27691af4ac97d4c072dd9eee3d528f822eea65c85} | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.664428 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.640739 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.640733 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.640728 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.640723 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.640717 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" | |
Feb 09 02:25:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:57.640702 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:56.939147 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:56.939120 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:56.938991 13938 scope.go:115] "RemoveContainer" containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3" | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:56.938849 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerStarted Data:2857a50c76f3922a42670e3d320b0355825a795286a70b7b7bd930d68c621de7} | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:56.806977 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:56.806964 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:56.806752 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:56.806562 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:56.806384 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:56.806095 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:56.795630 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:56.640981 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:56.640653 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:56.640647 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:56.640631 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:25:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:56.000275 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:25:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:55.936539 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:55.936289 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-proxy-hq9nf"
Feb 09 02:25:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:55.936142 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:25:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:55.936131 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2db8239b182388750b441a2a92e37329ab5eda17765f5edb4fe8cc7c05bb7f33"
Feb 09 02:25:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:55.936123 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerDied Data:2db8239b182388750b441a2a92e37329ab5eda17765f5edb4fe8cc7c05bb7f33}
Feb 09 02:25:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:55.936106 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerDied Data:1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3}
Feb 09 02:25:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:55.936077 13938 generic.go:296] "Generic (PLEG): container finished" podID=58d10739-95b9-4783-bef2-7f61e6690f70 containerID="1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3" exitCode=2
Feb 09 02:25:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:55.641176 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 containerName="kube-proxy" containerID="containerd://1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3" gracePeriod=30
Feb 09 02:25:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:55.130087 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:25:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:53.641225 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:53.640961 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:52.641008 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:25:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:52.640794 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:25:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:49.794928 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:25:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:49.641555 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:25:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:49.641043 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:25:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:47.640759 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:25:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:47.640544 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:25:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:46.454138 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:25:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:46.454124 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:46.453925 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:46.453764 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:46.453599 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:46.453356 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:45.663140 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:25:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:45.662956 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:25:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:45.640615 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:25:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:45.640482 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:25:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:45.122643 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:25:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:43.641813 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:25:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:43.641571 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:43.641296 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:25:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:43.641288 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:25:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:43.641280 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:25:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:43.641271 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:25:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:43.641259 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:25:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:43.641238 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:25:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:43.641187 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:42 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:42.793509 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:25:41 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:41.672151 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:25:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:41.671821 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:25:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:41.671815 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:25:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:41.671798 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:25:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:39.641062 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:25:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:39.640868 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:25:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:38.640524 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:25:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:38.640274 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:25:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:36.664236 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:25:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:36.664013 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:25:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:36.362857 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:25:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:36.362841 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:36.362504 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:36.362199 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:36.362042 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:36.361896 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:35.792243 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:25:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:35.115526 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:25:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:34.663812 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:25:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:34.663688 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:25:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:33.641025 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:33.640872 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:32 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:32.656699 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:25:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:32.656466 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:25:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:29.664759 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:25:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:29.664209 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:25:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:29.664204 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:25:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:29.664199 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:25:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:29.664194 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:25:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:29.664187 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:25:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:29.664152 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:25:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:28.791544 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:25:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:27.671381 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:25:27 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:27.671145 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:25:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:26.672100 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:25:26 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:26.671709 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:25:26 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:26.671702 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:25:26 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:26.671668 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:25:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:26.156679 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:25:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:26.156665 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:26.156437 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:26.156076 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:26.155826 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:26.155665 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:25.675679 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:25:25 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:25.675469 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:25:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:25.107686 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:25:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:23.668093 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:25:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:23.667983 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:25:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:23.640959 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:23.640527 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:22 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:22.640832 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:25:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:22.640660 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:25:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:21.790503 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:25:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:20.660502 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:25:20 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:20.660302 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:15.794822 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:15.794806 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:15.794587 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:15.794315 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:15.794095 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:15.793823 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:15.672289 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:15.671824 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" | |
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:15.671819 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" | |
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:15.671814 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" | |
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:15.671808 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" | |
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:15.671801 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" | |
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:15.671784 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" | |
Feb 09 02:25:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:15.101326 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:25:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:14.789214 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:25:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:13.641765 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:13.641500 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:25:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:13.641434 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:13.641361 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" | |
Feb 09 02:25:12 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:12.671398 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:25:12 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:12.671181 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807" | |
Feb 09 02:25:12 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:12.641184 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:25:12 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:12.640877 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:25:12 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:12.640871 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:25:12 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:12.640854 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:25:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:10.656194 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:25:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:10.656060 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:25:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:09.641165 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:25:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:09.640975 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:25:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:08.644243 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:25:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:08.644052 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec" | |
Feb 09 02:25:07 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:07.787995 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:25:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:05.639125 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:25:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:05.639109 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:05.638907 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:05.638724 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:05.638476 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:05.638173 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:05.093872 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:25:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:03.668142 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:25:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:03.667692 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" | |
Feb 09 02:25:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:03.667687 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" | |
Feb 09 02:25:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:03.667681 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" | |
Feb 09 02:25:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:03.667676 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" | |
Feb 09 02:25:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:03.667670 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" | |
Feb 09 02:25:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:03.667653 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" | |
Feb 09 02:25:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:03.640815 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:03.640588 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:25:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:03.607399 13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running | |
Feb 09 02:25:02 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:02.640645 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:25:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:25:02.640495 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" | |
Feb 09 02:25:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:25:00.786571 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:24:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:58.660402 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:24:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:58.660229 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807" | |
Feb 09 02:24:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:58.640966 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:24:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:58.640444 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:24:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:58.640438 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:24:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:58.640420 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:24:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:57.671338 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:24:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:57.671205 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:24:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:57.640708 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:24:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:57.640527 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec" | |
Feb 09 02:24:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:55.502509 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:24:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:55.502494 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:24:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:55.502291 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:24:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:55.502052 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:24:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:55.501792 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:24:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:55.501483 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:24:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:55.084734 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:24:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:54.640622 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:24:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:54.640406 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:24:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:54.385840 13938 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:24:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:53.640951 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:24:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:53.640697 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:24:51 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:51.526360 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:24:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:51.526199 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" | |
Feb 09 02:24:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:51.526016 13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" | |
Feb 09 02:24:51 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:51.185364 13938 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:24:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:49.584684 13938 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:48.788380 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:48.788246 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" | |
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:48.783347 13938 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:48.770316 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" | |
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:48.656271 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:48.655835 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:48.655830 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:48.655825 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:48.655818 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:48.655812 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:48.655795 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:48.383051 13938 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:48.181949 13938 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:48.181719 13938 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:48.181688 13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:48.181389 13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:48.181201 13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:48.181059 13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:24:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:48.180894 13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:46.943199 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:46.943020 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:46.925568 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/coredns-588bb58b94-8jdjw"
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:46.675696 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:46.675401 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:46.675395 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:46.675379 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:46.656391 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:46.656280 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:46.640372 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:24:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:46.640190 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:45.820289 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:45.820170 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:45.819980 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:45.819807 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:45.113885 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:45.113870 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:45.113664 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:45.113405 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:45.113183 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:45.112874 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:24:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:45.076209 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:24:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:44.818758 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:24:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:44.818512 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:24:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:44.818354 13938 scope.go:115] "RemoveContainer" containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807"
Feb 09 02:24:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:44.818202 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerStarted Data:ca3600d03f88fed1e2d92e43c7fbe2fab41dd9599e3f923aab3110e21ce301c8}
Feb 09 02:24:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:44.817065 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:24:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:44.816870 13938 scope.go:115] "RemoveContainer" containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c"
Feb 09 02:24:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:44.816785 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:24:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:44.816419 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerStarted Data:273195c3fd30b5d41d99014db7c67914953eb128d1480470136d80ab4f87b125}
Feb 09 02:24:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:44.708273 13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/coredns-588bb58b94-8jdjw"
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:43.927798 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:43.917135 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.811605 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.811423 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1"
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.811274 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="689271bb93c468e6bf5aa086660866065ff56f0fc8a30f612501ee83f67807a5"
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.811259 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerDied Data:689271bb93c468e6bf5aa086660866065ff56f0fc8a30f612501ee83f67807a5}
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.810132 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.809763 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-588bb58b94-8jdjw"
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.809590 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.809572 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3eec07064eadc8f0f502598440e7d2db1a6399bd36c5fc96d4b4eb9d7b11cc21"
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.809519 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerDied Data:3eec07064eadc8f0f502598440e7d2db1a6399bd36c5fc96d4b4eb9d7b11cc21}
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.809493 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerDied Data:c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807}
Feb 09 02:24:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:43.809436 13938 generic.go:296] "Generic (PLEG): container finished" podID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 containerID="c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807" exitCode=0
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.806796 13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.806775 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerDied Data:b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c}
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.806746 13938 generic.go:296] "Generic (PLEG): container finished" podID=fff05cf554f139c875ab310b098fe537 containerID="b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" exitCode=0
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.650586 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.650465 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.650460 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.650211 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.650207 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.650144 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.650058 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.650035 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.650006 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.649991 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:42.649950 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:24:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:41.525384 13938 prober.go:114] "Probe failed" probeType="Liveness" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 containerName="nginx-proxy" probeResult=failure output="Get \"http://10.0.74.64:8081/healthz\": dial tcp 10.0.74.64:8081: connect: connection refused"
Feb 09 02:24:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:38.770433 13938 prober.go:114] "Probe failed" probeType="Readiness" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 containerName="nginx-proxy" probeResult=failure output="Get \"http://10.0.74.64:8081/healthz\": dial tcp 10.0.74.64:8081: connect: connection refused"
Feb 09 02:24:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:38.667871 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:24:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:38.667673 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:24:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:37.797926 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 containerName="coredns" containerID="containerd://c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807" gracePeriod=30
Feb 09 02:24:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:37.668542 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:24:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:37.668089 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:24:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:37.668084 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:24:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:37.668077 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:24:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:37.668070 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:24:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:37.668064 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:24:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:37.668047 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:24:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:36.926184 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/coredns-588bb58b94-8jdjw"
Feb 09 02:24:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:36.797250 13938 prober.go:114] "Probe failed" probeType="Readiness" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 containerName="coredns" probeResult=failure output="HTTP probe failed with statuscode: 503"
Feb 09 02:24:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:36.796685 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/coredns-588bb58b94-8jdjw"
Feb 09 02:24:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:36.796436 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerStarted Data:c5931a318ee4c6e6b74912e49bb01a4e4f3fdef2db7ba6eac33ea93f800dd807}
Feb 09 02:24:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:36.640316 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:24:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:35.663735 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:24:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:35.663564 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:24:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:35.068684 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:24:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:33.641335 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:24:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:33.641000 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:24:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:33.640994 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:24:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:33.640978 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:24:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:32.640100 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 containerName="nginx-proxy" containerID="containerd://b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c" gracePeriod=30
Feb 09 02:24:31 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:31.641008 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:24:31 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:31.640856 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:24:26 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:26.779469 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerStarted Data:1b7660d09e454b37a85633579dce0b26b88c4a71779e6cdff512df7c88bbcec3}
Feb 09 02:24:26 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:26.663948 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:24:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:25.640783 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:24:25 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:25.640553 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:24:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:25.060139 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:24:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:23.640881 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:24:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:23.640693 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:24:22 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:22.663083 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:24:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:22.662888 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:24:22 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:22.642605 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:24:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:22.642057 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:24:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:22.641964 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:24:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:22.641864 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:24:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:22.641756 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:24:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:22.641644 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:24:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:22.641299 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:24:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:20.641491 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:24:20 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:20.641008 13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" | |
Feb 09 02:24:20 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:20.641001 13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" | |
Feb 09 02:24:20 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:20.640982 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:24:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:19.656724 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:24:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:19.656250 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:24:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:15.052439 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:24:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:13.663070 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:24:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:13.662870 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:11.687676 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:11.687221 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" | |
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:11.687215 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" | |
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:11.687210 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" | |
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:11.687203 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" | |
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:11.687194 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" | |
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:11.687173 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" | |
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:11.686478   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:11.686115   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:11.640866   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:24:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:11.640650   13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:24:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:08.656416   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:24:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:08.656218   13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:24:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:08.640601   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:24:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:08.640291   13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:24:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:08.640284   13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:24:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:08.640267   13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:24:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:06.643283   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:24:06 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:06.643146   13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:24:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:05.045429   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:24:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:03.606453   13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running
Feb 09 02:24:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:24:00.672213   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:24:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:00.671761   13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:24:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:00.671755   13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:24:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:00.671748   13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:24:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:00.671743   13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:24:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:00.671737   13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:24:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:24:00.671719   13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:23:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:59.641312   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:23:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:59.641109   13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:23:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:58.663540   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:23:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:58.663383   13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:23:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:58.339791   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:23:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:58.339499   13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:23:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:58.339494   13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:23:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:58.339484   13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:23:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:58.339274   13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/csi-cinder-nodeplugin-tccts"
Feb 09 02:23:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:56.659989   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:23:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:56.659810   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:23:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:55.038393   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:23:53 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:53.663544   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:23:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:53.663369   13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:23:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:52.725945   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:23:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:52.725622   13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:23:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:52.725615   13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:23:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:52.725598   13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:23:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:52.641381   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:23:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:52.641230   13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4"
Feb 09 02:23:51 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:51.724650   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:23:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:51.724304   13938 scope.go:115] "RemoveContainer" containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5"
Feb 09 02:23:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:51.724296   13938 scope.go:115] "RemoveContainer" containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8"
Feb 09 02:23:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:51.724281   13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:23:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:51.724072   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:a700b0e757c6dea93d3c8fc7dd28a7d283fabf1e34cecec69361c042ecfb9a42}
Feb 09 02:23:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:50.792851   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:23:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:50.729346   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:23:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:50.720248   13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/csi-cinder-nodeplugin-tccts"
Feb 09 02:23:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:50.720180   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:23:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:50.720170   13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ac65fc0f1573b17e10e845d48e2334a0266d69b2461779795faa290ec937d966"
Feb 09 02:23:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:50.720162   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:ac65fc0f1573b17e10e845d48e2334a0266d69b2461779795faa290ec937d966}
Feb 09 02:23:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:50.720153   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8}
Feb 09 02:23:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:50.720138   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5}
Feb 09 02:23:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:50.720121   13938 generic.go:296] "Generic (PLEG): container finished" podID=a21982eb-0681-4eb6-822e-4123e7074f2a containerID="052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" exitCode=2
Feb 09 02:23:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:50.720103   13938 generic.go:296] "Generic (PLEG): container finished" podID=a21982eb-0681-4eb6-822e-4123e7074f2a containerID="48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" exitCode=2
Feb 09 02:23:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:49.921924   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:23:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:49.921613   13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:23:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:49.716291   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a containerName="cinder-csi-plugin" containerID="containerd://48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5" gracePeriod=30
Feb 09 02:23:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:49.716231   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a containerName="liveness-probe" containerID="containerd://052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8" gracePeriod=30
Feb 09 02:23:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:49.716077   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:48aa5528c0748b95f5e7a0be0b17635287db957a7a3dcb7e0400053dadc06fe5}
Feb 09 02:23:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:49.716053   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:052053218f0c91f8f62a28d47117cbc051ccdda94c5ec513854daecfced148e8}
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:48.792578   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:48.671300   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:48.671119   13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:48.641179   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:48.640614   13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:48.640609   13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:48.640604   13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:48.640598   13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:48.640591   13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:48.640580   13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:48.640434   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:48.640427   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:23:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:48.640411   13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:23:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:45.030572   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:23:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:44.671956   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:23:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:44.671741 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" | |
Feb 09 02:23:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:43.640596 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:23:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:43.640324 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec" | |
Feb 09 02:23:41 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:41.668487 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:23:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:41.668292 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:23:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:39.641024 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:23:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:39.640894 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:23:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:36.671946 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:23:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:36.671500 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" | |
Feb 09 02:23:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:36.671495 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" | |
Feb 09 02:23:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:36.671490 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" | |
Feb 09 02:23:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:36.671485 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" | |
Feb 09 02:23:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:36.671479 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" | |
Feb 09 02:23:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:36.671460 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" | |
Feb 09 02:23:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:35.671311 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:23:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:35.671115 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:23:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:35.641196 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:23:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:35.640787 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:23:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:35.640775 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:23:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:35.640746 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:23:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:35.022163 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:23:31 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:31.664146 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:23:31 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:31.663978 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" | |
Feb 09 02:23:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:30.640652 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:23:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:30.640486 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec" | |
Feb 09 02:23:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:29.667684 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:23:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:29.667495 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:23:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:28.640406 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:23:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:28.640280 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:23:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:25.015586 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:23:22 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:22.656139 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:23:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:22.655836 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:23:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:22.655830 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:23:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:22.655813 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:23:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:21.671465 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:23:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:21.671284 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:23:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:21.641714 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:23:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:21.641247 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" | |
Feb 09 02:23:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:21.641242 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" | |
Feb 09 02:23:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:21.641237 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" | |
Feb 09 02:23:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:21.641230 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" | |
Feb 09 02:23:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:21.641224 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" | |
Feb 09 02:23:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:21.641207 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" | |
Feb 09 02:23:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:19.660293 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:23:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:19.660155 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" | |
Feb 09 02:23:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:18.675747 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:23:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:18.675558 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:23:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:16.663578 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:23:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:16.663425 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:23:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:15.663092 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:23:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:15.662904 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec" | |
Feb 09 02:23:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:15.008568 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:23:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:10.663760 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:23:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:10.663446 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:23:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:10.663440 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:23:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:10.663422 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:23:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:09.667286 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:23:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:09.667090 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:23:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:08.640937 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:23:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:08.640820 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:23:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:08.640585 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" | |
Feb 09 02:23:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:08.640475 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" | |
Feb 09 02:23:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:08.640470 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" | |
Feb 09 02:23:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:08.640465 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" | |
Feb 09 02:23:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:08.640459 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" | |
Feb 09 02:23:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:08.640453 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" | |
Feb 09 02:23:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:08.640433 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" | |
Feb 09 02:23:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:05.645783 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:23:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:05.645669 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:23:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:05.000613 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:23:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:04.645976 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:23:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:04.645848 13938 scope.go:115] "RemoveContainer" containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" | |
Feb 09 02:23:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:04.642917 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" event=&{ID:4d4fcb97-a65e-43e4-a2c8-ac710c48704f Type:ContainerStarted Data:a2f9a847a634a570516d6c9667027f6a6eda03be64c15165011ac00c4a132495} | |
Feb 09 02:23:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:03.778130 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:23:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:03.667497 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:23:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:03.667313 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:23:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:03.639290 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" | |
Feb 09 02:23:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:03.639144 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031" | |
Feb 09 02:23:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:03.639125 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="03e016d262da7021303bb0dc85e5eebd407a99a6a24f2158e13886c04253c62b"
Feb 09 02:23:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:03.639116 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" event=&{ID:4d4fcb97-a65e-43e4-a2c8-ac710c48704f Type:ContainerDied Data:03e016d262da7021303bb0dc85e5eebd407a99a6a24f2158e13886c04253c62b}
Feb 09 02:23:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:03.639101 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" event=&{ID:4d4fcb97-a65e-43e4-a2c8-ac710c48704f Type:ContainerDied Data:15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4}
Feb 09 02:23:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:03.639075 13938 generic.go:296] "Generic (PLEG): container finished" podID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f containerID="15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" exitCode=2
Feb 09 02:23:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:03.605945 13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running
Feb 09 02:23:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:02.636463 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f containerName="snapshot-controller" containerID="containerd://15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4" gracePeriod=30
Feb 09 02:23:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:02.636309 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" event=&{ID:4d4fcb97-a65e-43e4-a2c8-ac710c48704f Type:ContainerStarted Data:15d0d2591390bb9199ce64940ff50e38b37185b5c4408dd6c056871ddb6f48b4}
Feb 09 02:23:01 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:23:01.664460 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:23:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:01.664267 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:23:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:23:01.640790 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:22:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:57.641020 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:22:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:57.640637 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:22:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:57.640629 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:22:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:57.640609 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:22:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:56.640855 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:22:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:56.640393 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:22:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:56.640388 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:22:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:56.640381 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:22:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:56.640376 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:22:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:56.640369 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:22:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:56.640352 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:22:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:54.993995 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:22:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:54.664068 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:22:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:54.663929 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:22:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:54.641015 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:22:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:54.640818 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:22:51 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:51.640433 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:22:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:51.640217 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:22:51 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:51.618368 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:22:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:51.618219 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:22:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:50.616939 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:22:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:50.616755 13938 scope.go:115] "RemoveContainer" containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec"
Feb 09 02:22:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:50.616599 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerDied Data:c489efd31ee24de5f2deb2a23f7d388c2d5a0797f4a0f683591717b365c1fb20}
Feb 09 02:22:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:50.616568 13938 generic.go:296] "Generic (PLEG): container finished" podID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 containerID="c489efd31ee24de5f2deb2a23f7d388c2d5a0797f4a0f683591717b365c1fb20" exitCode=0
Feb 09 02:22:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:49.612657 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerStarted Data:de5ccab9982aa56078ab21d96fafb386bcfb8e7d5ca97cded54c37318edba0aa}
Feb 09 02:22:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:49.612631 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerDied Data:4c53b0f16a1f32ae20964a6d88eb3562a12e5169125cfc12237d9749a610a54f}
Feb 09 02:22:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:49.612584 13938 generic.go:296] "Generic (PLEG): container finished" podID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 containerID="4c53b0f16a1f32ae20964a6d88eb3562a12e5169125cfc12237d9749a610a54f" exitCode=0
Feb 09 02:22:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:48.629286 13938 scope.go:115] "RemoveContainer" containerID="45d4ac8c2740b18492afabffa8421e13f156788bd087fddd82e74117bc06f8cc"
Feb 09 02:22:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:48.610782 13938 scope.go:115] "RemoveContainer" containerID="072d86c54864316c8beb3a0ede1d944048703c65b240e3cc24af0756fd4f2718"
Feb 09 02:22:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:48.609667 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-flannel-x85zd"
Feb 09 02:22:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:48.609482 13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c"
Feb 09 02:22:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:48.609471 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1bb05fbba61614c3bda019a9d7483afa6280122c33bd081afedeb5dd078feb1c"
Feb 09 02:22:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:48.609463 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerDied Data:1bb05fbba61614c3bda019a9d7483afa6280122c33bd081afedeb5dd078feb1c}
Feb 09 02:22:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:48.609447 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerDied Data:deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec}
Feb 09 02:22:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:48.609419 13938 generic.go:296] "Generic (PLEG): container finished" podID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 containerID="deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec" exitCode=0
Feb 09 02:22:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:47.640581 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 containerName="kube-flannel" containerID="containerd://deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec" gracePeriod=30
Feb 09 02:22:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:46.641088 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:22:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:46.640960 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:22:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:44.987260 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:22:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:44.641455 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:22:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:44.641011 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:22:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:44.641007 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:22:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:44.641002 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:22:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:44.640997 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:22:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:44.640990 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:22:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:44.640973 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:22:42 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:42.679778 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:22:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:42.679461 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:22:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:42.679455 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:22:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:42.679439 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:22:42 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:42.656756 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:22:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:42.656591 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:22:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:39.641084 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:22:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:39.640871 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:22:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:37.663133 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:22:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:37.662911 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:22:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:34.979450 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:22:32 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:32.640490 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:22:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:32.640376 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:22:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:30.640745 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:22:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:30.640437 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:22:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:30.640431 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:22:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:30.640416 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:22:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:29.641521 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:22:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:29.640987 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:22:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:29.640981 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:22:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:29.640975 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:22:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:29.640969 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:22:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:29.640961 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:22:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:29.640943 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:22:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:28.655800 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:22:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:28.655659 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:22:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:26.640891 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:22:26 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:26.640706 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:22:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:24.971255 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:22:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:24.671428 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:22:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:24.671217 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:22:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:17.640486 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:22:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:17.640292 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:22:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:16.641262 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:22:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:16.641210 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:22:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:16.641058 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:22:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:16.640952 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:22:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:16.640946 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:22:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:16.640929 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad"
Feb 09 02:22:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:15.672254 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:22:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:15.671797 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:22:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:15.671791 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:22:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:15.671786 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:22:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:15.671780 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:22:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:15.671774 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:22:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:15.671755 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:22:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:14.963378 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:22:12 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:12.640457 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:22:12 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:12.640257 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:22:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:11.667826 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:22:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:11.667623 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:22:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:04.954801 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:22:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:04.667523 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:22:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:04.667225 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:22:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:04.667218 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:22:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:04.667203 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:22:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:04.666274 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:22:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:04.666133 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" | |
Feb 09 02:22:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:03.641773 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:22:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:03.641303 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" | |
Feb 09 02:22:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:03.641298 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" | |
Feb 09 02:22:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:03.641293 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" | |
Feb 09 02:22:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:03.641288 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" | |
Feb 09 02:22:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:03.641281 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" | |
Feb 09 02:22:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:03.641265 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" | |
Feb 09 02:22:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:03.605818 13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running | |
Feb 09 02:22:02 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:02.664351 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:22:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:02.664207 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031" | |
Feb 09 02:22:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:22:00.641492 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:22:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:22:00.641278 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:21:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:57.664146 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:21:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:57.663950 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:21:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:54.946928 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:21:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:50.655899 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:21:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:50.655755 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:49.667517 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:49.667335 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:49.643001 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:49.641640 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:49.641634 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:49.641628 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:49.641619 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:49.641545 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:49.641361 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:49.641282 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:49.641023 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:49.641016 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:21:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:49.640995 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:21:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:47.671708 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:21:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:47.671582 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031" | |
Feb 09 02:21:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:45.671293 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:21:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:45.670950 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:21:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:44.936809 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:21:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:39.663467 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:21:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:39.663311 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" | |
Feb 09 02:21:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:38.663331 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:21:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:38.663126 13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" | |
Feb 09 02:21:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:38.641319 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:21:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:38.640785 13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" | |
Feb 09 02:21:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:38.640777 13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" | |
Feb 09 02:21:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:38.640761 13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" | |
Feb 09 02:21:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:38.640754 13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" | |
Feb 09 02:21:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:38.640746 13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" | |
Feb 09 02:21:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:38.640727 13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" | |
Feb 09 02:21:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:35.490062 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:21:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:35.489768 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:21:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:35.489761 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:21:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:35.489745 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:21:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:34.928111 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:21:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:34.488326 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:21:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:34.488014 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:21:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:34.488008 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:21:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:34.487994 13938 scope.go:115] "RemoveContainer" containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" | |
Feb 09 02:21:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:34.487808 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:ac65fc0f1573b17e10e845d48e2334a0266d69b2461779795faa290ec937d966} | |
Feb 09 02:21:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:33.547024 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:21:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:33.484281 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/csi-cinder-nodeplugin-tccts" | |
Feb 09 02:21:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:33.484129 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:21:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:33.484116 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="42fcb11fab5288a2ffcaecc9c993a1d4aca07ad2d1b6456219d929064f02071b" | |
Feb 09 02:21:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:33.484107 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:42fcb11fab5288a2ffcaecc9c993a1d4aca07ad2d1b6456219d929064f02071b} | |
Feb 09 02:21:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:33.484088 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad} | |
Feb 09 02:21:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:33.484057 13938 generic.go:296] "Generic (PLEG): container finished" podID=a21982eb-0681-4eb6-822e-4123e7074f2a containerID="b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" exitCode=2 | |
Feb 09 02:21:32 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:32.809019 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:21:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:32.808792 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:21:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:32.808777 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:21:32 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:32.672624 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:21:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:32.672506   13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:21:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:32.640364   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a containerName="node-driver-registrar" containerID="containerd://b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad" gracePeriod=30
Feb 09 02:21:31 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:31.304414   13938 kubelet_node_status.go:563] "Recording event message for node" node="kubejetstream-k8s-node-1" event="NodeReady"
Feb 09 02:21:31 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:31.304404   13938 kubelet_node_status.go:563] "Recording event message for node" node="kubejetstream-k8s-node-1" event="NodeHasSufficientPID"
Feb 09 02:21:31 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:31.304395   13938 kubelet_node_status.go:563] "Recording event message for node" node="kubejetstream-k8s-node-1" event="NodeHasNoDiskPressure"
Feb 09 02:21:31 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:31.304374   13938 kubelet_node_status.go:563] "Recording event message for node" node="kubejetstream-k8s-node-1" event="NodeHasSufficientMemory"
Feb 09 02:21:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:30.640437   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:21:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:30.640229   13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:21:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:30.476058   13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1"
Feb 09 02:21:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:30.475768   13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1"
Feb 09 02:21:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:30.475473   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerStarted Data:b3f187ed289fc25ba52fbeed7208bdd1f0208ed094b314059454ffcf17d5b93c}
Feb 09 02:21:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:29.679433   13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:21:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:29.217946   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"snapshot-controller-7d445c66c9-v9z66.17b20f9c5eb1991f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"1654", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"snapshot-controller-7d445c66c9-v9z66", UID:"4d4fcb97-a65e-43e4-a2c8-ac710c48704f", APIVersion:"v1", ResourceVersion:"1021", FieldPath:"spec.containers{snapshot-controller}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 13, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 21, 6, 641082798, time.Local), Count:52, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/snapshot-controller-7d445c66c9-v9z66.17b20f9c5eb1991f": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:21:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:27.663375   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:21:27 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:27.663182   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:21:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:27.424533   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:21:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:25.663771   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:21:25 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:25.663580   13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:21:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:24.920502   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:21:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:24.664956   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:21:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:24.664484   13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:21:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:24.664478   13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:21:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:24.664474   13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:21:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:24.664467   13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:21:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:24.664461   13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:21:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:24.664443   13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:21:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:23.641821   13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:23.641657   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:23.641465   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:23.641230   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:23.641014   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:23.640804   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:23.640516   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:23.464244   13938 status_manager.go:667] "Failed to get status for pod" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 pod="kube-system/kube-flannel-x85zd" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-flannel-x85zd\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:23.463804   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerStarted Data:deb8691427238de70729bbb26077212b786eafe79ef058280df997c0189a51ec}
Feb 09 02:21:22 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:22.671140   13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c"
Feb 09 02:21:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:21.640977   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:21:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:21.640699   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:21:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:21.640679   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:21:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:21.077314   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:21:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:21.077300   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:21.077119   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:21.076942   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:21.076693   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:21.076412   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:20.660315   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:21:20 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:20.660170   13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:21:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:20.423663   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:21:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:19.216862   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"snapshot-controller-7d445c66c9-v9z66.17b20f9c5eb1991f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"1654", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"snapshot-controller-7d445c66c9-v9z66", UID:"4d4fcb97-a65e-43e4-a2c8-ac710c48704f", APIVersion:"v1", ResourceVersion:"1021", FieldPath:"spec.containers{snapshot-controller}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 13, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 21, 6, 641082798, time.Local), Count:52, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/snapshot-controller-7d445c66c9-v9z66.17b20f9c5eb1991f": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:21:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:16.640602   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:21:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:16.640399   13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:21:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:14.912227   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:21:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:14.663745   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:21:14 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:14.663596   13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:21:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:13.642701   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:13.642263   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:13.642046   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:13.641780   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:13.641445   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:13.641113   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:13.422869   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:21:12 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:12.641074   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:21:12 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:12.640926   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:11.668515   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:11.667920   13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:11.667914   13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:11.667910   13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:11.667904   13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:11.667897   13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:11.667862   13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:11.641176   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:11.640730   13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:11.048830   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:11.048815   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:11.048593   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:11.048352   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:11.047971   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:11 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:11.047549   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:09.663624   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:21:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:09.663437   13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c"
Feb 09 02:21:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:09.215632   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"snapshot-controller-7d445c66c9-v9z66.17b20f9c5eb1991f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"1654", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"snapshot-controller-7d445c66c9-v9z66", UID:"4d4fcb97-a65e-43e4-a2c8-ac710c48704f", APIVersion:"v1", ResourceVersion:"1021", FieldPath:"spec.containers{snapshot-controller}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 13, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 21, 6, 641082798, time.Local), Count:52, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/snapshot-controller-7d445c66c9-v9z66.17b20f9c5eb1991f": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:21:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:06.641589   13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"snapshot-controller-7d445c66c9-v9z66.17b20f9c5eb1991f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"1654", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"snapshot-controller-7d445c66c9-v9z66", UID:"4d4fcb97-a65e-43e4-a2c8-ac710c48704f", APIVersion:"v1", ResourceVersion:"1021", FieldPath:"spec.containers{snapshot-controller}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 6, 13, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 21, 6, 641082798, time.Local), Count:52, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/snapshot-controller-7d445c66c9-v9z66.17b20f9c5eb1991f": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:21:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:06.641131   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:21:06 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:06.640939   13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:21:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:06.438599   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:21:06 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:06.438509   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:06 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:06.438384   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:21:06 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:06.438368   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:21:06 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:06.438115   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:b68c1cb3ad60f15f07d4b86169be05e99418a0e7ee935420cd9b9c60758e39ad}
Feb 09 02:21:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:06.422069   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:21:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:05.722444   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:21:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:05.659975   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:21:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:05.659968   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:21:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:05.659952   13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:21:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:04.905285   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:21:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:04.643836   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:21:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:04.643638   13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:21:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:03.642092   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:03.641902   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:03.641702   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:03.641437   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:03.641209   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:03.640960   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:03.605268   13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running
Feb 09 02:21:01 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:01.640976   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:21:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:01.640728   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:00.778178   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:00.778164   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:00.777993   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:00.777698   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:00.777298   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:00.776777   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:00.584268   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:00.583831   13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:00.583826   13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:00.583820   13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:00.583814   13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:00.583808   13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:00.583790   13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:00.566779   13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:21:00.252822   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:00.252628   13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:21:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:21:00.252377   13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/nodelocaldns-kn4r6"
Feb 09 02:20:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:59.641406   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:20:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:59.641183   13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:20:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:59.420641   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:20:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:58.641204   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:20:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:58.641020   13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c"
Feb 09 02:20:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:58.307333   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:20:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:58.307038   13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:20:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:58.306705   13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/nodelocaldns-kn4r6"
Feb 09 02:20:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:55.431699   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:20:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:55.431508   13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:20:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:54.898550   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:20:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:54.655976   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:20:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:54.655850   13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:20:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:54.412050   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:20:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:54.411973   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:54.411856   13938 scope.go:115] "RemoveContainer" containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955"
Feb 09 02:20:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:54.411690   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerStarted Data:b8c73ad85c95e99229cced82750e052f3a394ee171eb3dc5e0213939869af7f5}
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.642052   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.641884   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.641720   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.641506   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.641298   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:53.641195   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.641002   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.640983   13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:53.473190   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.409571   13938 status_manager.go:667] "Failed to get status for pod" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 pod="kube-system/nodelocaldns-kn4r6" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-kn4r6\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.409150   13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/nodelocaldns-kn4r6"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.409000   13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.408989   13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="74c879a4e2b8c91473112482c08ff2c0f226a0c70900f5047fa93446c0658519"
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.408981   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerDied Data:74c879a4e2b8c91473112482c08ff2c0f226a0c70900f5047fa93446c0658519}
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.408964   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerDied Data:41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955}
Feb 09 02:20:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:53.408934   13938 generic.go:296] "Generic (PLEG): container finished" podID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 containerID="41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" exitCode=0
Feb 09 02:20:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:52.640865   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 containerName="node-cache" containerID="containerd://41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955" gracePeriod=2
Feb 09 02:20:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:52.419461   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:50.640613   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:50.640611   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:50.640446   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:50.640288   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:50.640281   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:50.640265   13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:50.554414   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:50.554398   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:50.554144   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:50.553891   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:50.553574   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:50.553126   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:49.401808   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:20:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:49.401334   13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:20:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:49.401329   13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:20:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:49.401325   13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:20:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:49.401318   13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:20:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:49.401312   13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:20:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:49.401297   13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:20:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:48.420253   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:20:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:48.419795   13938 scope.go:115] "RemoveContainer" containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2"
Feb 09 02:20:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:48.419788   13938 scope.go:115] "RemoveContainer" containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507"
Feb 09 02:20:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:48.419782   13938 scope.go:115] "RemoveContainer" containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06"
Feb 09 02:20:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:48.419776   13938 scope.go:115] "RemoveContainer" containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772"
Feb 09 02:20:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:48.419769   13938 scope.go:115] "RemoveContainer" containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1"
Feb 09 02:20:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:48.419751   13938 scope.go:115] "RemoveContainer" containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97"
Feb 09 02:20:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:48.399762   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:48.399341   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:dc393a39999347962a6f302821aff146e801bb98498b122733ecb0d753f5425d}
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:47.663543   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.663369 13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee" | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:47.529418 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.480309 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04" | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.467210 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b" | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.445277 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901" | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.434878 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84" | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.413426 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f" | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395453 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395248 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395103 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648" | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395091 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e043e53c3806ad1919fb1c9964a9700b9a90724d5cec4dfd8ba955a72da136dd" | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395084 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:e043e53c3806ad1919fb1c9964a9700b9a90724d5cec4dfd8ba955a72da136dd} | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395076 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507} | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395067 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97} | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395058 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772} | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395049 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2} | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395036 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06} | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.395002 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1} | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.394981 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" exitCode=2 | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.394973 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" exitCode=2 | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.394959 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" exitCode=2 | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.394951 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" exitCode=2 | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.394943 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" exitCode=2 | |
Feb 09 02:20:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:47.394924 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" exitCode=2 | |
Feb 09 02:20:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:46.640435 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-provisioner" containerID="containerd://f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1" gracePeriod=30 | |
Feb 09 02:20:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:46.640442 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="liveness-probe" containerID="containerd://40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507" gracePeriod=30 | |
Feb 09 02:20:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:46.640421 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-snapshotter" containerID="containerd://413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772" gracePeriod=30 | |
Feb 09 02:20:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:46.640415 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-resizer" containerID="containerd://c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06" gracePeriod=30 | |
Feb 09 02:20:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:46.640416 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="cinder-csi-plugin" containerID="containerd://b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2" gracePeriod=30 | |
Feb 09 02:20:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:46.640356 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-attacher" containerID="containerd://e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97" gracePeriod=30 | |
Feb 09 02:20:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:45.418460 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:20:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:44.891816 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:20:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:43.663543 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:20:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:43.663278 13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c" | |
Feb 09 02:20:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:43.642240 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:43.641903 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:43.641674 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:43.641433 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:43.641121 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:40.641129 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:20:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:40.640897 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031" | |
Feb 09 02:20:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:40.482831 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:20:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:40.482816 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:40.482520 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:40.482335 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:40.482159 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:40.481608 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:39.664218 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:20:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:39.663912 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:20:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:39.663905 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:20:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:39.663887 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:20:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:38.663680 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:20:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:38.663485 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:20:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:38.417971 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:20:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:36.656052 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:20:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:36.655921 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" | |
Feb 09 02:20:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:36.640523 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:20:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:36.640364 13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee" | |
Feb 09 02:20:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:34.879088 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:20:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:33.641846 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:33.641630 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:33.641410 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:33.641213 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:33.640829 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:31 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:31.671596 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:20:31 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:31.671372 13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c" | |
Feb 09 02:20:31 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:31.417416 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:20:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:30.350626 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:20:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:30.350610 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:30.350469 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:30.350323 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:30.350163 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:30.349860 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:20:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:28.664515 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:20:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:28.664379 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031" | |
Feb 09 02:20:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:25.641386 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:20:25 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:25.641133 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:20:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:24.866492 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:20:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:24.416398 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:23.707528   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:23.707380   13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:23.692196   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:23.691902   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:23.691896   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:23.691880   13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:23.672260   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:23.672100   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:23.641547   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:23.641352   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:23.641074   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:23.640860   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:23.640677   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:20.168370   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:20:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:20.168354   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:20.168214   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:20.168061   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:20.167640   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:20.167150   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:19.663912   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:20:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:19.663736   13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c"
Feb 09 02:20:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:17.415189   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:20:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:14.853638   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:20:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:14.640702   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:20:14 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:14.640565   13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:20:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:13.641827   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:13.641637   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:13.641416   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:13.641100   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:13.640770   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:12 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:12.656088   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:20:12 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:12.655955   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:20:12 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:12.640351   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:20:12 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:12.640206   13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:20:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:10.656505   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:20:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:10.656305   13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:20:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:10.413961   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:20:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:10.135784   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:20:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:10.135768   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:10.135431   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:10.135131   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:10.134996   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:10.134843   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:09 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:09.664689   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:20:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:09.664387   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:20:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:09.664380   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:20:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:09.664362   13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:20:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:08.643445   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:20:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:08.643234   13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c"
Feb 09 02:20:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:04.841132   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:20:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:03.675964   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:20:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:03.675847   13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:20:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:03.641893   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:03.641673   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:03.641429   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:03.641135   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:03.640808   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:20:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:03.605089   13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running
Feb 09 02:20:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:20:03.413261   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:20:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:20:01.567711   13938 prober.go:114] "Probe failed" probeType="Liveness" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="cinder-csi-plugin" probeResult=failure output="HTTP probe failed with statuscode: 500"
Feb 09 02:19:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:59.856392   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:19:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:59.856382   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:59.856179   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:59.855962   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:59.855596   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:59.855288   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:58.663695   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:19:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:58.663544   13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:19:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:57.659711   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:19:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:57.659570   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:19:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:57.640942   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:19:57 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:57.640740   13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:19:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:56.412150   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:19:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:55.667561   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:19:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:55.667361   13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c"
Feb 09 02:19:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:55.641259   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:19:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:55.640890   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:19:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:55.640881   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:19:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:55.640862   13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:19:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:54.829735   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:19:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:53.641921   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:53.641764   13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:53.641593   13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:53.641347   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:53.641024   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:50.668133   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:19:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:50.668011   13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:19:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:49.618578   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:19:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:49.618561   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:49.618352   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:49.618145   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:49.617942   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:49.617667   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:49.410847   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:19:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:44.817749   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:19:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:44.728310   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:19:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:44.728121   13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e"
Feb 09 02:19:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:44.707536   13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/coredns-588bb58b94-8jdjw"
Feb 09 02:19:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:44.679554   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:19:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:44.679406   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:19:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:44.641007   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:19:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:44.640841   13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c"
Feb 09 02:19:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:43.660006   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:19:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:43.659846   13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:19:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:43.641753   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:43.641560   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:43.641310 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:43.640848 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:43.640531 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:42 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:42.410262 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:19:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:40.656458 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:19:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:40.656109 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:19:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:40.656102 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:19:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:40.656084 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:19:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:39.518864 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:19:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:39.518846 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:39.518620 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:39.518316 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:39.517973 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:39 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:39.517582 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:38.281668 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:19:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:38.281469 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:19:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:37.299757 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:19:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:37.299569 13938 scope.go:115] "RemoveContainer" containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" | |
Feb 09 02:19:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:37.280290 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:37.279899 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerStarted Data:3eec07064eadc8f0f502598440e7d2db1a6399bd36c5fc96d4b4eb9d7b11cc21} | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:36.926155 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/coredns-588bb58b94-8jdjw" | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:36.664228 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:36.664068 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031" | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:36.376640 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:36.277362 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:36.277209 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-588bb58b94-8jdjw" | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:36.277051 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971" | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:36.277039 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="62a3657fe21fdfb3a601da20423144192e2d80e8b9b738620613b6d096ef19f9" | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:36.277029 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerDied Data:62a3657fe21fdfb3a601da20423144192e2d80e8b9b738620613b6d096ef19f9} | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:36.277011 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerDied Data:f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e} | |
Feb 09 02:19:36 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:36.276981 13938 generic.go:296] "Generic (PLEG): container finished" podID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 containerID="f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" exitCode=0 | |
Feb 09 02:19:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:35.408966 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:19:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:34.804194 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:19:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:33.679613 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:19:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:33.679370 13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c" | |
Feb 09 02:19:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:33.642832 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:33.642642 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:33.642405 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:33.642136 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:33.641753 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:32 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:32.641026 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:19:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:32.640858 13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" | |
Feb 09 02:19:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:30.266994 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 containerName="coredns" containerID="containerd://f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e" gracePeriod=30 | |
Feb 09 02:19:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:30.266089 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:30.265042 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:40cc14af77b0804c93218c1ff610c5b7d4810dedb01674b70468ed98b8b8a507} | |
Feb 09 02:19:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:30.265028 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:e18b2d54c9ddf98c9297e34ede4c6f576f2ca4bf297e2e281fb7c0e0252dae97} | |
Feb 09 02:19:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:30.265017 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:413c4d73ad350b2d9c0348666361f0eeb8501c47436ee41abc5b086f714ff772} | |
Feb 09 02:19:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:30.265000 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:b0bc23319b77becbb4f7bbed7f83d417ed14d5a09a4ff5b72bbcf982375c87d2} | |
Feb 09 02:19:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:30.264973 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:c7453b578c5dc8c70e6c07792607450bd4d4fa588e20d4c998602572fae81d06} | |
Feb 09 02:19:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:30.264898 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:f1c4c6ca6392daf9e10e9df7372560a05d52d7309ba4672a7da7a904a699b9c1} | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:29.663747 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:29.663742 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:29.663737 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:29.663731 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:29.663725 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:29.663709 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:29.402881 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:29.402870 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:29.402678 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:29.402480 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:29.402255 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:29.401876 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:28.640441 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:19:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:28.640301 13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee" | |
Feb 09 02:19:28 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:28.407737 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:19:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:28.257830 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:28.257640 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:28.257618 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/coredns-588bb58b94-8jdjw" | |
Feb 09 02:19:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:28.257236 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/coredns-588bb58b94-8jdjw" | |
Feb 09 02:19:28 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:28.257023 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerStarted Data:f7867c4bdfdb4b98d565ec97d7eaeaa22d2fe5484365ec500428e97355ef364e} | |
Feb 09 02:19:27 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:27.668178 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971" | |
Feb 09 02:19:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:27.641331 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:19:27 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:27.641021 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:19:27 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:27.641014 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:19:27 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:27.640998 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:19:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:24.790684 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:19:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:24.664041 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:19:24 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:24.663926 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031" | |
Feb 09 02:19:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:23.641956 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:23.641726 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:23.641332 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:19:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:21.663716 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:19:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:21.663550 13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c" | |
Feb 09 02:19:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:21.406657 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:19:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:20.244679 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:19:20 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:20.244550   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:19:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:19.243565   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:19.243282   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:19:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:19.243154   13938 scope.go:115] "RemoveContainer" containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696"
Feb 09 02:19:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:19.243004   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerStarted Data:2db8239b182388750b441a2a92e37329ab5eda17765f5edb4fe8cc7c05bb7f33}
Feb 09 02:19:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:19.077932   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:19:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:19.077923   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:19.077768   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:19.077579   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:19.077353   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:19.077031   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:18.306269   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:19:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:18.240841   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:18.240610   13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-proxy-hq9nf"
Feb 09 02:19:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:18.240438   13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245"
Feb 09 02:19:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:18.240429   13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e977801c98ae5da316db2a32afb91ff40ce62f1c435f01f3f88012d096683a56"
Feb 09 02:19:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:18.240422   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerDied Data:e977801c98ae5da316db2a32afb91ff40ce62f1c435f01f3f88012d096683a56}
Feb 09 02:19:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:18.240406   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerDied Data:cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696}
Feb 09 02:19:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:18.240384   13938 generic.go:296] "Generic (PLEG): container finished" podID=58d10739-95b9-4783-bef2-7f61e6690f70 containerID="cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" exitCode=2
Feb 09 02:19:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:17.642234   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:19:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:17.641795   13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84"
Feb 09 02:19:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:17.641790   13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04"
Feb 09 02:19:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:17.641785   13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f"
Feb 09 02:19:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:17.641779   13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901"
Feb 09 02:19:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:17.641773   13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648"
Feb 09 02:19:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:17.641759   13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b"
Feb 09 02:19:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:17.238137   13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:17.237904   13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 containerName="kube-proxy" containerID="containerd://cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696" gracePeriod=30
Feb 09 02:19:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:17.237755   13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerStarted Data:cab82aea95948d5fc5e7d63e9e3b7343ec0a9a6ba4f30587eb770cd952725696}
Feb 09 02:19:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:16.702816   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:19:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:16.702627   13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971"
Feb 09 02:19:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:16.701092   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:19:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:16.700931   13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:19:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:16.641475   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:19:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:16.641091   13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245"
Feb 09 02:19:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:16.641055   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:19:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:16.641044   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:19:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:16.641009   13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:19:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:14.782335   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:19:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:14.405928   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:19:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:13.660433   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:19:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:13.660306   13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:19:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:13.640949   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:13.640569   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:08.800850   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:19:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:08.800835   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:08.800628   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:08.800411   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:08.800220   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:08.800016   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:07 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:07.641661   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:19:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:07.641313   13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c"
Feb 09 02:19:07 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:07.404698   13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:19:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:05.663896   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:19:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:05.663753   13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:19:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:05.641861   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:19:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:05.641030   13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84"
Feb 09 02:19:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:05.641023   13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04"
Feb 09 02:19:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:05.641017   13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f"
Feb 09 02:19:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:05.641010   13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901"
Feb 09 02:19:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:05.641003   13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648"
Feb 09 02:19:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:05.640983   13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b"
Feb 09 02:19:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:04.775237   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:19:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:04.643700   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:19:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:04.643380   13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0"
Feb 09 02:19:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:04.643373   13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94"
Feb 09 02:19:04 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:04.643355   13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:19:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:03.663699   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:19:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:03.663497   13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971"
Feb 09 02:19:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:03.641724   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:03.641408   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:19:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:03.604624   13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running
Feb 09 02:19:01 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:01.684018   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:19:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:01.683862   13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245"
Feb 09 02:19:01 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:01.656395   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:19:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:19:01.656263   13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:19:01 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:19:01.004100   13938 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:58.791402   13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:18:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:58.791388   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:18:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:58.791203   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:18:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:58.791003   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:18:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:58.790757   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:18:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:58.790474   13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:18:57 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:57.803107   13938 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:56.202424   13938 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:55.401185   13938 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:55.000201   13938 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:54.799185   13938 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:54.798982   13938 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Feb 09 02:18:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:54.798959   13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:54.798715   13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:54.798434   13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:54.798180   13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:54.797798   13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:18:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:54.767341   13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:18:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:53.641396   13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:18:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:53.641029   13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:18:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:52.656428   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374
Feb 09 02:18:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:52.656255   13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c"
Feb 09 02:18:51 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:51.667830   13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:18:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:51.667379   13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84"
Feb 09 02:18:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:51.667374   13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04"
Feb 09 02:18:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:51.667369   13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f"
Feb 09 02:18:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:51.667363   13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901"
Feb 09 02:18:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:51.667357   13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648"
Feb 09 02:18:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:51.667341   13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b"
Feb 09 02:18:51 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:51.543579   13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:18:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:51.543432   13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee"
Feb 09 02:18:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:51.525707   13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1"
Feb 09 02:18:50 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:50.208103 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:18:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:50.207785 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:18:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:50.207779 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:18:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:50.207762 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:18:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:49.671239 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:18:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:49.671050 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971" | |
Feb 09 02:18:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:49.208148 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:18:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:49.207843 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:18:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:49.207837 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:18:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:49.207823 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:48.791109 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:48.790953 13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:48.769955 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:48.736826 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:48.736814 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:48.736645 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:48.736470 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:48.736158 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:48.735853 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:48.338812 13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/csi-cinder-nodeplugin-tccts" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:48.211769 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:48.211448 13938 scope.go:115] "RemoveContainer" containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:48.211441 13938 scope.go:115] "RemoveContainer" containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:48.211423 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:48.188679 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:18:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:48.188255 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:42fcb11fab5288a2ffcaecc9c993a1d4aca07ad2d1b6456219d929064f02071b} | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:47.258634 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:47.211102 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.210959 13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee" | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.196307 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.185393 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.184944 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/csi-cinder-nodeplugin-tccts" | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.184805 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.184796 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="486e18f4fa4991ec5f4e6fc9eac8718cb9661f9f8bb3d8284d7ad1db32536164" | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.184788 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:486e18f4fa4991ec5f4e6fc9eac8718cb9661f9f8bb3d8284d7ad1db32536164} | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.184777 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0} | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.184759 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94} | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.184547 13938 generic.go:296] "Generic (PLEG): container finished" podID=a21982eb-0681-4eb6-822e-4123e7074f2a containerID="913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" exitCode=2 | |
Feb 09 02:18:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:47.184532 13938 generic.go:296] "Generic (PLEG): container finished" podID=a21982eb-0681-4eb6-822e-4123e7074f2a containerID="747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" exitCode=2 | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:46.868727 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:46.868403 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:46.699555 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:46.699439 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031" | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:46.677456 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:46.677276 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245" | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:46.640769 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a containerName="cinder-csi-plugin" containerID="containerd://913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0" gracePeriod=30 | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:46.640755 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a containerName="liveness-probe" containerID="containerd://747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94" gracePeriod=30 | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:46.180027 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:46.180010 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:46.179877 13938 scope.go:115] "RemoveContainer" containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee" | |
Feb 09 02:18:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:46.179726 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerStarted Data:689271bb93c468e6bf5aa086660866065ff56f0fc8a30f612501ee83f67807a5} | |
Feb 09 02:18:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:45.255779 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:18:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:45.176848 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:18:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:45.176615 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" | |
Feb 09 02:18:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:45.176467 13938 scope.go:115] "RemoveContainer" containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd" | |
Feb 09 02:18:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:45.176453 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b59ed63fc2f1df578bcfd906b4a34290a525300531c9ff33cbcd0c40453ef06a" | |
Feb 09 02:18:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:45.176445 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerDied Data:b59ed63fc2f1df578bcfd906b4a34290a525300531c9ff33cbcd0c40453ef06a} | |
Feb 09 02:18:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:45.176426 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerDied Data:439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee} | |
Feb 09 02:18:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:45.176397 13938 generic.go:296] "Generic (PLEG): container finished" podID=fff05cf554f139c875ab310b098fe537 containerID="439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee" exitCode=0 | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:44.760370 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647477 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647458 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647437 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647375 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647349 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647314 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647287 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647264 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647232 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647220 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:44.647181 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF | |
Feb 09 02:18:42 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:42.170673 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:18:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:42.170488 13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c" | |
Feb 09 02:18:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:41.525891 13938 prober.go:114] "Probe failed" probeType="Liveness" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 containerName="nginx-proxy" probeResult=failure output="Get \"http://10.0.74.64:8081/healthz\": dial tcp 10.0.74.64:8081: connect: connection refused" | |
Feb 09 02:18:41 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:41.169314 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-x85zd_kube-system(2b2895a4-d60a-407d-b45d-fb8c7db4e374)\"" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 | |
Feb 09 02:18:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:41.169135 13938 scope.go:115] "RemoveContainer" containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c" | |
Feb 09 02:18:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:41.168985 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerDied Data:45d4ac8c2740b18492afabffa8421e13f156788bd087fddd82e74117bc06f8cc} | |
Feb 09 02:18:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:41.168938 13938 generic.go:296] "Generic (PLEG): container finished" podID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 containerID="45d4ac8c2740b18492afabffa8421e13f156788bd087fddd82e74117bc06f8cc" exitCode=0 | |
Feb 09 02:18:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:40.165074 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerStarted Data:1bb05fbba61614c3bda019a9d7483afa6280122c33bd081afedeb5dd078feb1c} | |
Feb 09 02:18:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:40.165060 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerDied Data:072d86c54864316c8beb3a0ede1d944048703c65b240e3cc24af0756fd4f2718} | |
Feb 09 02:18:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:40.165028 13938 generic.go:296] "Generic (PLEG): container finished" podID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 containerID="072d86c54864316c8beb3a0ede1d944048703c65b240e3cc24af0756fd4f2718" exitCode=0 | |
Feb 09 02:18:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:39.179880 13938 scope.go:115] "RemoveContainer" containerID="ed9927ac835beaf2b3664af5acf4e07b56727186543db1ace6b8c117a6af77b0" | |
Feb 09 02:18:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:39.162726 13938 scope.go:115] "RemoveContainer" containerID="7e001f0b6f22f01ba070f465a0b6954ca5355ebd664b4c65cf69449acc0e88f5" | |
Feb 09 02:18:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:39.162174 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-flannel-x85zd" | |
Feb 09 02:18:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:39.162023 13938 scope.go:115] "RemoveContainer" containerID="c147efa175dc805b27085d296ad1b5fc2b186c09997f71ed617fe932e35933a0" | |
Feb 09 02:18:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:39.162010 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d309a9f0dc7853c7c2ad9fe110b90ec7a9e9cd8fa0afeeb0cca834c76f78bb6d" | |
Feb 09 02:18:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:39.162002 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerDied Data:d309a9f0dc7853c7c2ad9fe110b90ec7a9e9cd8fa0afeeb0cca834c76f78bb6d} | |
Feb 09 02:18:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:39.161984 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-flannel-x85zd" event=&{ID:2b2895a4-d60a-407d-b45d-fb8c7db4e374 Type:ContainerDied Data:9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c} | |
Feb 09 02:18:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:39.161954 13938 generic.go:296] "Generic (PLEG): container finished" podID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 containerID="9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c" exitCode=0 | |
Feb 09 02:18:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:38.781735 13938 prober.go:114] "Probe failed" probeType="Readiness" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 containerName="nginx-proxy" probeResult=failure output="Get \"http://10.0.74.64:8081/healthz\": dial tcp 10.0.74.64:8081: connect: connection refused" | |
Feb 09 02:18:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:38.640978 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/kube-flannel-x85zd" podUID=2b2895a4-d60a-407d-b45d-fb8c7db4e374 containerName="kube-flannel" containerID="containerd://9167b1bc469bd14f0d59df59e64df090ae1bd29cd9f084b249db73e3110c1d9c" gracePeriod=30 | |
Feb 09 02:18:37 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:37.667891 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:18:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:37.667397 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84"
Feb 09 02:18:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:37.667392 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04"
Feb 09 02:18:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:37.667386 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f"
Feb 09 02:18:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:37.667381 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901"
Feb 09 02:18:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:37.667374 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648"
Feb 09 02:18:37 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:37.667355 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b"
Feb 09 02:18:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:35.663681 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:18:35 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:35.663478 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971"
Feb 09 02:18:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:34.750292 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:18:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:34.641301 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:18:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:34.641167 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:18:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:34.641043 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 containerName="nginx-proxy" containerID="containerd://439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee" gracePeriod=30
Feb 09 02:18:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:33.641183 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:18:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:33.641116 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:18:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:33.640937 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245"
Feb 09 02:18:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:33.640810 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:18:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:24.739582 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:18:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:23.671844 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:18:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:23.671392 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84"
Feb 09 02:18:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:23.671386 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04"
Feb 09 02:18:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:23.671381 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f"
Feb 09 02:18:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:23.671375 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901"
Feb 09 02:18:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:23.671369 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648"
Feb 09 02:18:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:23.671352 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b"
Feb 09 02:18:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:21.680390 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:18:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:21.680213 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971"
Feb 09 02:18:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:21.641457 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:18:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:21.641269 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:18:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:19.641339 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:18:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:19.641156 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245"
Feb 09 02:18:19 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:19.124000 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:18:19 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:19.123721 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:18:18 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:18.123016 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:18:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:18.122713 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:18:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:18.122537 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:913c5d0dc1167284077996d7ece862d249c0ea281b64cebff359fb17ea21c1a0}
Feb 09 02:18:18 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:18.122509 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:747cc3d3bec48db4685b4af7511fafc7402b3323f902e0a6ea6742c3482a2a94}
Feb 09 02:18:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:17.802920 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:18:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:17.664045 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af"
Feb 09 02:18:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:17.664039 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e"
Feb 09 02:18:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:17.664024 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:18:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:15.115705 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/nodelocaldns-kn4r6"
Feb 09 02:18:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:15.115354 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/nodelocaldns-kn4r6"
Feb 09 02:18:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:15.114765 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerStarted Data:41f4ab9f6c17d9fcefa6948341c0472e2fdbc08db146145b879b66830a642955}
Feb 09 02:18:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:14.728102 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:18:14 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:14.640240 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321"
Feb 09 02:18:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:10.663548 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:18:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:10.663342 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971"
Feb 09 02:18:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:10.641467 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:18:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:10.641017 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84"
Feb 09 02:18:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:10.641011 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04"
Feb 09 02:18:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:10.641006 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f"
Feb 09 02:18:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:10.641001 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901"
Feb 09 02:18:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:10.640994 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648"
Feb 09 02:18:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:10.640978 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b"
Feb 09 02:18:08 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:08.640792 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:18:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:08.640597 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245"
Feb 09 02:18:07 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:07.644134 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:18:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:07.643991 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:18:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:04.716938 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:18:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:03.603996 13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running
Feb 09 02:18:02 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:02.660102 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:18:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:02.659783 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af"
Feb 09 02:18:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:02.659776 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e"
Feb 09 02:18:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:02.659758 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:18:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:18:00.655550 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:18:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:18:00.655347 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321"
Feb 09 02:17:58 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:58.663648 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:17:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:58.663185 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84"
Feb 09 02:17:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:58.663181 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04"
Feb 09 02:17:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:58.663176 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f"
Feb 09 02:17:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:58.663171 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901"
Feb 09 02:17:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:58.663164 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648"
Feb 09 02:17:58 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:58.663149 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b"
Feb 09 02:17:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:56.671671 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:17:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:56.671470 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971"
Feb 09 02:17:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:54.706867 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:17:53 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:53.641472 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:17:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:53.641295 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245"
Feb 09 02:17:53 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:53.075504 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:17:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:53.075376 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:17:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:52.074588 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:17:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:52.074468 13938 scope.go:115] "RemoveContainer" containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031"
Feb 09 02:17:52 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:52.074324 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" event=&{ID:4d4fcb97-a65e-43e4-a2c8-ac710c48704f Type:ContainerStarted Data:03e016d262da7021303bb0dc85e5eebd407a99a6a24f2158e13886c04253c62b}
Feb 09 02:17:51 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:51.174968 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=snapshot-controller pod=snapshot-controller-7d445c66c9-v9z66_kube-system(4d4fcb97-a65e-43e4-a2c8-ac710c48704f)\"" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f
Feb 09 02:17:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:51.070759 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/snapshot-controller-7d445c66c9-v9z66"
Feb 09 02:17:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:51.070611 13938 scope.go:115] "RemoveContainer" containerID="ebed1c24fa68ebd3008e3dc3051b99794b4045be00ae8c5bd96219d695775e85"
Feb 09 02:17:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:51.070600 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b9fe76ac87ed6a46b49d0ad25e2db57656d6c041e2bdfa07acfae0bf62a62960"
Feb 09 02:17:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:51.070591 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" event=&{ID:4d4fcb97-a65e-43e4-a2c8-ac710c48704f Type:ContainerDied Data:b9fe76ac87ed6a46b49d0ad25e2db57656d6c041e2bdfa07acfae0bf62a62960}
Feb 09 02:17:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:51.070574 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" event=&{ID:4d4fcb97-a65e-43e4-a2c8-ac710c48704f Type:ContainerDied Data:92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031}
Feb 09 02:17:51 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:51.070545 13938 generic.go:296] "Generic (PLEG): container finished" podID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f containerID="92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031" exitCode=2
Feb 09 02:17:50 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:50.640730 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/snapshot-controller-7d445c66c9-v9z66" podUID=4d4fcb97-a65e-43e4-a2c8-ac710c48704f containerName="snapshot-controller" containerID="containerd://92a816f0eb34146ef7b295c835257491a2a08276ad27b00348f837a493b26031" gracePeriod=30
Feb 09 02:17:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:49.659471 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:17:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:49.659256 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321"
Feb 09 02:17:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:49.640498 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:17:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:49.640164 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af"
Feb 09 02:17:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:49.640157 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e"
Feb 09 02:17:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:49.640136 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:17:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:44.696155 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:17:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:43.663516 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:17:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:43.663078 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84" | |
Feb 09 02:17:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:43.663073 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04" | |
Feb 09 02:17:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:43.663068 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f" | |
Feb 09 02:17:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:43.663062 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901" | |
Feb 09 02:17:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:43.663056 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648" | |
Feb 09 02:17:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:43.663039 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b" | |
Feb 09 02:17:41 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:41.671313 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:17:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:41.671130 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971" | |
Feb 09 02:17:41 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:41.641228 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:17:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:41.641028 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245" | |
Feb 09 02:17:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:38.673636 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:17:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:38.671824 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:17:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:38.671816 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:17:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:38.671797 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:17:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:34.688127 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:17:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:34.640802 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:17:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:34.640603 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321" | |
Feb 09 02:17:30 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:30.656681 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:17:30 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:30.656463 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971" | |
Feb 09 02:17:29 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:29.641309 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:17:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:29.640750 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84" | |
Feb 09 02:17:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:29.640744 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04" | |
Feb 09 02:17:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:29.640737 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f" | |
Feb 09 02:17:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:29.640731 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901" | |
Feb 09 02:17:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:29.640723 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648" | |
Feb 09 02:17:29 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:29.640701 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b" | |
Feb 09 02:17:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:27.641021 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:17:27 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:27.640685 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:17:27 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:27.640678 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:17:27 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:27.640660 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:17:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:26.676459 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:17:26 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:26.676274 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245" | |
Feb 09 02:17:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:24.679033 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:17:21 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:21.640479 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:17:21 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:21.640282 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321" | |
Feb 09 02:17:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:17.667745 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:17:17 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:17.667547 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971" | |
Feb 09 02:17:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:16.944790 13938 kubelet_node_status.go:563] "Recording event message for node" node="kubejetstream-k8s-node-1" event="NodeReady" | |
Feb 09 02:17:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:16.944780 13938 kubelet_node_status.go:563] "Recording event message for node" node="kubejetstream-k8s-node-1" event="NodeHasSufficientPID" | |
Feb 09 02:17:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:16.944773 13938 kubelet_node_status.go:563] "Recording event message for node" node="kubejetstream-k8s-node-1" event="NodeHasNoDiskPressure" | |
Feb 09 02:17:16 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:16.944748 13938 kubelet_node_status.go:563] "Recording event message for node" node="kubejetstream-k8s-node-1" event="NodeHasSufficientMemory" | |
Feb 09 02:17:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:15.642176 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 | |
Feb 09 02:17:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:15.641924 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:17:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:15.641766 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245" | |
Feb 09 02:17:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:15.641719 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84" | |
Feb 09 02:17:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:15.641714 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04" | |
Feb 09 02:17:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:15.641709 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f" | |
Feb 09 02:17:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:15.641703 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901" | |
Feb 09 02:17:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:15.641696 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648" | |
Feb 09 02:17:15 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:15.641679 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b" | |
Feb 09 02:17:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:14.668953 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:17:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:13.667969 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:17:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:13.667613 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:17:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:13.667607 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:17:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:13.667587 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:17:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:10.640762 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:17:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:10.640440 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321" | |
Feb 09 02:17:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:09.006729 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" | |
Feb 09 02:17:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:09.006393 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" | |
Feb 09 02:17:09 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:09.006198 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerStarted Data:439ab2bdfd37d11a1244a59fb4d5db14216810fc2f4d29964c17c7edb5e811ee} | |
Feb 09 02:17:08 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:08.640964 13938 scope.go:115] "RemoveContainer" containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd" | |
Feb 09 02:17:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:06.880037 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:17:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:06.880021 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:17:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:06.879830 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:17:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:06.879635 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:17:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:06.879438 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:17:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:06.879164 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:17:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:06.010673 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:17:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:05.915642 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"2332", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nginx-proxy-kubejetstream-k8s-node-1", UID:"fff05cf554f139c875ab310b098fe537", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{nginx-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 7, 31, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 16, 13, 675511568, time.Local), Count:22, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:17:05 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:05.643401 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 | |
Feb 09 02:17:05 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:05.643203 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971" | |
Feb 09 02:17:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:04.659544 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:17:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:03.642311 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:17:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:03.642132 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:17:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:03.641958 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:17:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:03.641619 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:17:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:03.641239 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:17:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:03.603382 13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running | |
Feb 09 02:17:02 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:02.641231 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:17:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:02.640862 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:17:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:02.640854 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:17:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:02.640839 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:17:02 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:02.640808 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:17:02 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:02.640649 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245"
Feb 09 02:17:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:17:00.567060 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:17:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:00.566480 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84"
Feb 09 02:17:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:00.566470 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04"
Feb 09 02:17:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:00.566461 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f"
Feb 09 02:17:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:00.566450 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901"
Feb 09 02:17:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:00.566438 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648"
Feb 09 02:17:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:00.566418 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b"
Feb 09 02:17:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:17:00.566000 13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v"
Feb 09 02:16:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:59.672330 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:16:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:59.671871 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84"
Feb 09 02:16:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:59.671865 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04"
Feb 09 02:16:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:59.671860 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f"
Feb 09 02:16:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:59.671855 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901"
Feb 09 02:16:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:59.671849 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648"
Feb 09 02:16:59 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:59.671830 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b"
Feb 09 02:16:59 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:59.009418 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:16:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:56.640532 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:16:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:56.640331 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321"
Feb 09 02:16:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:56.598915 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:16:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:56.598899 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:56.598761 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:56.598616 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:56.598459 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:56.598243 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:55.914811 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"2332", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nginx-proxy-kubejetstream-k8s-node-1", UID:"fff05cf554f139c875ab310b098fe537", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{nginx-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 7, 31, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 16, 13, 675511568, time.Local), Count:22, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
Feb 09 02:16:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:54.708000 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:16:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:54.707804 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971"
Feb 09 02:16:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:54.707614 13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/coredns-588bb58b94-8jdjw"
Feb 09 02:16:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:54.651262 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:16:53 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:53.671281 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:16:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:53.671119 13938 scope.go:115] "RemoveContainer" containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd"
Feb 09 02:16:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:53.642472 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:53.642235 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:53.642041 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:53.641893 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:53.641733 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:52 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:52.008006 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:16:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:49.641074 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:16:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:49.640797 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70
Feb 09 02:16:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:49.640701 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af"
Feb 09 02:16:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:49.640694 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e"
Feb 09 02:16:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:49.640680 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572"
Feb 09 02:16:49 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:49.640541 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245"
Feb 09 02:16:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:48.996520 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:16:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:48.996060 13938 scope.go:115] "RemoveContainer" containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84"
Feb 09 02:16:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:48.996054 13938 scope.go:115] "RemoveContainer" containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04"
Feb 09 02:16:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:48.996047 13938 scope.go:115] "RemoveContainer" containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f"
Feb 09 02:16:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:48.996041 13938 scope.go:115] "RemoveContainer" containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901"
Feb 09 02:16:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:48.996034 13938 scope.go:115] "RemoveContainer" containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648"
Feb 09 02:16:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:48.996016 13938 scope.go:115] "RemoveContainer" containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b"
Feb 09 02:16:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:48.971696 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:48.971237 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerStarted Data:e043e53c3806ad1919fb1c9964a9700b9a90724d5cec4dfd8ba955a72da136dd}
Feb 09 02:16:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:48.073461 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-attacher pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-provisioner pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-snapshotter pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-resizer pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-controllerplugin-648ffdc6db-88b2v_kube-system(6519b71f-b245-4469-9e13-f3f9b40a0ab0)\"]" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0
Feb 09 02:16:47 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:47.995212 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:16:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:47.995025 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971"
Feb 09 02:16:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:47.967761 13938 status_manager.go:667] "Failed to get status for pod" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-controllerplugin-648ffdc6db-88b2v\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:47.967612 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v"
Feb 09 02:16:47 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:47.006389 13938 scope.go:115] "RemoveContainer" containerID="321ae21e96b48ad5e5122ed998ddb8bcd19eef1bef61c51ff3f12da79aaf9776"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.998425 13938 scope.go:115] "RemoveContainer" containerID="ca2aa5da37d936a30baea0d25293b39565e70bf0cf8be1b3b78b1a855b1ec31e"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.987135 13938 scope.go:115] "RemoveContainer" containerID="e0218ab699869117a4768750f0181f7b763743011ae692d0d48a065834d8ed6c"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.979066 13938 scope.go:115] "RemoveContainer" containerID="80eadadbf94082ffcfa836e2710fb450f41b9eee5363fcfe6986565f9d8b7938"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.969709 13938 scope.go:115] "RemoveContainer" containerID="6e33a9cb7f772a75b826de52d49b2b40a47b0e6b2ccfdd69b2cacad7d05228b0"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.961748 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:46.961718 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.961495 13938 scope.go:115] "RemoveContainer" containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.961311 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerStarted Data:62a3657fe21fdfb3a601da20423144192e2d80e8b9b738620613b6d096ef19f9}
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959800 13938 scope.go:115] "RemoveContainer" containerID="d16484306f99bfbd6f1227aaddc02c800ae8c9a2bb80f90fd36d206d8ad1863d"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959788 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0c133c1e02929236a00b4f2e360c2571b9f98ff4fe62fc32af9a442e3b5448df"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959780 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:0c133c1e02929236a00b4f2e360c2571b9f98ff4fe62fc32af9a442e3b5448df}
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959769 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84}
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959760 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f}
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959751 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648}
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959742 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04}
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959731 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901}
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959700 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" event=&{ID:6519b71f-b245-4469-9e13-f3f9b40a0ab0 Type:ContainerDied Data:b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b}
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959676 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84" exitCode=2
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959669 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f" exitCode=2
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959661 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648" exitCode=2
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959653 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04" exitCode=2
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959645 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901" exitCode=2
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.959626 13938 generic.go:296] "Generic (PLEG): container finished" podID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerID="b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b" exitCode=2
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.925768 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/coredns-588bb58b94-8jdjw"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.640630 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="liveness-probe" containerID="containerd://abf614a610a74a381bd01f2a6ff4b90962b8ed19db7df39618aa6cf6863dba04" gracePeriod=30
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.640618 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-provisioner" containerID="containerd://eedb3560456bb3a739fe8269fd152986e03f1a19581b17f16a6fdd65e4dc3648" gracePeriod=30
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.640595 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="cinder-csi-plugin" containerID="containerd://dc0e508047921a469f8d13408eebf48a19427b5bc8a863be1594511c4bc32d84" gracePeriod=30
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.640563 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-snapshotter" containerID="containerd://785e579c3e565272a45944302db3da3ee1ce07fa60c820afc5a44e740c46c901" gracePeriod=30
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.640554 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-resizer" containerID="containerd://8b6969ceedc078a32727e147dd0d2f25bbc149205cde1e8e99b5715f068b0e9f" gracePeriod=30
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:46.640500 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="csi-attacher" containerID="containerd://b78d4415c054bd5f0fb5fea01b398aabb7bd09909446bfd224525f948650bc6b" gracePeriod=30
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:46.442268 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:46.442253 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:46.442066 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:46.441878 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:46.441673 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:46.441276 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:46.060360 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-588bb58b94-8jdjw_kube-system(4ccf085a-788e-4d6a-a9c8-e6b074b57323)\"" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323
Feb 09 02:16:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:45.953985 13938 status_manager.go:667] "Failed to get status for pod" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 pod="kube-system/coredns-588bb58b94-8jdjw" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/coredns-588bb58b94-8jdjw\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:16:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:45.953705 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-588bb58b94-8jdjw"
Feb 09 02:16:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:45.953506 13938 scope.go:115] "RemoveContainer" containerID="5a817d6b4b8a56cd7b5b98e28045a95700e99233d6bd7db34a1e42a8298ef034" | |
Feb 09 02:16:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:45.953496 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="bfea7b7dff5223fe85da5feac7128faa83192ab4e2a82f356ec6018d2b56cc4f" | |
Feb 09 02:16:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:45.953488 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerDied Data:bfea7b7dff5223fe85da5feac7128faa83192ab4e2a82f356ec6018d2b56cc4f} | |
Feb 09 02:16:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:45.953471 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-588bb58b94-8jdjw" event=&{ID:4ccf085a-788e-4d6a-a9c8-e6b074b57323 Type:ContainerDied Data:36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971} | |
Feb 09 02:16:45 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:45.953443 13938 generic.go:296] "Generic (PLEG): container finished" podID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 containerID="36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971" exitCode=0 | |
Feb 09 02:16:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:45.913886 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"2332", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nginx-proxy-kubejetstream-k8s-node-1", UID:"fff05cf554f139c875ab310b098fe537", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{nginx-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 7, 31, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 16, 13, 675511568, time.Local), Count:22, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:16:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:45.007288 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:16:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:44.641746 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:16:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:44.640435 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:16:44 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:44.640240 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321" | |
Feb 09 02:16:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:43.641054 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:43.640858 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:43.640617 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:41 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:41.679607 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:16:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:41.679455 13938 scope.go:115] "RemoveContainer" containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd" | |
Feb 09 02:16:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:40.640204 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/coredns-588bb58b94-8jdjw" podUID=4ccf085a-788e-4d6a-a9c8-e6b074b57323 containerName="coredns" containerID="containerd://36e864c7dc7d19750335d9106279e15b38f9ebc94126aa1e76ba2974aeda3971" gracePeriod=30 | |
Feb 09 02:16:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:38.641312 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:16:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:38.640962 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:16:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:38.640949 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:16:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:38.640915 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:16:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:38.006079 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:16:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:36.312800 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:16:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:36.312785 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:36.312607 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:36.312429 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:36.312247 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:36 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:36.311909 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:35 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:35.913119 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"2332", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nginx-proxy-kubejetstream-k8s-node-1", UID:"fff05cf554f139c875ab310b098fe537", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{nginx-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 7, 31, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 16, 13, 675511568, time.Local), Count:22, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:16:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:34.932536 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:16:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:34.932384 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245" | |
Feb 09 02:16:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:34.627258 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:16:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:33.931456 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:33.931441 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:16:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:33.931299 13938 scope.go:115] "RemoveContainer" containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245" | |
Feb 09 02:16:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:33.931143 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerStarted Data:e977801c98ae5da316db2a32afb91ff40ce62f1c435f01f3f88012d096683a56} | |
Feb 09 02:16:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:33.641299 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:33.640995 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:33.640615 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:33.013637 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hq9nf_kube-system(58d10739-95b9-4783-bef2-7f61e6690f70)\"" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 | |
Feb 09 02:16:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:32.929023 13938 status_manager.go:667] "Failed to get status for pod" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 pod="kube-system/kube-proxy-hq9nf" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hq9nf\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:32.928771 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-proxy-hq9nf" | |
Feb 09 02:16:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:32.928631 13938 scope.go:115] "RemoveContainer" containerID="34715933eebda63513d5c4b7350ad3a4dfe92001abbaa005b20cda283d95f843" | |
Feb 09 02:16:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:32.928621 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f76a3fe718a0cd8d1fa06ff61713794cd1f4b5a48e871ba3f3969f6c6f3124df" | |
Feb 09 02:16:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:32.928613 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerDied Data:f76a3fe718a0cd8d1fa06ff61713794cd1f4b5a48e871ba3f3969f6c6f3124df} | |
Feb 09 02:16:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:32.928596 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-proxy-hq9nf" event=&{ID:58d10739-95b9-4783-bef2-7f61e6690f70 Type:ContainerDied Data:0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245} | |
Feb 09 02:16:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:32.928570 13938 generic.go:296] "Generic (PLEG): container finished" podID=58d10739-95b9-4783-bef2-7f61e6690f70 containerID="0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245" exitCode=2 | |
Feb 09 02:16:32 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:32.641198 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:16:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:32.640966 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321" | |
Feb 09 02:16:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:32.640795 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/kube-proxy-hq9nf" podUID=58d10739-95b9-4783-bef2-7f61e6690f70 containerName="kube-proxy" containerID="containerd://0f6b0b4cf55ee1b6c921101e45700343265e200922721fcd1a34910d5a655245" gracePeriod=30 | |
Feb 09 02:16:31 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:31.005061 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:16:27 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:27.671499 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:16:27 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:27.671356 13938 scope.go:115] "RemoveContainer" containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd" | |
Feb 09 02:16:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:26.302103 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:16:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:26.302083 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:26.301898 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:26.301685 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:26.301427 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:26 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:26.301187 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:25 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:25.912304 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"2332", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nginx-proxy-kubejetstream-k8s-node-1", UID:"fff05cf554f139c875ab310b098fe537", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{nginx-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 7, 31, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 16, 13, 675511568, time.Local), Count:22, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:16:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:24.613272 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:16:24 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:24.003703 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:16:23 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:23.672081 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:16:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:23.671757 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:16:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:23.671751 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:16:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:23.671734 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:16:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:23.641142 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:23 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:23.640872 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:20 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:20.640539 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:16:20 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:20.640332 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321" | |
Feb 09 02:16:17 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:17.003254 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:16:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:16.276163 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:16:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:16.276145 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:16.275979 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:16.275743 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:16.275540 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:16 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:16.275337 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:15 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:15.910987 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"2332", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nginx-proxy-kubejetstream-k8s-node-1", UID:"fff05cf554f139c875ab310b098fe537", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{nginx-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 7, 31, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 16, 13, 675511568, time.Local), Count:22, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:16:14 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:14.598332 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:16:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:13.675857 13938 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"2332", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nginx-proxy-kubejetstream-k8s-node-1", UID:"fff05cf554f139c875ab310b098fe537", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{nginx-proxy}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"kubejetstream-k8s-node-1"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 7, 31, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 16, 13, 675511568, time.Local), Count:22, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://localhost:6443/api/v1/namespaces/kube-system/events/nginx-proxy-kubejetstream-k8s-node-1.17b20faec059a6d4": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping) | |
Feb 09 02:16:13 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:13.675540 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:16:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:13.675380 13938 scope.go:115] "RemoveContainer" containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd" | |
Feb 09 02:16:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:13.641600 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:13 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:13.641307 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:10.656214 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:16:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:10.655903 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:16:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:10.655897 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:16:10 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:10.655879 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:16:10 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:10.002254 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:16:07 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:07.643469 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:16:07 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:07.643259 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321" | |
Feb 09 02:16:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:06.133487 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:16:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:06.133472 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:06.133290 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:06.133132 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:06.132960 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:06 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:06.132710 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:04 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:04.583230 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:16:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:03.640891 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:03.640634 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:16:03 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:03.603105 13938 kubelet_getters.go:182] "Pod status updated" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" status=Running | |
Feb 09 02:16:03 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:03.001337 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:16:01 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:01.567968 13938 prober.go:114] "Probe failed" probeType="Liveness" pod="kube-system/csi-cinder-controllerplugin-648ffdc6db-88b2v" podUID=6519b71f-b245-4469-9e13-f3f9b40a0ab0 containerName="cinder-csi-plugin" probeResult=failure output="HTTP probe failed with statuscode: 500" | |
Feb 09 02:16:00 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:16:00.667377 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:16:00 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:16:00.667221 13938 scope.go:115] "RemoveContainer" containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd" | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:56.870306 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:56.870018 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:56.870012 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:56.869997 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:56.016333 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:56.016302 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:56.016103 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:56.015902 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:56.015661 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:56.015442 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:56 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:55.999973 13938 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:15:55 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:55.868390 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:15:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:55.868265 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:55.868093 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:15:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:55.868087 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:15:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:55.868073 13938 scope.go:115] "RemoveContainer" containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" | |
Feb 09 02:15:55 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:55.867909 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:486e18f4fa4991ec5f4e6fc9eac8718cb9661f9f8bb3d8284d7ad1db32536164} | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:54.926699 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:54.865231 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:54.865049 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/csi-cinder-nodeplugin-tccts" | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:54.864904 13938 scope.go:115] "RemoveContainer" containerID="5388a4fef90cbe11f164e0aff2838359699c9b25f0960bc5335d82c7b940e6f4" | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:54.864895 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="726eac492e6dd84de4ebf402ff24cb28be355dbc662efee74691db0a1dccbaca" | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:54.864887 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:726eac492e6dd84de4ebf402ff24cb28be355dbc662efee74691db0a1dccbaca} | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:54.864873 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerDied Data:dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572} | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:54.864847 13938 generic.go:296] "Generic (PLEG): container finished" podID=a21982eb-0681-4eb6-822e-4123e7074f2a containerID="dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" exitCode=2 | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:54.799640 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"liveness-probe\" with CreateContainerError: \"failed to get sandbox container task: no running task found: task 726eac492e6dd84de4ebf402ff24cb28be355dbc662efee74691db0a1dccbaca not found: not found\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CreateContainerError: \"failed to get sandbox container task: no running task found: task 726eac492e6dd84de4ebf402ff24cb28be355dbc662efee74691db0a1dccbaca not found: not found\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:54.799599 13938 kuberuntime_manager.go:862] container &Container{Name:cinder-csi-plugin,Image:docker.io/k8scloudprovider/cinder-csi-plugin:v1.22.0,Command:[],Args:[/bin/cinder-csi-plugin --endpoint=$(CSI_ENDPOINT) --cloud-config=$(CLOUD_CONFIG)],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:healthz,HostPort:9808,ContainerPort:9808,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CSI_ENDPOINT,Value:unix://csi/csi.sock,ValueFrom:nil,},EnvVar{Name:CLOUD_CONFIG,Value:/etc/config/cloud.conf,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,},VolumeMount{Name:pods-probe-dir,ReadOnly:false,MountPath:/dev,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,},VolumeMount{Name:secret-cinderplugin,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jnqbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a): CreateContainerError: failed to get sandbox container task: no running task found: task 726eac492e6dd84de4ebf402ff24cb28be355dbc662efee74691db0a1dccbaca not found: not found | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:54.799515 13938 remote_runtime.go:442] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = NotFound desc = failed to get sandbox container task: no running task found: task 726eac492e6dd84de4ebf402ff24cb28be355dbc662efee74691db0a1dccbaca not found: not found" podSandboxID="726eac492e6dd84de4ebf402ff24cb28be355dbc662efee74691db0a1dccbaca" | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:54.798187 13938 kuberuntime_manager.go:862] container &Container{Name:liveness-probe,Image:registry.k8s.io/sig-storage/livenessprobe:v2.5.0,Command:[],Args:[--csi-address=/csi/csi.sock],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jnqbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a): CreateContainerError: failed to get sandbox container task: no running task found: task 726eac492e6dd84de4ebf402ff24cb28be355dbc662efee74691db0a1dccbaca not found: not found | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:54.798111 13938 remote_runtime.go:442] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = NotFound desc = failed to get sandbox container task: no running task found: task 726eac492e6dd84de4ebf402ff24cb28be355dbc662efee74691db0a1dccbaca not found: not found" podSandboxID="726eac492e6dd84de4ebf402ff24cb28be355dbc662efee74691db0a1dccbaca" | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:54.796647 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af" | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:54.796631 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e" | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:54.640601 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a containerName="node-driver-registrar" containerID="containerd://dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572" gracePeriod=30 | |
Feb 09 02:15:54 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:54.568632 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service" | |
Feb 09 02:15:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:53.641046 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:53 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:53.641024 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 | |
Feb 09 02:15:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:53.640811 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:53 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:53.640779 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321" | |
Feb 09 02:15:49 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:49.598814 13938 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:15:48 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:48.770541 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 | |
Feb 09 02:15:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:48.770392 13938 scope.go:115] "RemoveContainer" containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd" | |
Feb 09 02:15:48 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:48.770195 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" | |
Feb 09 02:15:46 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:46.397959 13938 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused | |
Feb 09 02:15:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:45.671408 13938 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" | |
Feb 09 02:15:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:45.671393 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:45.671236 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" | |
Feb 09 02:15:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:45.671051 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:15:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:45.670841 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:15:45 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:45.670564 13938 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubejetstream-k8s-node-1\": Get \"https://localhost:6443/api/v1/nodes/kubejetstream-k8s-node-1?resourceVersion=0&timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:15:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:44.797430 13938 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:15:44 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:44.554151 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:43.996516 13938 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:43.641408 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:43.641212 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:43.596131 13938 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:43.394838 13938 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:43.394639 13938 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:43.394615 13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:43.394404 13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:43.394193 13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:43.393971 13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:15:43 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:43.393641 13938 controller.go:187] failed to update lease, error: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubejetstream-k8s-node-1?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
Feb 09 02:15:42 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:42.841716 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:15:42 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:42.841495 13938 scope.go:115] "RemoveContainer" containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd"
Feb 09 02:15:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:41.839787 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:15:41 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:41.839764 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:15:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:41.839627 13938 scope.go:115] "RemoveContainer" containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd"
Feb 09 02:15:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:41.839462 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerStarted Data:b59ed63fc2f1df578bcfd906b4a34290a525300531c9ff33cbcd0c40453ef06a}
Feb 09 02:15:41 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:41.525700 13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:40.927339 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nginx-proxy pod=nginx-proxy-kubejetstream-k8s-node-1_kube-system(fff05cf554f139c875ab310b098fe537)\"" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.859036 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.837338 13938 status_manager.go:667] "Failed to get status for pod" podUID=fff05cf554f139c875ab310b098fe537 pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-kubejetstream-k8s-node-1\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.837070 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="74a50e3ad364b0c4de0269c38d4c9fdf1153ac60ea1c80983deff71d318c5e30"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.837051 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerDied Data:74a50e3ad364b0c4de0269c38d4c9fdf1153ac60ea1c80983deff71d318c5e30}
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:40.836511 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.836417 13938 status_manager.go:667] "Failed to get status for pod" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a pod="kube-system/csi-cinder-nodeplugin-tccts" err="Get \"https://localhost:6443/api/v1/namespaces/kube-system/pods/csi-cinder-nodeplugin-tccts\": dial tcp 127.0.0.1:6443: connect: connection refused"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.836272 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.836262 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.835992 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/csi-cinder-nodeplugin-tccts" event=&{ID:a21982eb-0681-4eb6-822e-4123e7074f2a Type:ContainerStarted Data:dacac4ada10564efab199f61aec8396e105793753288fbf4002b7019f5099572}
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:40.728641 13938 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"liveness-probe\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=liveness-probe pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\", failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-csi-plugin pod=csi-cinder-nodeplugin-tccts_kube-system(a21982eb-0681-4eb6-822e-4123e7074f2a)\"]" pod="kube-system/csi-cinder-nodeplugin-tccts" podUID=a21982eb-0681-4eb6-822e-4123e7074f2a
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.668102 13938 scope.go:115] "RemoveContainer" containerID="3a1c6a499ea925c3860d1772a9b9e6bc3b873025eccb2bacf27570e51ed711af"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.668095 13938 scope.go:115] "RemoveContainer" containerID="1a4e71ecd255bba57142b3f53a714b0ac4fbafc195c8eae33dda042e2287561e"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.668078 13938 scope.go:115] "RemoveContainer" containerID="5388a4fef90cbe11f164e0aff2838359699c9b25f0960bc5335d82c7b940e6f4"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:40.252934 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.252714 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321"
Feb 09 02:15:40 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:40.252483 13938 kubelet.go:2206] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/nodelocaldns-kn4r6"
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.832716 13938 scope.go:115] "RemoveContainer" containerID="cc4e76a866ff267cbb6e2c9d88d3248dbcfa9e808d9f9ac1bc010d65d12b11f0"
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.832692 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" event=&{ID:fff05cf554f139c875ab310b098fe537 Type:ContainerDied Data:3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd}
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.832661 13938 generic.go:296] "Generic (PLEG): container finished" podID=fff05cf554f139c875ab310b098fe537 containerID="3af651121dd4a8220cfacfe41f390d37e8bd1011c0f3cb8855c2455efbde08dd" exitCode=0
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647532 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647497 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647257 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647412 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647391 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647349 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647269 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647254 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647214 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647159 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:39 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:39.647135 13938 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
Feb 09 02:15:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:38.770423 13938 prober.go:114] "Probe failed" probeType="Readiness" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 containerName="nginx-proxy" probeResult=failure output="Get \"http://10.0.74.64:8081/healthz\": dial tcp 10.0.74.64:8081: connect: connection refused"
Feb 09 02:15:38 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:38.306900 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:15:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:38.306692 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321"
Feb 09 02:15:38 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:38.306457 13938 kubelet.go:2206] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/nodelocaldns-kn4r6"
Feb 09 02:15:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:34.823474 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:15:34 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:34.823276 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321"
Feb 09 02:15:34 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:34.538059 13938 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kube.slice/containerd.service\": failed to get container info for \"/kube.slice/containerd.service\": unknown container \"/kube.slice/containerd.service\"" containerName="/kube.slice/containerd.service"
Feb 09 02:15:33 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:33.822164 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:15:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:33.821972 13938 scope.go:115] "RemoveContainer" containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321"
Feb 09 02:15:33 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:33.821804 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerStarted Data:74c879a4e2b8c91473112482c08ff2c0f226a0c70900f5047fa93446c0658519}
Feb 09 02:15:32 kubejetstream-k8s-node-1 kubelet[13938]: E0209 02:15:32.878885 13938 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-cache\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=node-cache pod=nodelocaldns-kn4r6_kube-system(c62fa1bd-8aeb-471c-8b6e-3e6847025b23)\"" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23
Feb 09 02:15:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:32.818770 13938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/nodelocaldns-kn4r6"
Feb 09 02:15:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:32.818620 13938 scope.go:115] "RemoveContainer" containerID="5dbdc709579e8fa2a156ffed1cc3ee1e11632926daf2fd4a79447e0c5dd473b4"
Feb 09 02:15:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:32.818610 13938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="02416e96fed5ba23f1d346ed0976c6fc069de2a089b453af0c3f7f3bf285a35d"
Feb 09 02:15:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:32.818602 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerDied Data:02416e96fed5ba23f1d346ed0976c6fc069de2a089b453af0c3f7f3bf285a35d}
Feb 09 02:15:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:32.818588 13938 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/nodelocaldns-kn4r6" event=&{ID:c62fa1bd-8aeb-471c-8b6e-3e6847025b23 Type:ContainerDied Data:4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321}
Feb 09 02:15:32 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:32.818563 13938 generic.go:296] "Generic (PLEG): container finished" podID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 containerID="4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321" exitCode=0
Feb 09 02:15:31 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:31.641987 13938 kuberuntime_container.go:702] "Killing container with a grace period" pod="kube-system/nodelocaldns-kn4r6" podUID=c62fa1bd-8aeb-471c-8b6e-3e6847025b23 containerName="node-cache" containerID="containerd://4f1917759e2e76adc89bfe9c4c74ad6d4cbcc4ceafbef871a6fef03fd4025321" gracePeriod=2
Feb 09 02:15:31 kubejetstream-k8s-node-1 kubelet[13938]: I0209 02:15:31.525302 13938 prober.go:114] "Probe failed" probeType="Liveness" pod="kube-system/nginx-proxy-kubejetstream-k8s-node-1" podUID=fff05cf554f139c875ab310b098fe537 containerName="nginx-proxy"