@yoplait
Created September 11, 2023 16:29

[vagrant@ip-192-168-17-150 log]$ sudo journalctl -u kubelet
-- Logs begin at Mon 2023-09-11 14:21:25 UTC, end at Mon 2023-09-11 14:38:58 UTC. --
Sep 11 14:23:07 ip-192-168-17-150.ec2.internal systemd[1]: Starting Kubelet...
Sep 11 14:23:07 ip-192-168-17-150.ec2.internal systemd[1]: Started Kubelet.
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.796116 1081 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: W0911 14:23:11.797411 1081 feature_gate.go:241] Setting GA feature gate KubeletCredentialProviders=true. It will be removed in a future release.
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: W0911 14:23:11.797553 1081 feature_gate.go:241] Setting GA feature gate KubeletCredentialProviders=true. It will be removed in a future release.
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.904007 1081 server.go:415] "Kubelet version" kubeletVersion="v1.27.4"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.904042 1081 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: W0911 14:23:11.904078 1081 feature_gate.go:241] Setting GA feature gate KubeletCredentialProviders=true. It will be removed in a future release.
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: W0911 14:23:11.904139 1081 feature_gate.go:241] Setting GA feature gate KubeletCredentialProviders=true. It will be removed in a future release.
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.905284 1081 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.908869 1081 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.908944 1081 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgro>
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.908963 1081 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.908978 1081 container_manager_linux.go:302] "Creating device plugin manager"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.909155 1081 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.917931 1081 kubelet.go:405] "Attempting to sync node with API server"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.917953 1081 kubelet.go:309] "Adding apiserver pod source"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.917985 1081 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.919005 1081 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.9" apiVersion="v1"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: W0911 14:23:11.920734 1081 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.921242 1081 server.go:1168] "Started kubelet"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.922489 1081 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.923733 1081 server.go:461] "Adding debug handlers to kubelet server"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.924882 1081 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.925723 1081 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: E0911 14:23:11.926475 1081 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/co>
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: E0911 14:23:11.926506 1081 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.929304 1081 volume_manager.go:284] "Starting Kubelet Volume Manager"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:11.929410 1081 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Sep 11 14:23:11 ip-192-168-17-150.ec2.internal kubelet[1081]: E0911 14:23:11.929582 1081 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-17-150.ec2.internal\" not found"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.022039 1081 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.022070 1081 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.022092 1081 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.024924 1081 policy_none.go:49] "None policy: Start"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.025446 1081 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.025472 1081 state_mem.go:35] "Initializing new in-memory state store"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.030767 1081 kubelet_node_status.go:70] "Attempting to register node" node="ip-192-168-17-150.ec2.internal"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.122785 1081 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.123301 1081 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: E0911 14:23:12.124238 1081 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-192-168-17-150.ec2.internal\" not foun>
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.769251 1081 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.773417 1081 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.773442 1081 status_manager.go:207] "Starting to sync pod status with apiserver"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:12.773466 1081 kubelet.go:2257] "Starting kubelet main sync loop"
Sep 11 14:23:12 ip-192-168-17-150.ec2.internal kubelet[1081]: E0911 14:23:12.773569 1081 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.156876 1081 kubelet_node_status.go:73] "Successfully registered node" node="ip-192-168-17-150.ec2.internal"
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.919915 1081 apiserver.go:52] "Watching apiserver"
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.922907 1081 topology_manager.go:212] "Topology Admit Handler"
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.922995 1081 topology_manager.go:212] "Topology Admit Handler"
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.929919 1081 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.941628 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-dir\" (UniqueName: \"kubernetes.io/host-path/07c33>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.941677 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varlog\" (UniqueName: \"kubernetes.io/host-path/6b74ad>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.941722 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.941758 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b74ad>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.941791 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9qzr\" (UniqueName: \"kubernetes.io/p>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.941823 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.941858 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.941885 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/07c33>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.941918 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/configmap/6b>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.941979 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.942010 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw8dg\" (UniqueName: \"kubernetes.io/p>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.942040 1081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/>
Sep 11 14:23:14 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:14.942055 1081 reconciler.go:41] "Reconciler: start to sync state"
Sep 11 14:23:21 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:21.805689 1081 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-l67j4" podStartSLOduration=4.329325293 podCreationTimes>
Sep 11 14:23:25 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:25.820567 1081 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/aws-node-nklld" podStartSLOduration=3.69968095 podCreationTimestam>
Sep 11 14:23:26 ip-192-168-17-150.ec2.internal kubelet[1081]: I0911 14:23:26.730739 1081 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
[vagrant@ip-192-168-17-150 log]$
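
Aside: the kubelet unit starts cleanly here; the E-level entries above ("Failed to get the info of the filesystem", "node ... not found", "PLEG is not healthy") are transient start-up noise that clears once the node registers at 14:23:14 and goes Ready at 14:23:26. To isolate just the error-and-worse entries, journalctl's standard unit/priority filters should work on this box:

    sudo journalctl -u kubelet -p err --no-pager
    sudo journalctl -u kubelet --since "2023-09-11 14:23:00" --until "2023-09-11 14:24:00"
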
[vagrant@ip-192-168-17-150 log]$ sudo journalctl -u containerd
-- Logs begin at Mon 2023-09-11 14:21:25 UTC, end at Mon 2023-09-11 14:39:17 UTC. --
Sep 11 14:22:54 ip-192-168-17-150.ec2.internal systemd[1]: Starting containerd container runtime...
Sep 11 14:23:00 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:00Z" level=warning msg="containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://>
Sep 11 14:23:00 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:00.936595534Z" level=info msg="starting containerd" revision=1c90a442489720eec95342e1789ee8a5e1b9536f version=1.6.9
Sep 11 14:23:00 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:00.948700804Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 11 14:23:00 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:00.953609117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 11 14:23:00 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:00.958293344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status>
Sep 11 14:23:00 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:00.958326204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 11 14:23:00 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:00.958345052Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 11 14:23:00 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:00.958359876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 11 14:23:00 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:00.958482755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.019179544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.019420287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs>
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.019445983Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.020696848Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.020716804Z" level=info msg="metadata content store policy set" policy=shared
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.089827361Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.089876263Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.089893259Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091049266Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091077277Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091105692Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091128492Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091158409Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091178389Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091215040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091234197Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091252024Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091551567Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.091684688Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.093153597Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.093206096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.093225419Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094119174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094153284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094172416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094191007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094208966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094228110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094246558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094264881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094283889Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094438124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094458331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094475775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094492574Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094514632Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.co>
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094534463Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094566227Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094606748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094783498Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined>
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094800094Z" level=warning msg="`mirrors` is deprecated, please use `config_path` instead"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094877941Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime>
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.094958162Z" level=info msg="Connect containerd service"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.095001320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.096725453Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load fa>
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.098160573Z" level=info msg="Start subscribing containerd event"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.098220679Z" level=info msg="Start recovering state"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.098305484Z" level=info msg="Start event monitor"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.098332632Z" level=info msg="Start snapshots syncer"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.098346971Z" level=info msg="Start cni network conf syncer for default"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.098358901Z" level=info msg="Start streaming server"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.098511915Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.098614593Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:01.099110747Z" level=info msg="containerd successfully booted in 0.303451s"
Sep 11 14:23:01 ip-192-168-17-150.ec2.internal systemd[1]: Started containerd container runtime.
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal systemd[1]: Stopping containerd container runtime...
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[687]: time="2023-09-11T14:23:06.590022409Z" level=info msg="Stop CRI service"
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal systemd[1]: containerd.service: Succeeded.
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal systemd[1]: Stopped containerd container runtime.
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal systemd[1]: Starting containerd container runtime...
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[1008]: time="2023-09-11T14:23:06Z" level=warning msg="containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https:/>
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[1008]: time="2023-09-11T14:23:06.640048344Z" level=info msg="starting containerd" revision=1c90a442489720eec95342e1789ee8a5e1b9536f version=1.6.9
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[1008]: time="2023-09-11T14:23:06.650298109Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[1008]: time="2023-09-11T14:23:06.650341046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[1008]: time="2023-09-11T14:23:06.653651725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit statu>
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[1008]: time="2023-09-11T14:23:06.653680054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[1008]: time="2023-09-11T14:23:06.653698367Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[1008]: time="2023-09-11T14:23:06.653714088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[1008]: time="2023-09-11T14:23:06.653746759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 11 14:23:06 ip-192-168-17-150.ec2.internal containerd[1008]: time="2023-09-11T14:23:06.654004250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
[vagrant@ip-192-168-17-150 log]$
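
Aside: the containerd "failed to load cni during init" error is expected on first boot; no CNI config exists until the aws-node (VPC CNI) pod writes one, and the kubelet log above shows the node going Ready right after aws-node starts. Assuming the default CNI paths, the state can be checked with:

    ls /etc/cni/net.d/      # CNI config files written by the CNI plugin pod
    ls /opt/cni/bin/        # CNI plugin binaries
    sudo crictl info | grep -i cni
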
yoplait commented Sep 13, 2023

Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information f>
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.388202 1083 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also >
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: W0913 21:09:25.389593 1083 feature_gate.go:241] Setting GA feature gate KubeletCredentialProviders=true. It will be removed in a future release.
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: W0913 21:09:25.389757 1083 feature_gate.go:241] Setting GA feature gate KubeletCredentialProviders=true. It will be removed in a future release.
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.478243 1083 server.go:415] "Kubelet version" kubeletVersion="v1.27.4"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.478475 1083 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: W0913 21:09:25.478525 1083 feature_gate.go:241] Setting GA feature gate KubeletCredentialProviders=true. It will be removed in a future release.
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: W0913 21:09:25.478592 1083 feature_gate.go:241] Setting GA feature gate KubeletCredentialProviders=true. It will be removed in a future release.
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.488273 1083 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.488366 1083 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: >
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.488940 1083 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="c>
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.488970 1083 container_manager_linux.go:302] "Creating device plugin manager"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.489187 1083 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.493317 1083 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.513307 1083 kubelet.go:405] "Attempting to sync node with API server"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.513340 1083 kubelet.go:309] "Adding apiserver pod source"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.513372 1083 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.515847 1083 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.22" apiVersion="v1"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: W0913 21:09:25.644988 1083 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreat>
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.646549 1083 server.go:1168] "Started kubelet"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.646740 1083 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.646929 1083 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.647486 1083 server.go:461] "Adding debug handlers to kubelet server"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:25.649592 1083 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory >
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:25.649625 1083 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid cap>
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.650168 1083 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.651731 1083 volume_manager.go:284] "Starting Kubelet Volume Manager"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.651839 1083 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.754715 1083 kubelet_node_status.go:70] "Attempting to register node" node="ip-192-168-2-152.ec2.internal"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.759211 1083 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.759232 1083 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.759255 1083 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.769906 1083 policy_none.go:49] "None policy: Start"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.770602 1083 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.770639 1083 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.923216 1083 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not fou>
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:25.923824 1083 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 21:09:25 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:25.924801 1083 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-192-168->
Sep 13 21:09:26 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:26.175731 1083 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Sep 13 21:09:26 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:26.180957 1083 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Sep 13 21:09:26 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:26.181002 1083 status_manager.go:207] "Starting to sync pod status with apiserver"
Sep 13 21:09:26 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:26.181031 1083 kubelet.go:2257] "Starting kubelet main sync loop"
Sep 13 21:09:26 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:26.181152 1083 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Sep 13 21:09:27 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:27.801839 1083 kubelet_node_status.go:73] "Successfully registered node" node="ip-192-168-2-152.ec2.internal"
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.516985 1083 apiserver.go:52] "Watching apiserver"
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.520462 1083 topology_manager.go:212] "Topology Admit Handler"
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.520582 1083 topology_manager.go:212] "Topology Admit Handler"
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.553063 1083 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563555 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varlog\" (UniqueName: \">
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563602 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueNa>
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563629 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName>
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563654 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \">
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563700 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pstht\" >
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563734 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueNam>
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563771 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: >
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563804 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueNa>
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563838 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmnnc\" >
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563867 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueNam>
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563896 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueNam>
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563925 1083 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-dir\" (UniqueName: >
Sep 13 21:09:28 ip-192-168-2-152.ec2.internal kubelet[1083]: I0913 21:09:28.563938 1083 reconciler.go:41] "Reconciler: start to sync state"
Sep 13 21:09:32 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:32.114096 1083 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-2-152.ec2.internal\" not >
Sep 13 21:09:32 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:32.215319 1083 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-2-152.ec2.internal\" not >
Sep 13 21:09:32 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:32.316135 1083 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-2-152.ec2.internal\" not >
Sep 13 21:09:32 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:32.416574 1083 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-2-152.ec2.internal\" not >
Sep 13 21:09:32 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:32.517202 1083 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-2-152.ec2.internal\" not >
Sep 13 21:09:32 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:32.617728 1083 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-2-152.ec2.internal\" not >
Sep 13 21:09:32 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:32.718118 1083 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-2-152.ec2.internal\" not >
Sep 13 21:09:32 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:32.818668 1083 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-2-152.ec2.internal\" not >
Sep 13 21:09:32 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:32.919207 1083 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-2-152.ec2.internal\" not >
Sep 13 21:09:33 ip-192-168-2-152.ec2.internal kubelet[1083]: E0913 21:09:33.020192 1083 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-192-168-2-152.ec2.internal\" not >
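
For reference: "Error getting the current node from lister" (kubelet_node_status.go:458) means the kubelet's informer cache has no Node object for this hostname. Seeing it after the "Successfully registered node" line at 21:09:27 suggests the Node object was removed again after registration (a node-name mismatch with the cloud provider is one common cause), rather than a slow initial registration. Assuming working kubectl access to the cluster, a quick check would be:

    kubectl get nodes -o wide
    kubectl get events --field-selector involvedObject.name=ip-192-168-2-152.ec2.internal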
