@soapergem
Last active April 23, 2022 14:45
Additional K8S logs
Name:                 coredns-64897985d-gfrpw
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 k8s-master-1/192.168.1.194
Start Time:           Sat, 23 Apr 2022 09:36:23 -0500
Labels:               k8s-app=kube-dns
                      pod-template-hash=64897985d
Annotations:          <none>
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-64897985d
Containers:
  coredns:
    Container ID:
    Image:          k8s.gcr.io/coredns/coredns:v1.8.6
    Image ID:
    Ports:          53/UDP, 53/TCP, 9153/TCP
    Host Ports:     0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9stj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-m9stj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly op=Exists
                 node-role.kubernetes.io/control-plane:NoSchedule
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               2m1s  default-scheduler  Successfully assigned kube-system/coredns-64897985d-gfrpw to k8s-master-1
  Warning  FailedCreatePodSandBox  2m    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "982f93e6754575a5be6311da2d4a6680896acb815cbdb2db2623b53826720f51": failed to find plugin "loopback" in path [/usr/lib/cni]
  Warning  FailedCreatePodSandBox  106s  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "cf255068b33fdc6bb1643c02c88ad415ae8b15fa4bb7e904cc6dcc57a70c0d25": failed to find plugin "loopback" in path [/usr/lib/cni]
  Warning  FailedCreatePodSandBox  94s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "177a3844a82497293323308b7ced3674d521d3e1787ce30b86bfae3ff1772540": failed to find plugin "loopback" in path [/usr/lib/cni]
  Warning  FailedCreatePodSandBox  81s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d67c6a65837d95cefed16f01432e58e60abe667ea16fb217bcef53ad508a8c2d": failed to find plugin "loopback" in path [/usr/lib/cni]
  Warning  FailedCreatePodSandBox  69s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "80627ccc301a1852e81726f680e9dcf5054b3a2c9629f966ce1d6948e9c782b3": failed to find plugin "loopback" in path [/usr/lib/cni]
  Warning  FailedCreatePodSandBox  55s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "2592efed6af9e7c13a5df393b0da9a9e69f0ce56696b256ca666ab2cfc839f7a": failed to find plugin "loopback" in path [/usr/lib/cni]
  Warning  FailedCreatePodSandBox  42s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "10a1d4102f510c10577350bb9f9c0e2f75c32ea1ba565f485da4fe2e460bd5a2": failed to find plugin "loopback" in path [/usr/lib/cni]
  Warning  FailedCreatePodSandBox  28s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3c47210f74acfb621097ea96956103015712af14d37236a4e9fb60bb20d36b13": failed to find plugin "loopback" in path [/usr/lib/cni]
  Warning  FailedCreatePodSandBox  15s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a9a28913b8a6fce0f7d70cddf43b3e3ec2ee685416efeb5568b82b9410a64618": failed to find plugin "loopback" in path [/usr/lib/cni]
  Warning  FailedCreatePodSandBox  0s    kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a1ad6c8a0a3a0e20c85540029a4d0124727ba6f8f2bf47cdfa4f30f30d54c724": failed to find plugin "loopback" in path [/usr/lib/cni]
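Every FailedCreatePodSandBox event above has the same root cause: the runtime searched /usr/lib/cni for the "loopback" CNI plugin and found nothing. The reference CNI plugins are conventionally installed under /opt/cni/bin, so one plausible workaround is to make the expected path point at the real one. This is a hedged sketch, not from the gist — the helper name and both paths are assumptions to adjust for the actual host:

```shell
#!/bin/sh
# Hedged sketch: link the directory where the CNI plugins actually live
# to the directory the runtime is searching (per the errors above).
# link_cni_dir, and the example paths, are assumptions.
link_cni_dir() {
    wanted="$1"   # directory the runtime searches, e.g. /usr/lib/cni
    actual="$2"   # directory holding the plugins, e.g. /opt/cni/bin
    # Only act if the real loopback plugin exists and the wanted path is free.
    if [ -x "$actual/loopback" ] && [ ! -e "$wanted" ]; then
        ln -s "$actual" "$wanted"
    fi
}

# On a real host this would need root:
# link_cni_dir /usr/lib/cni /opt/cni/bin
```

Alternatively, the runtime's CNI search path can be changed instead of symlinked; the non-standard [/usr/lib/cni] in the error suggests the containerd CRI plugin's cni bin_dir was configured away from the default.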
Name:                 kube-controller-manager-k8s-master-1
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 k8s-master-1/192.168.1.194
Start Time:           Sat, 23 Apr 2022 09:35:30 -0500
Labels:               component=kube-controller-manager
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: 35cc09b25a36008378983acafe55cd47
                      kubernetes.io/config.mirror: 35cc09b25a36008378983acafe55cd47
                      kubernetes.io/config.seen: 2022-04-23T09:35:30.086660134-05:00
                      kubernetes.io/config.source: file
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   192.168.1.194
IPs:
  IP:           192.168.1.194
Controlled By:  Node/k8s-master-1
Containers:
  kube-controller-manager:
    Container ID:  containerd://667663891a0e7cf59f33220d9150a663e805bfa8163b8ff34fda902966d439f3
    Image:         k8s.gcr.io/kube-controller-manager:v1.23.6
    Image ID:      k8s.gcr.io/kube-controller-manager@sha256:df94796b78d2285ffe6b231c2b39d25034dde8814de2f75d953a827e77fe6adf
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --cluster-cidr=10.244.0.0/16
      --cluster-name=kubernetes
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --service-cluster-ip-range=10.96.0.0/12
      --use-service-account-credentials=true
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Sat, 23 Apr 2022 09:35:34 -0500
      Finished:     Sat, 23 Apr 2022 09:38:32 -0500
    Ready:          False
    Restart Count:  27
    Requests:
      cpu:  200m
    Liveness:     http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoExecute op=Exists
Events:
  Type     Reason          Age                  From     Message
  ----     ------          ----                 ----     -------
  Normal   Pulled          3m21s                kubelet  Container image "k8s.gcr.io/kube-controller-manager:v1.23.6" already present on machine
  Normal   Created         3m20s                kubelet  Created container kube-controller-manager
  Normal   Started         3m19s                kubelet  Started container kube-controller-manager
  Normal   Killing         21s (x2 over 3m23s)  kubelet  Stopping container kube-controller-manager
  Normal   SandboxChanged  20s (x2 over 3m21s)  kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         11s (x5 over 20s)    kubelet  Back-off restarting failed container
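With kube-controller-manager in CrashLoopBackOff (exit code 2, 27 restarts), the natural next step is to read the logs of the previous, crashed container. A hedged sketch follows; the helper name is an assumption, and it falls back from kubectl to crictl because the API server can be unreliable while the control plane is flapping:

```shell
#!/bin/sh
# Hedged sketch: fetch logs from the previous (crashed) container of a
# kube-system pod. Prefers kubectl; falls back to querying the container
# runtime directly with crictl when kubectl is unavailable.
prev_logs() {
    pod="$1"
    if command -v kubectl >/dev/null 2>&1; then
        kubectl -n kube-system logs "$pod" --previous
    elif command -v crictl >/dev/null 2>&1; then
        # crictl matches containers by name; take the most recent one.
        cid=$(crictl ps -a --name "$pod" -q | head -n 1)
        [ -n "$cid" ] && crictl logs "$cid"
    else
        echo "neither kubectl nor crictl found" >&2
        return 1
    fi
}

# Example:
# prev_logs kube-controller-manager-k8s-master-1
```

Note also that the crashing image is v1.23.6 while the node below reports kubelet v1.23.4; a small skew like this is normally within policy, but it is worth confirming the static pod manifests match the intended versions.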
I0423 02:18:19.050420 1 node.go:163] Successfully retrieved node IP: 192.168.1.194
I0423 02:18:19.050676 1 server_others.go:138] "Detected node IP" address="192.168.1.194"
I0423 02:18:19.050799 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0423 02:18:19.250379 1 server_others.go:206] "Using iptables Proxier"
I0423 02:18:19.250452 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0423 02:18:19.250483 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0423 02:18:19.250524 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0423 02:18:19.254781 1 server.go:656] "Version info" version="v1.23.6"
W0423 02:18:19.256226 1 sysinfo.go:203] Nodes topology is not available, providing CPU topology
I0423 02:18:19.261562 1 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=131072
I0423 02:18:19.263920 1 config.go:226] "Starting endpoint slice config controller"
I0423 02:18:19.263969 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0423 02:18:19.265072 1 config.go:317] "Starting service config controller"
I0423 02:18:19.265146 1 shared_informer.go:240] Waiting for caches to sync for service config
I0423 02:18:19.364489 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0423 02:18:19.365700 1 shared_informer.go:247] Caches are synced for service config
Name:               k8s-master-1
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=k8s-master-1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"02:78:30:35:39:62"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.1.194
                    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 22 Apr 2022 16:23:53 -0500
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-master-1
  AcquireTime:     <unset>
  RenewTime:       Fri, 22 Apr 2022 16:24:46 -0500
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  MemoryPressure  False   Fri, 22 Apr 2022 16:24:15 -0500   Fri, 22 Apr 2022 16:23:53 -0500   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Fri, 22 Apr 2022 16:24:15 -0500   Fri, 22 Apr 2022 16:23:53 -0500   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Fri, 22 Apr 2022 16:24:15 -0500   Fri, 22 Apr 2022 16:23:53 -0500   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           True    Fri, 22 Apr 2022 16:24:15 -0500   Fri, 22 Apr 2022 16:24:15 -0500   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.1.194
  Hostname:    k8s-master-1
Capacity:
  cpu:                4
  ephemeral-storage:  30388284Ki
  memory:             3886096Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  28005842489
  memory:             3783696Ki
  pods:               110
System Info:
  Machine ID:                 0d9149bb47034e0da1fa33aed72b2f07
  System UUID:                0d9149bb47034e0da1fa33aed72b2f07
  Boot ID:                    d77d8d2e-355b-4a13-99fc-0e0ff0c753f9
  Kernel Version:             5.10.92-v8+
  OS Image:                   Debian GNU/Linux 11 (bullseye)
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.4.12
  Kubelet Version:            v1.23.4
  Kube-Proxy Version:         v1.23.4
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (5 in total)
  Namespace    Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                                  ------------  ----------  ---------------  -------------  ---
  kube-system  etcd-k8s-master-1                     100m (2%)     0 (0%)      100Mi (2%)       0 (0%)         37s
  kube-system  kube-apiserver-k8s-master-1           250m (6%)     0 (0%)      0 (0%)           0 (0%)         61s
  kube-system  kube-controller-manager-k8s-master-1  200m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
  kube-system  kube-flannel-ds-nqlcd                 100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      7s
  kube-system  kube-scheduler-k8s-master-1           100m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (18%)  100m (2%)
  memory             150Mi (4%)  50Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                From     Message
  ----     ------                   ----               ----     -------
  Normal   NodeHasSufficientMemory  91s (x8 over 92s)  kubelet  Node k8s-master-1 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    91s (x7 over 92s)  kubelet  Node k8s-master-1 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     91s (x7 over 92s)  kubelet  Node k8s-master-1 status is now: NodeHasSufficientPID
  Normal   Starting                 55s                kubelet  Starting kubelet.
  Warning  InvalidDiskCapacity      55s                kubelet  invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  54s (x8 over 55s)  kubelet  Node k8s-master-1 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    54s (x7 over 55s)  kubelet  Node k8s-master-1 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     54s (x7 over 55s)  kubelet  Node k8s-master-1 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  54s                kubelet  Updated Node Allocatable limit across pods