@withinboredom
Created February 4, 2022 05:34
Feb 04 06:30:10 capital systemd[1]: Starting Lightweight Kubernetes...
Feb 04 06:30:10 capital sh[3007]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Feb 04 06:30:10 capital sh[3008]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Starting k3s v1.22.6+k3s1 (3228d9cb)"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Managed etcd cluster not yet initialized"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Reconciling bootstrap data between datastore and disk"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Running kube-apiserver --advertise-address=192.168.100.3 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m0s --port=0 --profiling=false"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="To join node to cluster: k3s agent -s https://65.108.75.198:6443 -t ${NODE_TOKEN}"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Run: k3s kubectl"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="certificate CN=capital signed by CN=k3s-server-ca@1641734570: notBefore=2022-01-09 13:22:50 +0000 UTC notAfter=2023-02-04 05:30:10 +0000 UTC"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="certificate CN=system:node:capital,O=system:nodes signed by CN=k3s-client-ca@1641734570: notBefore=2022-01-09 13:22:50 +0000 UTC notAfter=2023-02-04 05:30:10 +0000 UTC"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Module overlay was already loaded"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Module nf_conntrack was already loaded"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Module br_netfilter was already loaded"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Module iptable_nat was already loaded"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Using private registry config file at /etc/rancher/k3s/registries.yaml"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Feb 04 06:30:10 capital k3s[3011]: time="2022-02-04T06:30:10+01:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Feb 04 06:30:11 capital k3s[3011]: time="2022-02-04T06:30:11+01:00" level=info msg="Containerd is now running"
Feb 04 06:30:11 capital k3s[3011]: time="2022-02-04T06:30:11+01:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Feb 04 06:30:11 capital k3s[3011]: time="2022-02-04T06:30:11+01:00" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
Feb 04 06:30:11 capital k3s[3011]: time="2022-02-04T06:30:11+01:00" level=error msg="Remotedialer proxy error" error="websocket: bad handshake"
Feb 04 06:30:11 capital k3s[3011]: time="2022-02-04T06:30:11+01:00" level=info msg="Adding member capital-83c092df=https://192.168.100.3:2380 to etcd cluster [delightful-dc9f4862=https://192.168.100.2:2380]"
Feb 04 06:30:11 capital k3s[3011]: time="2022-02-04T06:30:11+01:00" level=info msg="Starting etcd for cluster [delightful-dc9f4862=https://192.168.100.2:2380 capital-83c092df=https://192.168.100.3:2380]"
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.607+0100","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.100.3:2380"]}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.607+0100","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, client-cert=, client-key=, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.607+0100","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.100.3:2379"]}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.607+0100","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.1","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.16.10","go-os":"linux","go-arch":"amd64","max-cpu-set":12,"max-cpu-available":12,"member-initialized":false,"name":"capital-83c092df","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["https://192.168.100.3:2380"],"advertise-client-urls":["https://192.168.100.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.100.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"capital-83c092df=https://192.168.100.3:2380,delightful-dc9f4862=https://192.168.100.2:2380","initial-cluster-state":"existing","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.624+0100","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"17.001347ms"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.641+0100","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"ee4a2ddc00ac1817","cluster-id":"5d2267386b95c0fc"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.641+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ee4a2ddc00ac1817 switched to configuration voters=()"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.641+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ee4a2ddc00ac1817 became follower at term 0"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.641+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ee4a2ddc00ac1817 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"warn","ts":"2022-02-04T06:30:11.647+0100","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.650+0100","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.653+0100","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.656+0100","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.656+0100","caller":"rafthttp/transport.go:286","msg":"added new remote peer","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825","remote-peer-urls":["https://192.168.100.2:2380"]}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.656+0100","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.656+0100","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.656+0100","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.657+0100","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.659+0100","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.659+0100","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.659+0100","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825","remote-peer-urls":["https://192.168.100.2:2380"]}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.660+0100","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"ee4a2ddc00ac1817","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.660+0100","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.660+0100","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.663+0100","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/server-client.key, client-cert=, client-key=, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.663+0100","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.100.3:2380"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.663+0100","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.100.3:2380"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.663+0100","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"ee4a2ddc00ac1817","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["https://192.168.100.3:2380"],"advertise-client-urls":["https://192.168.100.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.100.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.663+0100","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.664+0100","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.664+0100","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.665+0100","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.710+0100","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ee4a2ddc00ac1817","to":"689da5af1f9c0825","stream-type":"stream Message"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.710+0100","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.711+0100","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ee4a2ddc00ac1817","to":"689da5af1f9c0825","stream-type":"stream MsgApp v2"}
Feb 04 06:30:11 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:11.711+0100","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ee4a2ddc00ac1817","remote-peer-id":"689da5af1f9c0825"}
Feb 04 06:30:12 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:12.007+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ee4a2ddc00ac1817 [term: 0] received a MsgHeartbeat message with higher term from 689da5af1f9c0825 [term: 44]"}
Feb 04 06:30:12 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:12.007+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ee4a2ddc00ac1817 became follower at term 44"}
Feb 04 06:30:12 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:12.007+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ee4a2ddc00ac1817 elected leader 689da5af1f9c0825 at term 44"}
Feb 04 06:30:12 capital k3s[3011]: {"level":"info","ts":"2022-02-04T06:30:12.033+0100","caller":"rafthttp/http.go:257","msg":"receiving database snapshot","local-member-id":"ee4a2ddc00ac1817","remote-snapshot-sender-id":"689da5af1f9c0825","incoming-snapshot-index":11326588,"incoming-snapshot-message-size-bytes":119028,"incoming-snapshot-message-size":"119 kB"}
Feb 04 06:30:16 capital k3s[3011]: time="2022-02-04T06:30:16+01:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Feb 04 06:30:16 capital k3s[3011]: time="2022-02-04T06:30:16+01:00" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
Feb 04 06:30:16 capital k3s[3011]: time="2022-02-04T06:30:16+01:00" level=error msg="Remotedialer proxy error" error="websocket: bad handshake"
Feb 04 06:30:17 capital k3s[3011]: {"level":"warn","ts":"2022-02-04T06:30:17.036+0100","caller":"rafthttp/http.go:271","msg":"failed to save incoming database snapshot","local-member-id":"ee4a2ddc00ac1817","remote-snapshot-sender-id":"689da5af1f9c0825","incoming-snapshot-index":11326588,"error":"read tcp 192.168.100.3:2380->192.168.100.2:5198: i/o timeout"}
Feb 04 06:30:20 capital k3s[3011]: {"level":"warn","ts":"2022-02-04T06:30:20.535+0100","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00121e8c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
Feb 04 06:30:20 capital k3s[3011]: time="2022-02-04T06:30:20+01:00" level=info msg="Failed to test data store connection: context deadline exceeded"
Feb 04 06:30:21 capital k3s[3011]: time="2022-02-04T06:30:21+01:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Feb 04 06:30:21 capital k3s[3011]: time="2022-02-04T06:30:21+01:00" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
Feb 04 06:30:21 capital k3s[3011]: time="2022-02-04T06:30:21+01:00" level=error msg="Remotedialer proxy error" error="websocket: bad handshake"
Feb 04 06:30:26 capital k3s[3011]: time="2022-02-04T06:30:26+01:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Feb 04 06:30:26 capital k3s[3011]: time="2022-02-04T06:30:26+01:00" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
Feb 04 06:30:26 capital k3s[3011]: time="2022-02-04T06:30:26+01:00" level=error msg="Remotedialer proxy error" error="websocket: bad handshake"
Feb 04 06:30:26 capital k3s[3011]: {"level":"warn","ts":"2022-02-04T06:30:26.660+0100","caller":"etcdserver/server.go:2050","msg":"failed to publish local member to cluster through raft","local-member-id":"ee4a2ddc00ac1817","local-member-attributes":"{Name:capital-83c092df ClientURLs:[https://192.168.100.3:2379]}","request-path":"/0/members/ee4a2ddc00ac1817/attributes","publish-timeout":"15s","error":"etcdserver: request timed out, possibly due to connection lost"}
Feb 04 06:30:31 capital k3s[3011]: time="2022-02-04T06:30:31+01:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Feb 04 06:30:31 capital k3s[3011]: time="2022-02-04T06:30:31+01:00" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
Feb 04 06:30:31 capital k3s[3011]: time="2022-02-04T06:30:31+01:00" level=error msg="Remotedialer proxy error" error="websocket: bad handshake"
Feb 04 06:30:35 capital k3s[3011]: {"level":"warn","ts":"2022-02-04T06:30:35.536+0100","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00121e8c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
Feb 04 06:30:35 capital k3s[3011]: time="2022-02-04T06:30:35+01:00" level=info msg="Failed to test data store connection: context deadline exceeded"
Feb 04 06:30:35 capital k3s[3011]: {"level":"warn","ts":"2022-02-04T06:30:35.959+0100","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00121e8c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
Feb 04 06:30:35 capital k3s[3011]: time="2022-02-04T06:30:35+01:00" level=error msg="Tunnel context canceled while waiting for connection"
Feb 04 06:30:35 capital systemd[1]: k3s.service: Deactivated successfully.
Feb 04 06:30:35 capital systemd[1]: Stopped Lightweight Kubernetes.
Feb 04 06:30:35 capital systemd[1]: k3s.service: Consumed 26.613s CPU time.

Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 1485 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 1657 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 1720 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 1871 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 2014 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 2127 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 3571 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 6233 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 6262 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 194823 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 195011 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 195285 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 195467 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 195551 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 195759 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 196482 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 197143 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 197603 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 197760 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 197908 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 198214 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 198653 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 199441 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 199501 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 200668 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 200920 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 201104 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 201501 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 203439 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 206727 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 224118 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: Starting Lightweight Kubernetes...
Feb 04 06:29:24 delightful sh[859529]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Feb 04 06:29:24 delightful sh[859530]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 1485 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 1657 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 1720 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 1871 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 2014 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 2127 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 3571 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 6233 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 6262 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 194823 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 195011 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 195285 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 195467 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 195551 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 195759 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 196482 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 197143 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 197603 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 197760 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 197908 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 198214 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 198653 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 199441 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 199501 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 200668 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 200920 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 201104 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 201501 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 203439 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 206727 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful systemd[1]: k3s.service: Found left-over process 224118 (containerd-shim) in control group while starting unit. Ignoring.
Feb 04 06:29:24 delightful systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 04 06:29:24 delightful k3s[859533]: time="2022-02-04T06:29:24+01:00" level=info msg="Starting k3s v1.22.6+k3s1 (3228d9cb)"
Feb 04 06:29:24 delightful k3s[859533]: time="2022-02-04T06:29:24+01:00" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
Feb 04 06:29:24 delightful k3s[859533]: time="2022-02-04T06:29:24+01:00" level=info msg="Starting local etcd to reconcile with datastore"
Feb 04 06:29:24 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:24.809+0100","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]}
Feb 04 06:29:24 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:24.809+0100","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["http://127.0.0.1:2399"]}
Feb 04 06:29:24 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:24.809+0100","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.1","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.16.10","go-os":"linux","go-arch":"amd64","max-cpu-set":12,"max-cpu-available":12,"member-initialized":true,"name":"default","data-dir":"/var/lib/rancher/k3s/server/db/tmp-etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/tmp-etcd/member","force-new-cluster":true,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"],"listen-client-urls":["http://127.0.0.1:2399"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
Feb 04 06:29:24 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:24.818+0100","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/tmp-etcd/member/snap/db","took":"8.963019ms"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.202+0100","caller":"etcdserver/server.go:508","msg":"recovered v2 store from snapshot","snapshot-index":11300115,"snapshot-size":"117 kB"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.202+0100","caller":"etcdserver/server.go:518","msg":"recovered v3 backend from snapshot","backend-size-bytes":77922304,"backend-size":"78 MB","backend-size-in-use-bytes":12779520,"backend-size-in-use":"13 MB"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.377+0100","caller":"etcdserver/raft.go:556","msg":"forcing restart member","cluster-id":"5d2267386b95c0fc","local-member-id":"689da5af1f9c0825","commit-index":11326204}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.377+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 switched to configuration voters=(7538363522856257573)"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.377+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 became follower at term 43"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.377+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 689da5af1f9c0825 [peers: [689da5af1f9c0825], term: 43, commit: 11326204, applied: 11300115, lastindex: 11326204, lastterm: 43]"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.377+0100","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.377+0100","caller":"membership/cluster.go:278","msg":"recovered/added member from store","cluster-id":"5d2267386b95c0fc","local-member-id":"689da5af1f9c0825","recovered-remote-peer-id":"689da5af1f9c0825","recovered-remote-peer-urls":["https://192.168.100.2:2380"]}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.377+0100","caller":"membership/cluster.go:287","msg":"set cluster version from store","cluster-version":"3.5"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:29:25.380+0100","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.380+0100","caller":"mvcc/kvstore.go:345","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":9905018}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.388+0100","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":9906571}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.389+0100","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.390+0100","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"689da5af1f9c0825","local-server-version":"3.5.1","cluster-id":"5d2267386b95c0fc","cluster-version":"3.5"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.390+0100","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"689da5af1f9c0825","forward-ticks":9,"forward-duration":"4.5s","election-ticks":10,"election-timeout":"5s"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.391+0100","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"689da5af1f9c0825","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"],"listen-client-urls":["http://127.0.0.1:2399"],"listen-metrics-urls":[]}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.391+0100","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"127.0.0.1:2380"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.391+0100","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"127.0.0.1:2380"}
Feb 04 06:29:25 delightful k3s[859533]: time="2022-02-04T06:29:25+01:00" level=info msg="Reconciling bootstrap data between datastore and disk"
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.878+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 is starting a new election at term 43"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.878+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 became pre-candidate at term 43"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.878+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 received MsgPreVoteResp from 689da5af1f9c0825 at term 43"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.878+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 became candidate at term 44"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.878+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 received MsgVoteResp from 689da5af1f9c0825 at term 44"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.878+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 became leader at term 44"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.878+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 689da5af1f9c0825 elected leader 689da5af1f9c0825 at term 44"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.893+0100","caller":"etcdserver/server.go:2029","msg":"published local member to cluster through raft","local-member-id":"689da5af1f9c0825","local-member-attributes":"{Name:default ClientURLs:[http://localhost:2379]}","request-path":"/0/members/689da5af1f9c0825/attributes","cluster-id":"5d2267386b95c0fc","publish-timeout":"15s"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.893+0100","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.894+0100","caller":"embed/serve.go:140","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2399"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.902+0100","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"default","data-dir":"/var/lib/rancher/k3s/server/db/tmp-etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.903+0100","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"689da5af1f9c0825","current-leader-member-id":"689da5af1f9c0825"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.919+0100","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"127.0.0.1:2380"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.920+0100","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"127.0.0.1:2380"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.920+0100","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"default","data-dir":"/var/lib/rancher/k3s/server/db/tmp-etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
Feb 04 06:29:25 delightful k3s[859533]: time="2022-02-04T06:29:25+01:00" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1641734570: notBefore=2022-01-09 13:22:50 +0000 UTC notAfter=2023-02-04 05:29:25 +0000 UTC"
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.989+0100","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.100.2:2380"]}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.989+0100","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, client-cert=, client-key=, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Feb 04 06:29:25 delightful k3s[859533]: time="2022-02-04T06:29:25+01:00" level=info msg="Active TLS secret (ver=) (count 9): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.100.2:192.168.100.2 listener.cattle.io/cn-delightful:delightful listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=EE113EAE8ED10904AFD4BD16B4B6B109EA58EB86]"
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.990+0100","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.100.2:2379"]}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.990+0100","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.1","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.16.10","go-os":"linux","go-arch":"amd64","max-cpu-set":12,"max-cpu-available":12,"member-initialized":true,"name":"delightful-dc9f4862","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["https://192.168.100.2:2380"],"advertise-client-urls":["https://192.168.100.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.100.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
Feb 04 06:29:25 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:25.997+0100","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"7.5819ms"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.341+0100","caller":"etcdserver/server.go:508","msg":"recovered v2 store from snapshot","snapshot-index":11300115,"snapshot-size":"117 kB"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.341+0100","caller":"etcdserver/server.go:518","msg":"recovered v3 backend from snapshot","backend-size-bytes":77922304,"backend-size":"78 MB","backend-size-in-use-bytes":12779520,"backend-size-in-use":"13 MB"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.505+0100","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"5d2267386b95c0fc","local-member-id":"689da5af1f9c0825","commit-index":11326204}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.506+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 switched to configuration voters=(7538363522856257573)"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.506+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 became follower at term 43"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.506+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 689da5af1f9c0825 [peers: [689da5af1f9c0825], term: 43, commit: 11326204, applied: 11300115, lastindex: 11326204, lastterm: 43]"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.506+0100","caller":"membership/cluster.go:278","msg":"recovered/added member from store","cluster-id":"5d2267386b95c0fc","local-member-id":"689da5af1f9c0825","recovered-remote-peer-id":"689da5af1f9c0825","recovered-remote-peer-urls":["https://192.168.100.2:2380"]}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.506+0100","caller":"membership/cluster.go:287","msg":"set cluster version from store","cluster-version":"3.5"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:29:26.507+0100","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.508+0100","caller":"mvcc/kvstore.go:345","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":9905018}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.513+0100","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":9906571}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.515+0100","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"689da5af1f9c0825","local-server-version":"3.5.1","cluster-id":"5d2267386b95c0fc","cluster-version":"3.5"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.515+0100","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"689da5af1f9c0825","forward-ticks":9,"forward-duration":"4.5s","election-ticks":10,"election-timeout":"5s"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.515+0100","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/server-client.key, client-cert=, client-key=, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.515+0100","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.100.2:2380"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.515+0100","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.100.2:2380"}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.515+0100","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"689da5af1f9c0825","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["https://192.168.100.2:2380"],"advertise-client-urls":["https://192.168.100.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.100.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
Feb 04 06:29:26 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:26.516+0100","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Running kube-apiserver --advertise-address=192.168.100.2 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m0s --port=0 --profiling=false"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="To join node to cluster: k3s agent -s https://65.108.6.254:6443 -t ${NODE_TOKEN}"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Run: k3s kubectl"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="certificate CN=delightful signed by CN=k3s-server-ca@1641734570: notBefore=2022-01-09 13:22:50 +0000 UTC notAfter=2023-02-04 05:29:26 +0000 UTC"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="certificate CN=system:node:delightful,O=system:nodes signed by CN=k3s-client-ca@1641734570: notBefore=2022-01-09 13:22:50 +0000 UTC notAfter=2023-02-04 05:29:26 +0000 UTC"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Module overlay was already loaded"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Module nf_conntrack was already loaded"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Module br_netfilter was already loaded"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Module iptable_nat was already loaded"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Using private registry config file at /etc/rancher/k3s/registries.yaml"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Feb 04 06:29:26 delightful k3s[859533]: time="2022-02-04T06:29:26+01:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.007+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 is starting a new election at term 43"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.007+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 became pre-candidate at term 43"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.007+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 received MsgPreVoteResp from 689da5af1f9c0825 at term 43"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.007+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 became candidate at term 44"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.007+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 received MsgVoteResp from 689da5af1f9c0825 at term 44"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.007+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 became leader at term 44"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.007+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 689da5af1f9c0825 elected leader 689da5af1f9c0825 at term 44"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.008+0100","caller":"etcdserver/server.go:2029","msg":"published local member to cluster through raft","local-member-id":"689da5af1f9c0825","local-member-attributes":"{Name:delightful-dc9f4862 ClientURLs:[https://192.168.100.2:2379]}","request-path":"/0/members/689da5af1f9c0825/attributes","cluster-id":"5d2267386b95c0fc","publish-timeout":"15s"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.008+0100","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.008+0100","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.010+0100","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.100.2:2379"}
Feb 04 06:29:27 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:29:27.010+0100","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
Feb 04 06:29:27 delightful k3s[859533]: time="2022-02-04T06:29:27+01:00" level=info msg="etcd data store connection OK"
Feb 04 06:29:27 delightful k3s[859533]: time="2022-02-04T06:29:27+01:00" level=info msg="Waiting for API server to become available"
Feb 04 06:29:27 delightful k3s[859533]: Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.015470 859533 server.go:581] external host was not specified, using 192.168.100.2
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.015706 859533 server.go:175] Version: v1.22.6+k3s1
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.018912 859533 shared_informer.go:240] Waiting for caches to sync for node_authorizer
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.020103 859533 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.020117 859533 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.021431 859533 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.021447 859533 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Feb 04 06:29:27 delightful k3s[859533]: time="2022-02-04T06:29:27+01:00" level=warning msg="bootstrap key already exists"
Feb 04 06:29:27 delightful k3s[859533]: time="2022-02-04T06:29:27+01:00" level=info msg="Reconciling etcd snapshot data in k3s-etcd-snapshots ConfigMap"
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.040531 859533 genericapiserver.go:455] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.041696 859533 instance.go:278] Using reconciler: lease
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.103135 859533 rest.go:130] the default service ipfamily for this cluster is: IPv4
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.336611 859533 genericapiserver.go:455] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.337738 859533 genericapiserver.go:455] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.345420 859533 genericapiserver.go:455] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.346526 859533 genericapiserver.go:455] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.350298 859533 genericapiserver.go:455] Skipping API networking.k8s.io/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.352190 859533 genericapiserver.go:455] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.356171 859533 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.356179 859533 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.357143 859533 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.357150 859533 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.359716 859533 genericapiserver.go:455] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.362022 859533 genericapiserver.go:455] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.364581 859533 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.364588 859533 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.365884 859533 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.368358 859533 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.368369 859533 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.373853 859533 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
Feb 04 06:29:27 delightful k3s[859533]: time="2022-02-04T06:29:27+01:00" level=info msg="Containerd is now running"
Feb 04 06:29:27 delightful k3s[859533]: time="2022-02-04T06:29:27+01:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Feb 04 06:29:27 delightful k3s[859533]: time="2022-02-04T06:29:27+01:00" level=info msg="Handling backend connection request [delightful]"
Feb 04 06:29:27 delightful k3s[859533]: time="2022-02-04T06:29:27+01:00" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --config=/etc/rancher/k3s/kubelet.yaml --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=delightful --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-ip=192.168.100.2 --node-labels=topology.kubernetes.io/region=west-eu,topology.kubernetes.io/zone=HEL1-DC6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Feb 04 06:29:27 delightful k3s[859533]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Feb 04 06:29:27 delightful k3s[859533]: Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Feb 04 06:29:27 delightful k3s[859533]: Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Feb 04 06:29:27 delightful k3s[859533]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Feb 04 06:29:27 delightful k3s[859533]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Feb 04 06:29:27 delightful k3s[859533]: Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Feb 04 06:29:27 delightful k3s[859533]: Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Feb 04 06:29:27 delightful k3s[859533]: time="2022-02-04T06:29:27+01:00" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Feb 04 06:29:27 delightful systemd[1]: Started Kubernetes systemd probe.
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.650370 859533 server.go:436] "Kubelet version" kubeletVersion="v1.22.6+k3s1"
Feb 04 06:29:27 delightful k3s[859533]: W0204 06:29:27.661027 859533 manager.go:159] Cannot detect current cgroup on cgroup v2
Feb 04 06:29:27 delightful k3s[859533]: I0204 06:29:27.661045 859533 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
Feb 04 06:29:27 delightful systemd[1]: run-r53cfa19eeef64f18a55dec521fe606c2.scope: Deactivated successfully.
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.215552 859533 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.215559 859533 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.215677 859533 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.215949 859533 secure_serving.go:266] Serving securely on 127.0.0.1:6444
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.215998 859533 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216021 859533 apf_controller.go:312] Starting API Priority and Fairness config controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216050 859533 customresource_discovery_controller.go:209] Starting DiscoveryController
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216066 859533 autoregister_controller.go:141] Starting autoregister controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216081 859533 cache.go:32] Waiting for caches to sync for autoregister controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216086 859533 available_controller.go:491] Starting AvailableConditionController
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216097 859533 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216104 859533 crdregistration_controller.go:111] Starting crd-autoregister controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216114 859533 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216021 859533 controller.go:83] Starting OpenAPI AggregationController
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216187 859533 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216217 859533 establishing_controller.go:76] Starting EstablishingController
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216332 859533 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216346 859533 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216357 859533 crd_finalizer.go:266] Starting CRDFinalizer
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216718 859533 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216730 859533 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216798 859533 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.216856 859533 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.217009 859533 apiservice_controller.go:97] Starting APIServiceRegistrationController
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.217020 859533 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.217047 859533 controller.go:85] Starting OpenAPI controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.217066 859533 naming_controller.go:291] Starting NamingConditionController
Feb 04 06:29:28 delightful k3s[859533]: E0204 06:29:28.267024 859533 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.316143 859533 shared_informer.go:247] Caches are synced for crd-autoregister
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.316151 859533 cache.go:39] Caches are synced for autoregister controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.316163 859533 apf_controller.go:317] Running API Priority and Fairness config worker
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.316144 859533 cache.go:39] Caches are synced for AvailableConditionController controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.316761 859533 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
Feb 04 06:29:28 delightful k3s[859533]: E0204 06:29:28.317009 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.317482 859533 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Feb 04 06:29:28 delightful k3s[859533]: I0204 06:29:28.322061 859533 shared_informer.go:247] Caches are synced for node_authorizer
Feb 04 06:29:28 delightful k3s[859533]: E0204 06:29:28.325913 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:28 delightful k3s[859533]: E0204 06:29:28.336833 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:28 delightful k3s[859533]: E0204 06:29:28.357400 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:28 delightful k3s[859533]: E0204 06:29:28.398291 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:28 delightful k3s[859533]: E0204 06:29:28.478684 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:28 delightful k3s[859533]: E0204 06:29:28.639932 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:28 delightful k3s[859533]: E0204 06:29:28.960591 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:29 delightful k3s[859533]: I0204 06:29:29.215949 859533 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Feb 04 06:29:29 delightful k3s[859533]: I0204 06:29:29.215973 859533 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Feb 04 06:29:29 delightful k3s[859533]: I0204 06:29:29.218373 859533 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
Feb 04 06:29:29 delightful k3s[859533]: E0204 06:29:29.601345 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:29 delightful k3s[859533]: time="2022-02-04T06:29:29+01:00" level=info msg="labels have already set on node: delightful"
Feb 04 06:29:29 delightful k3s[859533]: time="2022-02-04T06:29:29+01:00" level=info msg="Starting flannel with backend vxlan"
Feb 04 06:29:29 delightful k3s[859533]: time="2022-02-04T06:29:29+01:00" level=info msg="Flannel found PodCIDR assigned for node delightful"
Feb 04 06:29:29 delightful k3s[859533]: time="2022-02-04T06:29:29+01:00" level=info msg="The interface enp9s0.4000 with ipv4 address 192.168.100.2 will be used by flannel"
Feb 04 06:29:29 delightful k3s[859533]: I0204 06:29:29.627048 859533 kube.go:120] Waiting 10m0s for node controller to sync
Feb 04 06:29:29 delightful k3s[859533]: I0204 06:29:29.627066 859533 kube.go:378] Starting kube subnet manager
Feb 04 06:29:29 delightful k3s[859533]: I0204 06:29:29.732173 859533 network_policy_controller.go:144] Starting network policy controller
Feb 04 06:29:29 delightful systemd[1]: Started Lightweight Kubernetes.
Feb 04 06:29:29 delightful k3s[859533]: I0204 06:29:29.815660 859533 network_policy_controller.go:154] Starting network policy controller full sync goroutine
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Kube API server is now running"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="k3s is up and running"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Waiting for cloud-controller-manager privileges to become available"
Feb 04 06:29:30 delightful k3s[859533]: W0204 06:29:30.084533 859533 handler_proxy.go:104] no RequestInfo found in the context
Feb 04 06:29:30 delightful k3s[859533]: E0204 06:29:30.084578 859533 controller.go:116] loading OpenAPI spec for "v1.cluster.loft.sh" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Feb 04 06:29:30 delightful k3s[859533]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.084587 859533 controller.go:129] OpenAPI AggregationController: action for item v1.cluster.loft.sh: Rate Limited Requeue.
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Applying CRD addons.k3s.cattle.io"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.293519 859533 serving.go:354] Generated self-signed cert in-memory
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Applying CRD helmcharts.helm.cattle.io"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.318767 859533 serving.go:354] Generated self-signed cert in-memory
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Applying CRD helmchartconfigs.helm.cattle.io"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.394869 859533 serving.go:354] Generated self-signed cert in-memory
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-10.9.100.tgz"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-10.9.100.tgz"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Reconciliation of snapshot data in k3s-etcd-snapshots ConfigMap complete"
Feb 04 06:29:30 delightful k3s[859533]: W0204 06:29:30.553431 859533 authorization.go:47] Authorization is disabled
Feb 04 06:29:30 delightful k3s[859533]: W0204 06:29:30.553440 859533 authentication.go:47] Authentication is disabled
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.553447 859533 deprecated_insecure_serving.go:54] Serving healthz insecurely on [::]:10251
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.554977 859533 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.554987 859533 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.554992 859533 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.555005 859533 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.554992 859533 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.555015 859533 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.555144 859533 secure_serving.go:200] Serving securely on 127.0.0.1:10259
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.555165 859533 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.628131 859533 kube.go:127] Node controller sync successful
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.628177 859533 vxlan.go:137] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.628414 859533 kube.go:345] Skip setting NodeNetworkUnavailable
Feb 04 06:29:30 delightful k3s[859533]: E0204 06:29:30.645622 859533 memcache.go:196] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Wrote flannel subnet file to /run/flannel/subnet.env"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Running flannel backend."
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.646534 859533 vxlan_network.go:60] watching for new subnet leases
Feb 04 06:29:30 delightful k3s[859533]: E0204 06:29:30.649678 859533 memcache.go:101] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Creating deploy event broadcaster"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"8058d992-4f89-4ba5-93bc-a75c73853b7f\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"279\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.655112 859533 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.655239 859533 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.655588 859533 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.655769 859533 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: E0204 06:29:30.656059 859533 memcache.go:196] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:30 delightful k3s[859533]: E0204 06:29:30.657172 859533 memcache.go:101] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Starting /v1, Kind=Node controller"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.657945 859533 leaderelection.go:248] attempting to acquire leader lease kube-system/k3s...
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Cluster dns configmap already exists"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.660707 859533 leaderelection.go:258] successfully acquired lease kube-system/k3s
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Labels and annotations have been set successfully on node: delightful"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.670896 859533 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"8058d992-4f89-4ba5-93bc-a75c73853b7f\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"279\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.672686 859533 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"334303ad-1ce9-4280-878d-c198f03a19f1\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"9805869\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.699646 859533 controller.go:611] quota admission added evaluator for: deployments.apps
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"334303ad-1ce9-4280-878d-c198f03a19f1\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"9805869\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"2a3c9021-74d3-4a4a-9437-2c03e6e09113\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"298\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"2a3c9021-74d3-4a4a-9437-2c03e6e09113\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"298\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"fa7b18f5-25f3-4bb7-851a-e8412e4ab7bb\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"305\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"fa7b18f5-25f3-4bb7-851a-e8412e4ab7bb\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"305\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"4be6a7f1-30f5-446f-a4d0-b5947660f8a7\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"308\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"4be6a7f1-30f5-446f-a4d0-b5947660f8a7\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"308\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"1abccae5-ef7b-4304-a5b4-9d4f27417963\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"313\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"1abccae5-ef7b-4304-a5b4-9d4f27417963\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"313\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-deployment\", UID:\"d4514d18-c81e-41a0-9c7a-0d94dad5859a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"9805877\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-deployment\", UID:\"d4514d18-c81e-41a0-9c7a-0d94dad5859a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"9805877\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-service\", UID:\"1d14524b-9326-43fa-96cf-fbf17a9bfd33\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"9805880\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-service\", UID:\"1d14524b-9326-43fa-96cf-fbf17a9bfd33\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"9805880\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"resource-reader\", UID:\"5a9c351d-8f33-49d6-9f5d-42c154f445a3\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"344\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"resource-reader\", UID:\"5a9c351d-8f33-49d6-9f5d-42c154f445a3\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"344\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"dc6cffa2-e162-441f-9bdc-7a09b390ce0d\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"351\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: time="2022-02-04T06:29:30+01:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"dc6cffa2-e162-441f-9bdc-7a09b390ce0d\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"351\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.782974 859533 controllermanager.go:186] Version: v1.22.6+k3s1
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.784177 859533 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.784181 859533 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.784184 859533 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.784194 859533 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.784190 859533 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.784190 859533 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.784341 859533 secure_serving.go:200] Serving securely on 127.0.0.1:10257
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.784370 859533 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.784908 859533 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.874753 859533 controllermanager.go:142] Version: v1.22.6+k3s1
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.876720 859533 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.876729 859533 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.876743 859533 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.876735 859533 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.876721 859533 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.876834 859533 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.877200 859533 secure_serving.go:200] Serving securely on 127.0.0.1:10258
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.877308 859533 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.878655 859533 leaderelection.go:248] attempting to acquire leader lease kube-system/cloud-controller-manager...
Feb 04 06:29:30 delightful k3s[859533]: E0204 06:29:30.881808 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.884573 859533 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.884590 859533 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.884573 859533 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.977190 859533 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.977198 859533 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Feb 04 06:29:30 delightful k3s[859533]: I0204 06:29:30.977211 859533 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Feb 04 06:29:31 delightful k3s[859533]: E0204 06:29:31.368336 859533 memcache.go:196] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:31 delightful k3s[859533]: E0204 06:29:31.369962 859533 memcache.go:101] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:31 delightful k3s[859533]: time="2022-02-04T06:29:31+01:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
Feb 04 06:29:31 delightful k3s[859533]: time="2022-02-04T06:29:31+01:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
Feb 04 06:29:31 delightful k3s[859533]: E0204 06:29:31.376752 859533 memcache.go:196] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:31 delightful k3s[859533]: E0204 06:29:31.377762 859533 memcache.go:101] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:31 delightful k3s[859533]: time="2022-02-04T06:29:31+01:00" level=info msg="Starting apps/v1, Kind=Deployment controller"
Feb 04 06:29:31 delightful k3s[859533]: time="2022-02-04T06:29:31+01:00" level=info msg="Starting apps/v1, Kind=DaemonSet controller"
Feb 04 06:29:31 delightful k3s[859533]: I0204 06:29:31.381425 859533 controller.go:611] quota admission added evaluator for: helmcharts.helm.cattle.io
Feb 04 06:29:31 delightful k3s[859533]: E0204 06:29:31.384448 859533 memcache.go:196] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:31 delightful k3s[859533]: E0204 06:29:31.385467 859533 memcache.go:101] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:31 delightful k3s[859533]: time="2022-02-04T06:29:31+01:00" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller"
Feb 04 06:29:31 delightful k3s[859533]: E0204 06:29:31.390284 859533 memcache.go:196] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:31 delightful k3s[859533]: E0204 06:29:31.393856 859533 memcache.go:101] couldn't get resource list for cluster.loft.sh/v1: the server is currently unable to handle the request
Feb 04 06:29:31 delightful k3s[859533]: time="2022-02-04T06:29:31+01:00" level=info msg="Starting batch/v1, Kind=Job controller"
Feb 04 06:29:32 delightful k3s[859533]: time="2022-02-04T06:29:32+01:00" level=info msg="Starting /v1, Kind=ServiceAccount controller"
Feb 04 06:29:32 delightful k3s[859533]: time="2022-02-04T06:29:32+01:00" level=info msg="Starting /v1, Kind=Service controller"
Feb 04 06:29:32 delightful k3s[859533]: time="2022-02-04T06:29:32+01:00" level=info msg="Starting /v1, Kind=Pod controller"
Feb 04 06:29:32 delightful k3s[859533]: time="2022-02-04T06:29:32+01:00" level=info msg="Starting /v1, Kind=Endpoints controller"
Feb 04 06:29:32 delightful k3s[859533]: time="2022-02-04T06:29:32+01:00" level=info msg="Starting /v1, Kind=ConfigMap controller"
Feb 04 06:29:32 delightful k3s[859533]: time="2022-02-04T06:29:32+01:00" level=info msg="Starting /v1, Kind=Secret controller"
Feb 04 06:29:32 delightful k3s[859533]: time="2022-02-04T06:29:32+01:00" level=info msg="Updating TLS secret for k3s-serving (count: 22): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.100.1:192.168.100.1 listener.cattle.io/cn-192.168.100.2:192.168.100.2 listener.cattle.io/cn-192.168.100.3:192.168.100.3 listener.cattle.io/cn-65.108.6.254:65.108.6.254 listener.cattle.io/cn-65.108.6.254_6443-6ade02:65.108.6.254:6443 listener.cattle.io/cn-65.108.66.126:65.108.66.126 listener.cattle.io/cn-65.108.66.126_6443-2502bf:65.108.66.126:6443 listener.cattle.io/cn-65.108.75.198:65.108.75.198 listener.cattle.io/cn-65.108.75.198_6443-b79d12:65.108.75.198:6443 listener.cattle.io/cn-capital:capital listener.cattle.io/cn-capital.bottled.codes:capital.bottled.codes listener.cattle.io/cn-delightful:delightful listener.cattle.io/cn-delightful.bottled.codes:delightful.bottled.codes listener.cattle.io/cn-donkey:donkey listener.cattle.io/cn-donkey.bottled.codes:donkey.bottled.codes listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=09DD11925D86ED998F4F643A0F3F163F9B8697F7]"
Feb 04 06:29:32 delightful k3s[859533]: time="2022-02-04T06:29:32+01:00" level=info msg="Active TLS secret k3s-serving (ver=6048899) (count 22): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.100.1:192.168.100.1 listener.cattle.io/cn-192.168.100.2:192.168.100.2 listener.cattle.io/cn-192.168.100.3:192.168.100.3 listener.cattle.io/cn-65.108.6.254:65.108.6.254 listener.cattle.io/cn-65.108.6.254_6443-6ade02:65.108.6.254:6443 listener.cattle.io/cn-65.108.66.126:65.108.66.126 listener.cattle.io/cn-65.108.66.126_6443-2502bf:65.108.66.126:6443 listener.cattle.io/cn-65.108.75.198:65.108.75.198 listener.cattle.io/cn-65.108.75.198_6443-b79d12:65.108.75.198:6443 listener.cattle.io/cn-capital:capital listener.cattle.io/cn-capital.bottled.codes:capital.bottled.codes listener.cattle.io/cn-delightful:delightful listener.cattle.io/cn-delightful.bottled.codes:delightful.bottled.codes listener.cattle.io/cn-donkey:donkey listener.cattle.io/cn-donkey.bottled.codes:donkey.bottled.codes listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=09DD11925D86ED998F4F643A0F3F163F9B8697F7]"
Feb 04 06:29:32 delightful k3s[859533]: time="2022-02-04T06:29:32+01:00" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=delightful --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Feb 04 06:29:32 delightful k3s[859533]: W0204 06:29:32.613580 859533 server.go:224] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.630328 859533 node.go:172] Successfully retrieved node IP: 192.168.100.2
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.630340 859533 server_others.go:140] Detected node IP 192.168.100.2
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.631897 859533 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.631920 859533 server_others.go:212] Using iptables Proxier.
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.631931 859533 server_others.go:219] creating dualStackProxier for iptables.
Feb 04 06:29:32 delightful k3s[859533]: W0204 06:29:32.631944 859533 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.632157 859533 server.go:649] Version: v1.22.6+k3s1
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.632508 859533 config.go:315] Starting service config controller
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.632519 859533 shared_informer.go:240] Waiting for caches to sync for service config
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.632526 859533 config.go:224] Starting endpoint slice config controller
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.632534 859533 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.635626 859533 controller.go:611] quota admission added evaluator for: events.events.k8s.io
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.690999 859533 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.691122 859533 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.691179 859533 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.691201 859533 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.691210 859533 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.691229 859533 state_mem.go:36] "Initialized new in-memory state store"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.691339 859533 kubelet.go:418] "Attempting to sync node with API server"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.691354 859533 kubelet.go:279] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.691375 859533 kubelet.go:290] "Adding apiserver pod source"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.691393 859533 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.691806 859533 kuberuntime_manager.go:245] "Container runtime initialized" containerRuntime="containerd" version="v1.5.9-k3s1" apiVersion="v1alpha2"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.692260 859533 server.go:1213] "Started kubelet"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.692365 859533 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Feb 04 06:29:32 delightful k3s[859533]: E0204 06:29:32.693055 859533 cri_stats_provider.go:372] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 04 06:29:32 delightful k3s[859533]: E0204 06:29:32.693074 859533 kubelet.go:1343] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.693225 859533 scope.go:110] "RemoveContainer" containerID="828f6c3d1db5ace266a334678a523cb2bc3e7cb0ec2d74936a3befb84a20732f"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.693342 859533 server.go:409] "Adding debug handlers to kubelet server"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.694961 859533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.695046 859533 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.695097 859533 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.701541 859533 scope.go:110] "RemoveContainer" containerID="e2ea6f588ceac2d2f26d4734006cd2e9a04d2c2b5e424cb49dad1d00b3bd12d1"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.704607 859533 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.707777 859533 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.708427 859533 scope.go:110] "RemoveContainer" containerID="e12fb4e2d90851712a86b7424fff527ecf5de0d5e5e8b4483429811e516eedd8"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.716156 859533 scope.go:110] "RemoveContainer" containerID="b9b2d928bbbe1184b3c33bb95144631286dd237b5d52c2c815ee54ac52aeb2c0"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.717068 859533 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.717083 859533 status_manager.go:158] "Starting to sync pod status with apiserver"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.717094 859533 kubelet.go:1967] "Starting kubelet main sync loop"
Feb 04 06:29:32 delightful k3s[859533]: E0204 06:29:32.717123 859533 kubelet.go:1991] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.722440 859533 scope.go:110] "RemoveContainer" containerID="58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.727201 859533 scope.go:110] "RemoveContainer" containerID="7360c78e30cbf8479c8703e192c2507b32c4681cd10f370bb5c02bdd7066ca22"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.732565 859533 shared_informer.go:247] Caches are synced for endpoint slice config
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.732589 859533 shared_informer.go:247] Caches are synced for service config
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.798406 859533 kuberuntime_manager.go:1078] "Updating runtime config through cri with podcidr" CIDR="10.42.1.0/24"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.798964 859533 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.1.0/24"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.801462 859533 kubelet_node_status.go:71] "Attempting to register node" node="delightful"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.811267 859533 kubelet_node_status.go:109] "Node was previously registered" node="delightful"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.811326 859533 kubelet_node_status.go:74] "Successfully registered node" node="delightful"
Feb 04 06:29:32 delightful k3s[859533]: I0204 06:29:32.814611 859533 setters.go:577] "Node became not ready" node="delightful" condition={Type:Ready Status:False LastHeartbeatTime:2022-02-04 06:29:32.814581167 +0100 CET m=+8.372720770 LastTransitionTime:2022-02-04 06:29:32.814581167 +0100 CET m=+8.372720770 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Feb 04 06:29:32 delightful k3s[859533]: E0204 06:29:32.817146 859533 kubelet.go:1991] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 04 06:29:33 delightful k3s[859533]: E0204 06:29:33.017806 859533 kubelet.go:1991] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.051441 859533 cpu_manager.go:209] "Starting CPU manager" policy="none"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.051458 859533 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.051476 859533 state_mem.go:36] "Initialized new in-memory state store"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.051689 859533 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.051704 859533 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.051711 859533 policy_none.go:49] "None policy: Start"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.052573 859533 memory_manager.go:168] "Starting memorymanager" policy="None"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.052590 859533 state_mem.go:35] "Initializing new in-memory state store"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.052706 859533 state_mem.go:75] "Updated machine memory state"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.055272 859533 manager.go:609] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.055547 859533 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.106209 859533 trace.go:205] Trace[241460775]: "List" url:/api/v1/secrets,user-agent:Netdata/auto-discovery,audit-id:37cefb0b-57b8-4659-954c-9cec7b82bd11,client:65.108.6.254,accept:application/json, */*,protocol:HTTP/1.1 (04-Feb-2022 06:29:32.344) (total time: 761ms):
Feb 04 06:29:33 delightful k3s[859533]: Trace[241460775]: ---"Writing http response done" count:152 761ms (06:29:33.106)
Feb 04 06:29:33 delightful k3s[859533]: Trace[241460775]: [761.267197ms] [761.267197ms] END
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418418 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4cab4f7b793c8cf17a061b003a0198311cadb4ac491a51bd2a223d2c20619d19"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418454 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1a94d11d6c04cc56404b3dcd47eb7caf3af1482d8f14075108a39507b0f0d396"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418475 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d9cafa8614f3fecfce3cc1648d16282741be313ca73e44de82104fca18aac8d9"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418514 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7d0f523e8890ae12c752200793d4c67ef9335347f318145b624e2757880a818f"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418546 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="165f1604a52a2732e970048596d08a6557b40bdd35635655b9e9718b35273e29"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418572 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="bfa7d9224ee2a7563c6b39087978689e8f4f3f12eb1f5428d8ff5394b455642b"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418576 859533 scope.go:110] "RemoveContainer" containerID="58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418621 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4576f6c01278d5d294d12ba4349d7d1dcc22858cf0ed9fa83f788e8097d01a6d"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418641 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2fefefb29ad9f1889646bebe82883eb3ddd30b6cc935fdad03cd17ad4d5ff000"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418670 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b9b2d928bbbe1184b3c33bb95144631286dd237b5d52c2c815ee54ac52aeb2c0"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418699 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7360c78e30cbf8479c8703e192c2507b32c4681cd10f370bb5c02bdd7066ca22"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418734 859533 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e9a985cc7369031fb57bd32b7b01b397190e3ba502ecb5a40f3649264906eb8b"
Feb 04 06:29:33 delightful k3s[859533]: E0204 06:29:33.418967 859533 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87\": not found" containerID="58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.418998 859533 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87} err="failed to get container status \"58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87\": rpc error: code = NotFound desc = an error occurred when try to find container \"58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87\": not found"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.419010 859533 scope.go:110] "RemoveContainer" containerID="58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.419234 859533 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87} err="failed to get container status \"58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87\": rpc error: code = NotFound desc = an error occurred when try to find container \"58f2641cd364195ada26ad884511c167042d14232b6ca679fe6c1c80b9e41f87\": not found"
Feb 04 06:29:33 delightful k3s[859533]: E0204 06:29:33.442638 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.691913 859533 apiserver.go:52] "Watching apiserver"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.700510 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.700583 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.700620 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.700672 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.700739 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.700810 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.700969 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.701057 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.701130 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.701273 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.701324 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.701392 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.701674 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.701740 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.701958 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.702126 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.702258 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.702378 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.702757 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.702870 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.703186 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.703285 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.703393 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.703585 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.703796 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.703918 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.704096 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.704283 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.704440 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.704567 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.704691 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.704952 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.705012 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.705045 859533 topology_manager.go:200] "Topology Admit Handler"
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803473 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56smv\" (UniqueName: \"kubernetes.io/projected/ddd82d90-763d-4312-8af6-6ac631c5c2c9-kube-api-access-56smv\") pod \"kured-qtl9q\" (UID: \"ddd82d90-763d-4312-8af6-6ac631c5c2c9\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803498 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"engine-binaries\" (UniqueName: \"kubernetes.io/host-path/c297fe84-2ff2-4c44-9006-282e616dd266-engine-binaries\") pod \"instance-manager-e-f3483ead\" (UID: \"c297fe84-2ff2-4c44-9006-282e616dd266\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803517 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6v6\" (UniqueName: \"kubernetes.io/projected/37ab04ba-c1e1-4866-b3cd-aaee7f74f9f9-kube-api-access-fm6v6\") pod \"metrics-server-ff9dbcb6c-dhgcq\" (UID: \"37ab04ba-c1e1-4866-b3cd-aaee7f74f9f9\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803535 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-config-volume\") pod \"coredns-96cc4f57d-n2l8m\" (UID: \"ce4eee6d-0702-44c4-97b7-01b1fe847ee5\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803558 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szq28\" (UniqueName: \"kubernetes.io/projected/07b01526-fd85-4e46-afbb-af16611cc427-kube-api-access-szq28\") pod \"magic-vpn-576c7c8449-tnxkw\" (UID: \"07b01526-fd85-4e46-afbb-af16611cc427\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803582 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/0214b3c4-dc36-43bc-819b-fc418393455f-proc\") pod \"kube-prometheus-stack-prometheus-node-exporter-hv4nm\" (UID: \"0214b3c4-dc36-43bc-819b-fc418393455f\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803652 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t6j6\" (UniqueName: \"kubernetes.io/projected/0517e2da-bcac-4a52-9c35-4476279bd96e-kube-api-access-5t6j6\") pod \"cert-manager-webhook-f588b48b8-6ml4x\" (UID: \"0517e2da-bcac-4a52-9c35-4476279bd96e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803701 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmbdp\" (UniqueName: \"kubernetes.io/projected/ea1cf136-0b2a-4d59-9cf6-6ae588847430-kube-api-access-xmbdp\") pod \"magic-vpn-576c7c8449-nffn4\" (UID: \"ea1cf136-0b2a-4d59-9cf6-6ae588847430\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803744 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c297fe84-2ff2-4c44-9006-282e616dd266-dev\") pod \"instance-manager-e-f3483ead\" (UID: \"c297fe84-2ff2-4c44-9006-282e616dd266\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803790 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0214b3c4-dc36-43bc-819b-fc418393455f-sys\") pod \"kube-prometheus-stack-prometheus-node-exporter-hv4nm\" (UID: \"0214b3c4-dc36-43bc-819b-fc418393455f\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803833 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2gh\" (UniqueName: \"kubernetes.io/projected/9c3d6657-fbf4-4eea-b152-0d6dfd2739fa-kube-api-access-2p2gh\") pod \"cert-manager-847544bbd-wsmcv\" (UID: \"9c3d6657-fbf4-4eea-b152-0d6dfd2739fa\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803870 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/28cf47ad-521d-47d3-8335-bc2c9135202e-proc\") pod \"longhorn-manager-4c68g\" (UID: \"28cf47ad-521d-47d3-8335-bc2c9135202e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803915 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltq8\" (UniqueName: \"kubernetes.io/projected/805f879c-df25-4110-82dc-9cdfb9fd8ed1-kube-api-access-xltq8\") pod \"infrastructure-auth-oauth2-proxy-574c4c5d7-xvgdh\" (UID: \"805f879c-df25-4110-82dc-9cdfb9fd8ed1\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803952 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj64d\" (UniqueName: \"kubernetes.io/projected/37c64531-8194-4d82-abf9-f87e25fe9a61-kube-api-access-kj64d\") pod \"kilo-sx96l\" (UID: \"37c64531-8194-4d82-abf9-f87e25fe9a61\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.803991 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-config\") pod \"kube-prometheus-stack-grafana-5b6ddfdbbf-xcrzj\" (UID: \"b9cc3efc-dd26-4c40-bff5-12142a0bbd5e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804030 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"persistencevarlibdir\" (UniqueName: \"kubernetes.io/host-path/bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b-persistencevarlibdir\") pod \"netdata-child-rntmc\" (UID: \"bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804071 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bkg4\" (UniqueName: \"kubernetes.io/projected/076ee88e-0d3e-431f-bd0c-81cdf79fdd73-kube-api-access-9bkg4\") pod \"loft-agent-7f89866d89-cdfq9\" (UID: \"076ee88e-0d3e-431f-bd0c-81cdf79fdd73\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804111 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-989pr\" (UniqueName: \"kubernetes.io/projected/0b477933-6543-4e47-8c33-af20a95c2f75-kube-api-access-989pr\") pod \"ingress-nginx-controller-5c4b454d48-5kf9t\" (UID: \"0b477933-6543-4e47-8c33-af20a95c2f75\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804154 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-config-volume\" (UniqueName: \"kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-custom-config-volume\") pod \"coredns-96cc4f57d-n2l8m\" (UID: \"ce4eee6d-0702-44c4-97b7-01b1fe847ee5\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804191 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqpxm\" (UniqueName: \"kubernetes.io/projected/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-kube-api-access-cqpxm\") pod \"coredns-96cc4f57d-n2l8m\" (UID: \"ce4eee6d-0702-44c4-97b7-01b1fe847ee5\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804231 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ec0456d2-cb98-4427-9fbd-aab06faef4c3-host\") pod \"instance-manager-r-f12e5652\" (UID: \"ec0456d2-cb98-4427-9fbd-aab06faef4c3\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804271 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b9bp\" (UniqueName: \"kubernetes.io/projected/94040b94-6377-4e6d-ae07-3864c58580a8-kube-api-access-5b9bp\") pod \"magic-vpn-576c7c8449-2ttp4\" (UID: \"94040b94-6377-4e6d-ae07-3864c58580a8\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804312 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/aecf6c70-e83b-495a-b4c5-4a887f643a0a-tls-secret\") pod \"kube-prometheus-stack-operator-c89c44d9-bqrm4\" (UID: \"aecf6c70-e83b-495a-b4c5-4a887f643a0a\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804355 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/35bb2e01-dcd2-47f7-a405-c434e09892bd-config\") pod \"prometheus-kube-prometheus-stack-prometheus-0\" (UID: \"35bb2e01-dcd2-47f7-a405-c434e09892bd\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804393 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/35bb2e01-dcd2-47f7-a405-c434e09892bd-web-config\") pod \"prometheus-kube-prometheus-stack-prometheus-0\" (UID: \"35bb2e01-dcd2-47f7-a405-c434e09892bd\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804443 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-kube-prometheus-stack-prometheus-db\" (UniqueName: \"kubernetes.io/empty-dir/35bb2e01-dcd2-47f7-a405-c434e09892bd-prometheus-kube-prometheus-stack-prometheus-db\") pod \"prometheus-kube-prometheus-stack-prometheus-0\" (UID: \"35bb2e01-dcd2-47f7-a405-c434e09892bd\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804489 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbwjj\" (UniqueName: \"kubernetes.io/projected/35bb2e01-dcd2-47f7-a405-c434e09892bd-kube-api-access-hbwjj\") pod \"prometheus-kube-prometheus-stack-prometheus-0\" (UID: \"35bb2e01-dcd2-47f7-a405-c434e09892bd\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804523 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh\" (UniqueName: \"kubernetes.io/secret/ea1cf136-0b2a-4d59-9cf6-6ae588847430-ssh\") pod \"magic-vpn-576c7c8449-nffn4\" (UID: \"ea1cf136-0b2a-4d59-9cf6-6ae588847430\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804561 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/28cf47ad-521d-47d3-8335-bc2c9135202e-dev\") pod \"longhorn-manager-4c68g\" (UID: \"28cf47ad-521d-47d3-8335-bc2c9135202e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804598 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/c297fe84-2ff2-4c44-9006-282e616dd266-proc\") pod \"instance-manager-e-f3483ead\" (UID: \"c297fe84-2ff2-4c44-9006-282e616dd266\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804663 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/37ab04ba-c1e1-4866-b3cd-aaee7f74f9f9-tmp-dir\") pod \"metrics-server-ff9dbcb6c-dhgcq\" (UID: \"37ab04ba-c1e1-4866-b3cd-aaee7f74f9f9\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804708 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/35bb2e01-dcd2-47f7-a405-c434e09892bd-config-out\") pod \"prometheus-kube-prometheus-stack-prometheus-0\" (UID: \"35bb2e01-dcd2-47f7-a405-c434e09892bd\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804735 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sc-dashboard-provider\" (UniqueName: \"kubernetes.io/configmap/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-sc-dashboard-provider\") pod \"kube-prometheus-stack-grafana-5b6ddfdbbf-xcrzj\" (UID: \"b9cc3efc-dd26-4c40-bff5-12142a0bbd5e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804757 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4bf37305-e737-4b40-98b4-9fe8645039a5-data\") pod \"engine-image-ei-fa2dfbf0-7lrmc\" (UID: \"4bf37305-e737-4b40-98b4-9fe8645039a5\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804776 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/35bb2e01-dcd2-47f7-a405-c434e09892bd-tls-assets\") pod \"prometheus-kube-prometheus-stack-prometheus-0\" (UID: \"35bb2e01-dcd2-47f7-a405-c434e09892bd\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804833 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f74n2\" (UniqueName: \"kubernetes.io/projected/315cb7b5-5eed-490e-99b8-802a1837b0df-kube-api-access-f74n2\") pod \"jspolicy-86c9d569f8-cq25p\" (UID: \"315cb7b5-5eed-490e-99b8-802a1837b0df\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804874 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d-webhook-cert\") pod \"ingress-nginx-controller-5c4b454d48-9hnmx\" (UID: \"90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804902 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swlpg\" (UniqueName: \"kubernetes.io/projected/4bf37305-e737-4b40-98b4-9fe8645039a5-kube-api-access-swlpg\") pod \"engine-image-ei-fa2dfbf0-7lrmc\" (UID: \"4bf37305-e737-4b40-98b4-9fe8645039a5\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804936 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sbwm\" (UniqueName: \"kubernetes.io/projected/ec0456d2-cb98-4427-9fbd-aab06faef4c3-kube-api-access-7sbwm\") pod \"instance-manager-r-f12e5652\" (UID: \"ec0456d2-cb98-4427-9fbd-aab06faef4c3\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.804970 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62bq\" (UniqueName: \"kubernetes.io/projected/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-kube-api-access-q62bq\") pod \"kube-prometheus-stack-grafana-5b6ddfdbbf-xcrzj\" (UID: \"b9cc3efc-dd26-4c40-bff5-12142a0bbd5e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805003 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtvvg\" (UniqueName: \"kubernetes.io/projected/aecf6c70-e83b-495a-b4c5-4a887f643a0a-kube-api-access-rtvvg\") pod \"kube-prometheus-stack-operator-c89c44d9-bqrm4\" (UID: \"aecf6c70-e83b-495a-b4c5-4a887f643a0a\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805034 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tz7t\" (UniqueName: \"kubernetes.io/projected/7ac3a69f-cdc0-4f52-9342-5e471fda83ba-kube-api-access-7tz7t\") pod \"kube-prometheus-stack-kube-state-metrics-cf78d48d9-qxzsm\" (UID: \"7ac3a69f-cdc0-4f52-9342-5e471fda83ba\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805063 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37c64531-8194-4d82-abf9-f87e25fe9a61-scripts\") pod \"kilo-sx96l\" (UID: \"37c64531-8194-4d82-abf9-f87e25fe9a61\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805093 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85xh\" (UniqueName: \"kubernetes.io/projected/61bbc4cd-a62f-486f-a77e-41e046831de1-kube-api-access-b85xh\") pod \"ingress-nginx-controller-5c4b454d48-xgld5\" (UID: \"61bbc4cd-a62f-486f-a77e-41e046831de1\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805122 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sc-datasources-volume\" (UniqueName: \"kubernetes.io/empty-dir/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-sc-datasources-volume\") pod \"kube-prometheus-stack-grafana-5b6ddfdbbf-xcrzj\" (UID: \"b9cc3efc-dd26-4c40-bff5-12142a0bbd5e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805159 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-kube-prometheus-stack-alertmanager-db\" (UniqueName: \"kubernetes.io/empty-dir/318b1514-76f4-482a-bd40-da3cd7f60338-alertmanager-kube-prometheus-stack-alertmanager-db\") pod \"alertmanager-kube-prometheus-stack-alertmanager-0\" (UID: \"318b1514-76f4-482a-bd40-da3cd7f60338\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805185 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b-proc\") pod \"netdata-child-rntmc\" (UID: \"bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805220 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b-run\") pod \"netdata-child-rntmc\" (UID: \"bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805252 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzn9t\" (UniqueName: \"kubernetes.io/projected/90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d-kube-api-access-bzn9t\") pod \"ingress-nginx-controller-5c4b454d48-9hnmx\" (UID: \"90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805280 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkfkj\" (UniqueName: \"kubernetes.io/projected/a6c59db6-6f50-42b5-aaf3-844e18213743-kube-api-access-dkfkj\") pod \"external-dns-75dfff5574-qmqgb\" (UID: \"a6c59db6-6f50-42b5-aaf3-844e18213743\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805311 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8wjs\" (UniqueName: \"kubernetes.io/projected/be8340be-1e48-4367-87bc-11eb9d51f544-kube-api-access-p8wjs\") pod \"cert-manager-cainjector-5c747645bf-kdfz5\" (UID: \"be8340be-1e48-4367-87bc-11eb9d51f544\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805342 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8b4\" (UniqueName: \"kubernetes.io/projected/c297fe84-2ff2-4c44-9006-282e616dd266-kube-api-access-9g8b4\") pod \"instance-manager-e-f3483ead\" (UID: \"c297fe84-2ff2-4c44-9006-282e616dd266\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805369 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwtr\" (UniqueName: \"kubernetes.io/projected/8380b1f6-d9c0-4370-a903-f9c4f4144525-kube-api-access-bvwtr\") pod \"peer-validation-server-746bcfbb97-nqgmt\" (UID: \"8380b1f6-d9c0-4370-a903-f9c4f4144525\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805397 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k3s-agent\" (UniqueName: \"kubernetes.io/host-path/37c64531-8194-4d82-abf9-f87e25fe9a61-k3s-agent\") pod \"kilo-sx96l\" (UID: \"37c64531-8194-4d82-abf9-f87e25fe9a61\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805423 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61bbc4cd-a62f-486f-a77e-41e046831de1-webhook-cert\") pod \"ingress-nginx-controller-5c4b454d48-xgld5\" (UID: \"61bbc4cd-a62f-486f-a77e-41e046831de1\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805458 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"storage\" (UniqueName: \"kubernetes.io/empty-dir/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-storage\") pod \"kube-prometheus-stack-grafana-5b6ddfdbbf-xcrzj\" (UID: \"b9cc3efc-dd26-4c40-bff5-12142a0bbd5e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805483 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"longhorn\" (UniqueName: \"kubernetes.io/host-path/28cf47ad-521d-47d3-8335-bc2c9135202e-longhorn\") pod \"longhorn-manager-4c68g\" (UID: \"28cf47ad-521d-47d3-8335-bc2c9135202e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805513 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmain\" (UniqueName: \"kubernetes.io/configmap/805f879c-df25-4110-82dc-9cdfb9fd8ed1-configmain\") pod \"infrastructure-auth-oauth2-proxy-574c4c5d7-xvgdh\" (UID: \"805f879c-df25-4110-82dc-9cdfb9fd8ed1\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805545 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-kube-prometheus-stack-prometheus-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/35bb2e01-dcd2-47f7-a405-c434e09892bd-prometheus-kube-prometheus-stack-prometheus-rulefiles-0\") pod \"prometheus-kube-prometheus-stack-prometheus-0\" (UID: \"35bb2e01-dcd2-47f7-a405-c434e09892bd\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805572 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kilo-dir\" (UniqueName: \"kubernetes.io/host-path/37c64531-8194-4d82-abf9-f87e25fe9a61-kilo-dir\") pod \"kilo-sx96l\" (UID: \"37c64531-8194-4d82-abf9-f87e25fe9a61\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805594 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37c64531-8194-4d82-abf9-f87e25fe9a61-lib-modules\") pod \"kilo-sx96l\" (UID: \"37c64531-8194-4d82-abf9-f87e25fe9a61\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805622 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b-sys\") pod \"netdata-child-rntmc\" (UID: \"bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805664 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4gml\" (UniqueName: \"kubernetes.io/projected/bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b-kube-api-access-t4gml\") pod \"netdata-child-rntmc\" (UID: \"bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805699 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37c64531-8194-4d82-abf9-f87e25fe9a61-xtables-lock\") pod \"kilo-sx96l\" (UID: \"37c64531-8194-4d82-abf9-f87e25fe9a61\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805754 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sc-dashboard-volume\" (UniqueName: \"kubernetes.io/empty-dir/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-sc-dashboard-volume\") pod \"kube-prometheus-stack-grafana-5b6ddfdbbf-xcrzj\" (UID: \"b9cc3efc-dd26-4c40-bff5-12142a0bbd5e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805807 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b-config\") pod \"netdata-child-rntmc\" (UID: \"bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805847 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0214b3c4-dc36-43bc-819b-fc418393455f-root\") pod \"kube-prometheus-stack-prometheus-node-exporter-hv4nm\" (UID: \"0214b3c4-dc36-43bc-819b-fc418393455f\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805890 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls\" (UniqueName: \"kubernetes.io/secret/8380b1f6-d9c0-4370-a903-f9c4f4144525-tls\") pod \"peer-validation-server-746bcfbb97-nqgmt\" (UID: \"8380b1f6-d9c0-4370-a903-f9c4f4144525\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805929 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b997d\" (UniqueName: \"kubernetes.io/projected/28cf47ad-521d-47d3-8335-bc2c9135202e-kube-api-access-b997d\") pod \"longhorn-manager-4c68g\" (UID: \"28cf47ad-521d-47d3-8335-bc2c9135202e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.805965 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/empty-dir/37c64531-8194-4d82-abf9-f87e25fe9a61-kubeconfig\") pod \"kilo-sx96l\" (UID: \"37c64531-8194-4d82-abf9-f87e25fe9a61\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806000 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b-os-release\") pod \"netdata-child-rntmc\" (UID: \"bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806037 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sd-shared\" (UniqueName: \"kubernetes.io/empty-dir/bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b-sd-shared\") pod \"netdata-child-rntmc\" (UID: \"bfe9ff51-8e58-4c47-8c91-f6c3d4fa408b\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806077 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh\" (UniqueName: \"kubernetes.io/secret/07b01526-fd85-4e46-afbb-af16611cc427-ssh\") pod \"magic-vpn-576c7c8449-tnxkw\" (UID: \"07b01526-fd85-4e46-afbb-af16611cc427\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806120 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/318b1514-76f4-482a-bd40-da3cd7f60338-config-volume\") pod \"alertmanager-kube-prometheus-stack-alertmanager-0\" (UID: \"318b1514-76f4-482a-bd40-da3cd7f60338\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806159 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/318b1514-76f4-482a-bd40-da3cd7f60338-tls-assets\") pod \"alertmanager-kube-prometheus-stack-alertmanager-0\" (UID: \"318b1514-76f4-482a-bd40-da3cd7f60338\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806197 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snxgc\" (UniqueName: \"kubernetes.io/projected/318b1514-76f4-482a-bd40-da3cd7f60338-kube-api-access-snxgc\") pod \"alertmanager-kube-prometheus-stack-alertmanager-0\" (UID: \"318b1514-76f4-482a-bd40-da3cd7f60338\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806227 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh\" (UniqueName: \"kubernetes.io/secret/94040b94-6377-4e6d-ae07-3864c58580a8-ssh\") pod \"magic-vpn-576c7c8449-2ttp4\" (UID: \"94040b94-6377-4e6d-ae07-3864c58580a8\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806259 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"longhorn-default-setting\" (UniqueName: \"kubernetes.io/configmap/28cf47ad-521d-47d3-8335-bc2c9135202e-longhorn-default-setting\") pod \"longhorn-manager-4c68g\" (UID: \"28cf47ad-521d-47d3-8335-bc2c9135202e\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806290 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x8r8\" (UniqueName: \"kubernetes.io/projected/b38723c2-29a2-4f5c-afbc-99a68c6cdc22-kube-api-access-2x8r8\") pod \"loft-6fcc8d5546-7fqd8\" (UID: \"b38723c2-29a2-4f5c-afbc-99a68c6cdc22\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806319 859533 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b477933-6543-4e47-8c33-af20a95c2f75-webhook-cert\") pod \"ingress-nginx-controller-5c4b454d48-5kf9t\" (UID: \"0b477933-6543-4e47-8c33-af20a95c2f75\") "
Feb 04 06:29:33 delightful k3s[859533]: I0204 06:29:33.806352 859533 reconciler.go:157] "Reconciler: start to sync state"
Feb 04 06:29:34 delightful k3s[859533]: time="2022-02-04T06:29:34+01:00" level=info msg="Stopped tunnel to 127.0.0.1:6443"
Feb 04 06:29:34 delightful k3s[859533]: time="2022-02-04T06:29:34+01:00" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
Feb 04 06:29:34 delightful k3s[859533]: time="2022-02-04T06:29:34+01:00" level=info msg="Connecting to proxy" url="wss://192.168.100.2:6443/v1-k3s/connect"
Feb 04 06:29:34 delightful k3s[859533]: time="2022-02-04T06:29:34+01:00" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Feb 04 06:29:34 delightful k3s[859533]: time="2022-02-04T06:29:34+01:00" level=info msg="Handling backend connection request [delightful]"
Feb 04 06:29:34 delightful k3s[859533]: I0204 06:29:34.891736 859533 request.go:665] Waited for 1.190267236s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6443/api/v1/namespaces/prometheus/secrets?fieldSelector=metadata.name%3Dkube-prometheus-stack-admission&limit=500&resourceVersion=0
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910787 859533 projected.go:268] Couldn't get secret prometheus/prometheus-kube-prometheus-stack-prometheus-tls-assets-0: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910802 859533 secret.go:195] Couldn't get secret kilo/peer-validation-webhook-tls: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910806 859533 configmap.go:200] Couldn't get configMap kube-system/coredns-custom: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910823 859533 secret.go:195] Couldn't get secret prometheus/prometheus-kube-prometheus-stack-prometheus-web-config: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910833 859533 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910802 859533 secret.go:195] Couldn't get secret vpn/ssh: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910848 859533 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910809 859533 projected.go:199] Error preparing data for projected volume tls-assets for pod prometheus/prometheus-kube-prometheus-stack-prometheus-0: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910866 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/8380b1f6-d9c0-4370-a903-f9c4f4144525-tls podName:8380b1f6-d9c0-4370-a903-f9c4f4144525 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.410845732 +0100 CET m=+10.968985355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls" (UniqueName: "kubernetes.io/secret/8380b1f6-d9c0-4370-a903-f9c4f4144525-tls") pod "peer-validation-server-746bcfbb97-nqgmt" (UID: "8380b1f6-d9c0-4370-a903-f9c4f4144525") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910811 859533 projected.go:268] Couldn't get secret prometheus/alertmanager-kube-prometheus-stack-alertmanager-tls-assets-0: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910886 859533 projected.go:199] Error preparing data for projected volume tls-assets for pod prometheus/alertmanager-kube-prometheus-stack-alertmanager-0: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910893 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/0b477933-6543-4e47-8c33-af20a95c2f75-webhook-cert podName:0b477933-6543-4e47-8c33-af20a95c2f75 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.410873022 +0100 CET m=+10.969012645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/0b477933-6543-4e47-8c33-af20a95c2f75-webhook-cert") pod "ingress-nginx-controller-5c4b454d48-5kf9t" (UID: "0b477933-6543-4e47-8c33-af20a95c2f75") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910903 859533 secret.go:195] Couldn't get secret prometheus/alertmanager-kube-prometheus-stack-alertmanager-generated: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910914 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/35bb2e01-dcd2-47f7-a405-c434e09892bd-web-config podName:35bb2e01-dcd2-47f7-a405-c434e09892bd nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.410901112 +0100 CET m=+10.969040735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/35bb2e01-dcd2-47f7-a405-c434e09892bd-web-config") pod "prometheus-kube-prometheus-stack-prometheus-0" (UID: "35bb2e01-dcd2-47f7-a405-c434e09892bd") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910917 859533 configmap.go:200] Couldn't get configMap prometheus/prometheus-kube-prometheus-stack-prometheus-rulefiles-0: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910921 859533 secret.go:195] Couldn't get secret vpn/ssh: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910929 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-custom-config-volume podName:ce4eee6d-0702-44c4-97b7-01b1fe847ee5 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.410919102 +0100 CET m=+10.969058725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "custom-config-volume" (UniqueName: "kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-custom-config-volume") pod "coredns-96cc4f57d-n2l8m" (UID: "ce4eee6d-0702-44c4-97b7-01b1fe847ee5") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910927 859533 secret.go:195] Couldn't get secret prometheus/prometheus-kube-prometheus-stack-prometheus: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910940 859533 secret.go:195] Couldn't get secret vpn/ssh: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910947 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/94040b94-6377-4e6d-ae07-3864c58580a8-ssh podName:94040b94-6377-4e6d-ae07-3864c58580a8 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.410935622 +0100 CET m=+10.969075245 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ssh" (UniqueName: "kubernetes.io/secret/94040b94-6377-4e6d-ae07-3864c58580a8-ssh") pod "magic-vpn-576c7c8449-2ttp4" (UID: "94040b94-6377-4e6d-ae07-3864c58580a8") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910952 859533 configmap.go:200] Couldn't get configMap prometheus/kube-prometheus-stack-grafana-config-dashboards: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910952 859533 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910970 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-config-volume podName:ce4eee6d-0702-44c4-97b7-01b1fe847ee5 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.410957692 +0100 CET m=+10.969097365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-config-volume") pod "coredns-96cc4f57d-n2l8m" (UID: "ce4eee6d-0702-44c4-97b7-01b1fe847ee5") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910974 859533 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910944 859533 configmap.go:200] Couldn't get configMap prometheus/kube-prometheus-stack-grafana: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.910991 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/35bb2e01-dcd2-47f7-a405-c434e09892bd-tls-assets podName:35bb2e01-dcd2-47f7-a405-c434e09892bd nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.410978452 +0100 CET m=+10.969118165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/35bb2e01-dcd2-47f7-a405-c434e09892bd-tls-assets") pod "prometheus-kube-prometheus-stack-prometheus-0" (UID: "35bb2e01-dcd2-47f7-a405-c434e09892bd") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.911011 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/318b1514-76f4-482a-bd40-da3cd7f60338-tls-assets podName:318b1514-76f4-482a-bd40-da3cd7f60338 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.410998992 +0100 CET m=+10.969138665 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/318b1514-76f4-482a-bd40-da3cd7f60338-tls-assets") pod "alertmanager-kube-prometheus-stack-alertmanager-0" (UID: "318b1514-76f4-482a-bd40-da3cd7f60338") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.911031 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d-webhook-cert podName:90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.411016572 +0100 CET m=+10.969156205 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d-webhook-cert") pod "ingress-nginx-controller-5c4b454d48-9hnmx" (UID: "90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.911050 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/318b1514-76f4-482a-bd40-da3cd7f60338-config-volume podName:318b1514-76f4-482a-bd40-da3cd7f60338 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.411041432 +0100 CET m=+10.969181055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/318b1514-76f4-482a-bd40-da3cd7f60338-config-volume") pod "alertmanager-kube-prometheus-stack-alertmanager-0" (UID: "318b1514-76f4-482a-bd40-da3cd7f60338") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.911069 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/35bb2e01-dcd2-47f7-a405-c434e09892bd-prometheus-kube-prometheus-stack-prometheus-rulefiles-0 podName:35bb2e01-dcd2-47f7-a405-c434e09892bd nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.411056942 +0100 CET m=+10.969196565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-kube-prometheus-stack-prometheus-rulefiles-0" (UniqueName: "kubernetes.io/configmap/35bb2e01-dcd2-47f7-a405-c434e09892bd-prometheus-kube-prometheus-stack-prometheus-rulefiles-0") pod "prometheus-kube-prometheus-stack-prometheus-0" (UID: "35bb2e01-dcd2-47f7-a405-c434e09892bd") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.911085 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/07b01526-fd85-4e46-afbb-af16611cc427-ssh podName:07b01526-fd85-4e46-afbb-af16611cc427 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.411076582 +0100 CET m=+10.969216205 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ssh" (UniqueName: "kubernetes.io/secret/07b01526-fd85-4e46-afbb-af16611cc427-ssh") pod "magic-vpn-576c7c8449-tnxkw" (UID: "07b01526-fd85-4e46-afbb-af16611cc427") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.911100 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/35bb2e01-dcd2-47f7-a405-c434e09892bd-config podName:35bb2e01-dcd2-47f7-a405-c434e09892bd nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.411092002 +0100 CET m=+10.969231615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/35bb2e01-dcd2-47f7-a405-c434e09892bd-config") pod "prometheus-kube-prometheus-stack-prometheus-0" (UID: "35bb2e01-dcd2-47f7-a405-c434e09892bd") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.911116 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-sc-dashboard-provider podName:b9cc3efc-dd26-4c40-bff5-12142a0bbd5e nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.411106272 +0100 CET m=+10.969245885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "sc-dashboard-provider" (UniqueName: "kubernetes.io/configmap/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-sc-dashboard-provider") pod "kube-prometheus-stack-grafana-5b6ddfdbbf-xcrzj" (UID: "b9cc3efc-dd26-4c40-bff5-12142a0bbd5e") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.911130 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/ea1cf136-0b2a-4d59-9cf6-6ae588847430-ssh podName:ea1cf136-0b2a-4d59-9cf6-6ae588847430 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.411120822 +0100 CET m=+10.969260445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ssh" (UniqueName: "kubernetes.io/secret/ea1cf136-0b2a-4d59-9cf6-6ae588847430-ssh") pod "magic-vpn-576c7c8449-nffn4" (UID: "ea1cf136-0b2a-4d59-9cf6-6ae588847430") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.911147 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/61bbc4cd-a62f-486f-a77e-41e046831de1-webhook-cert podName:61bbc4cd-a62f-486f-a77e-41e046831de1 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.411134652 +0100 CET m=+10.969274265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/61bbc4cd-a62f-486f-a77e-41e046831de1-webhook-cert") pod "ingress-nginx-controller-5c4b454d48-xgld5" (UID: "61bbc4cd-a62f-486f-a77e-41e046831de1") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.911167 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-config podName:b9cc3efc-dd26-4c40-bff5-12142a0bbd5e nodeName:}" failed. No retries permitted until 2022-02-04 06:29:35.411153232 +0100 CET m=+10.969292875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-config") pod "kube-prometheus-stack-grafana-5b6ddfdbbf-xcrzj" (UID: "b9cc3efc-dd26-4c40-bff5-12142a0bbd5e") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:34 delightful k3s[859533]: W0204 06:29:34.919939 859533 dispatcher.go:150] Failed calling webhook, failing closed quota.loft.sh: failed calling webhook "quota.loft.sh": Post "https://loft-agent-webhook.loft.svc:443/quota?timeout=10s": dial tcp 10.43.172.122:443: connect: connection refused
Feb 04 06:29:34 delightful k3s[859533]: E0204 06:29:34.920441 859533 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"magic-vpn-576c7c8449-2ttp4.16d07e28272687d6", GenerateName:"", Namespace:"vpn", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"vpn", Name:"magic-vpn-576c7c8449-2ttp4", UID:"94040b94-6377-4e6d-ae07-3864c58580a8", APIVersion:"v1", ResourceVersion:"9829541", FieldPath:""}, Reason:"FailedMount", Message:"MountVolume.SetUp failed for volume \"ssh\" : failed to sync secret cache: timed out waiting for the condition", Source:v1.EventSource{Component:"kubelet", Host:"delightful"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0774c4fb64a9bd6, ext:10468998825, loc:(*time.Location)(0x7e27960)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0774c4fb64a9bd6, ext:10468998825, loc:(*time.Location)(0x7e27960)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Internal error occurred: failed calling webhook "quota.loft.sh": Post "https://loft-agent-webhook.loft.svc:443/quota?timeout=10s": dial tcp 10.43.172.122:443: connect: connection refused' (will not retry!)
Feb 04 06:29:35 delightful k3s[859533]: W0204 06:29:35.161221 859533 dispatcher.go:150] Failed calling webhook, failing closed quota.loft.sh: failed calling webhook "quota.loft.sh": Post "https://loft-agent-webhook.loft.svc:443/quota?timeout=10s": dial tcp 10.43.172.122:443: connect: connection refused
Feb 04 06:29:35 delightful k3s[859533]: E0204 06:29:35.161753 859533 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"magic-vpn-576c7c8449-tnxkw.16d07e282727de92", GenerateName:"", Namespace:"vpn", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"vpn", Name:"magic-vpn-576c7c8449-tnxkw", UID:"07b01526-fd85-4e46-afbb-af16611cc427", APIVersion:"v1", ResourceVersion:"9828367", FieldPath:""}, Reason:"FailedMount", Message:"MountVolume.SetUp failed for volume \"ssh\" : failed to sync secret cache: timed out waiting for the condition", Source:v1.EventSource{Component:"kubelet", Host:"delightful"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0774c4fb64bf292, ext:10469086585, loc:(*time.Location)(0x7e27960)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0774c4fb64bf292, ext:10469086585, loc:(*time.Location)(0x7e27960)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Internal error occurred: failed calling webhook "quota.loft.sh": Post "https://loft-agent-webhook.loft.svc:443/quota?timeout=10s": dial tcp 10.43.172.122:443: connect: connection refused' (will not retry!)
Feb 04 06:29:35 delightful k3s[859533]: W0204 06:29:35.761363 859533 dispatcher.go:150] Failed calling webhook, failing closed quota.loft.sh: failed calling webhook "quota.loft.sh": Post "https://loft-agent-webhook.loft.svc:443/quota?timeout=10s": dial tcp 10.43.172.122:443: connect: connection refused
Feb 04 06:29:35 delightful k3s[859533]: E0204 06:29:35.761821 859533 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"magic-vpn-576c7c8449-nffn4.16d07e2827283e78", GenerateName:"", Namespace:"vpn", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"vpn", Name:"magic-vpn-576c7c8449-nffn4", UID:"ea1cf136-0b2a-4d59-9cf6-6ae588847430", APIVersion:"v1", ResourceVersion:"9828420", FieldPath:""}, Reason:"FailedMount", Message:"MountVolume.SetUp failed for volume \"ssh\" : failed to sync secret cache: timed out waiting for the condition", Source:v1.EventSource{Component:"kubelet", Host:"delightful"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0774c4fb64c5278, ext:10469111145, loc:(*time.Location)(0x7e27960)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0774c4fb64c5278, ext:10469111145, loc:(*time.Location)(0x7e27960)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Internal error occurred: failed calling webhook "quota.loft.sh": Post "https://loft-agent-webhook.loft.svc:443/quota?timeout=10s": dial tcp 10.43.172.122:443: connect: connection refused' (will not retry!)
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.433985 859533 secret.go:195] Couldn't get secret kilo/peer-validation-webhook-tls: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.433995 859533 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434007 859533 configmap.go:200] Couldn't get configMap prometheus/prometheus-kube-prometheus-stack-prometheus-rulefiles-0: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434055 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/61bbc4cd-a62f-486f-a77e-41e046831de1-webhook-cert podName:61bbc4cd-a62f-486f-a77e-41e046831de1 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.434033417 +0100 CET m=+12.992173040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/61bbc4cd-a62f-486f-a77e-41e046831de1-webhook-cert") pod "ingress-nginx-controller-5c4b454d48-xgld5" (UID: "61bbc4cd-a62f-486f-a77e-41e046831de1") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434077 859533 projected.go:268] Couldn't get secret prometheus/alertmanager-kube-prometheus-stack-alertmanager-tls-assets-0: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434086 859533 projected.go:199] Error preparing data for projected volume tls-assets for pod prometheus/alertmanager-kube-prometheus-stack-alertmanager-0: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434087 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/8380b1f6-d9c0-4370-a903-f9c4f4144525-tls podName:8380b1f6-d9c0-4370-a903-f9c4f4144525 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.434065967 +0100 CET m=+12.992205590 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls" (UniqueName: "kubernetes.io/secret/8380b1f6-d9c0-4370-a903-f9c4f4144525-tls") pod "peer-validation-server-746bcfbb97-nqgmt" (UID: "8380b1f6-d9c0-4370-a903-f9c4f4144525") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434104 859533 secret.go:195] Couldn't get secret prometheus/alertmanager-kube-prometheus-stack-alertmanager-generated: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434113 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/35bb2e01-dcd2-47f7-a405-c434e09892bd-prometheus-kube-prometheus-stack-prometheus-rulefiles-0 podName:35bb2e01-dcd2-47f7-a405-c434e09892bd nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.434097997 +0100 CET m=+12.992237630 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-kube-prometheus-stack-prometheus-rulefiles-0" (UniqueName: "kubernetes.io/configmap/35bb2e01-dcd2-47f7-a405-c434e09892bd-prometheus-kube-prometheus-stack-prometheus-rulefiles-0") pod "prometheus-kube-prometheus-stack-prometheus-0" (UID: "35bb2e01-dcd2-47f7-a405-c434e09892bd") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434142 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/318b1514-76f4-482a-bd40-da3cd7f60338-tls-assets podName:318b1514-76f4-482a-bd40-da3cd7f60338 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.434123347 +0100 CET m=+12.992262970 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/318b1514-76f4-482a-bd40-da3cd7f60338-tls-assets") pod "alertmanager-kube-prometheus-stack-alertmanager-0" (UID: "318b1514-76f4-482a-bd40-da3cd7f60338") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434158 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/318b1514-76f4-482a-bd40-da3cd7f60338-config-volume podName:318b1514-76f4-482a-bd40-da3cd7f60338 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.434148317 +0100 CET m=+12.992287940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/318b1514-76f4-482a-bd40-da3cd7f60338-config-volume") pod "alertmanager-kube-prometheus-stack-alertmanager-0" (UID: "318b1514-76f4-482a-bd40-da3cd7f60338") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434177 859533 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434213 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/0b477933-6543-4e47-8c33-af20a95c2f75-webhook-cert podName:0b477933-6543-4e47-8c33-af20a95c2f75 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.434203607 +0100 CET m=+12.992343230 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/0b477933-6543-4e47-8c33-af20a95c2f75-webhook-cert") pod "ingress-nginx-controller-5c4b454d48-5kf9t" (UID: "0b477933-6543-4e47-8c33-af20a95c2f75") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434217 859533 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434228 859533 configmap.go:200] Couldn't get configMap prometheus/kube-prometheus-stack-grafana: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434256 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-config-volume podName:ce4eee6d-0702-44c4-97b7-01b1fe847ee5 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.434243747 +0100 CET m=+12.992383400 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-config-volume") pod "coredns-96cc4f57d-n2l8m" (UID: "ce4eee6d-0702-44c4-97b7-01b1fe847ee5") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434263 859533 configmap.go:200] Couldn't get configMap kube-system/coredns-custom: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434270 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-config podName:b9cc3efc-dd26-4c40-bff5-12142a0bbd5e nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.434262217 +0100 CET m=+12.992401820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-config") pod "kube-prometheus-stack-grafana-5b6ddfdbbf-xcrzj" (UID: "b9cc3efc-dd26-4c40-bff5-12142a0bbd5e") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434295 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-custom-config-volume podName:ce4eee6d-0702-44c4-97b7-01b1fe847ee5 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.434285007 +0100 CET m=+12.992424630 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "custom-config-volume" (UniqueName: "kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-custom-config-volume") pod "coredns-96cc4f57d-n2l8m" (UID: "ce4eee6d-0702-44c4-97b7-01b1fe847ee5") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434314 859533 secret.go:195] Couldn't get secret prometheus/prometheus-kube-prometheus-stack-prometheus: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.434343 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/35bb2e01-dcd2-47f7-a405-c434e09892bd-config podName:35bb2e01-dcd2-47f7-a405-c434e09892bd nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.434334477 +0100 CET m=+12.992474100 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/35bb2e01-dcd2-47f7-a405-c434e09892bd-config") pod "prometheus-kube-prometheus-stack-prometheus-0" (UID: "35bb2e01-dcd2-47f7-a405-c434e09892bd") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.435080 859533 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.435082 859533 configmap.go:200] Couldn't get configMap prometheus/kube-prometheus-stack-grafana-config-dashboards: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.435091 859533 secret.go:195] Couldn't get secret prometheus/prometheus-kube-prometheus-stack-prometheus-web-config: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.435102 859533 projected.go:268] Couldn't get secret prometheus/prometheus-kube-prometheus-stack-prometheus-tls-assets-0: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.435111 859533 projected.go:199] Error preparing data for projected volume tls-assets for pod prometheus/prometheus-kube-prometheus-stack-prometheus-0: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.435125 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d-webhook-cert podName:90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.435111819 +0100 CET m=+12.993251482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d-webhook-cert") pod "ingress-nginx-controller-5c4b454d48-9hnmx" (UID: "90d95ab3-ca18-4f1f-85d4-fe5bc93abb3d") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.435150 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/35bb2e01-dcd2-47f7-a405-c434e09892bd-web-config podName:35bb2e01-dcd2-47f7-a405-c434e09892bd nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.435132379 +0100 CET m=+12.993272042 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/35bb2e01-dcd2-47f7-a405-c434e09892bd-web-config") pod "prometheus-kube-prometheus-stack-prometheus-0" (UID: "35bb2e01-dcd2-47f7-a405-c434e09892bd") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.435167 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-sc-dashboard-provider podName:b9cc3efc-dd26-4c40-bff5-12142a0bbd5e nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.435156029 +0100 CET m=+12.993295712 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "sc-dashboard-provider" (UniqueName: "kubernetes.io/configmap/b9cc3efc-dd26-4c40-bff5-12142a0bbd5e-sc-dashboard-provider") pod "kube-prometheus-stack-grafana-5b6ddfdbbf-xcrzj" (UID: "b9cc3efc-dd26-4c40-bff5-12142a0bbd5e") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:36 delightful k3s[859533]: E0204 06:29:36.435181 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/35bb2e01-dcd2-47f7-a405-c434e09892bd-tls-assets podName:35bb2e01-dcd2-47f7-a405-c434e09892bd nodeName:}" failed. No retries permitted until 2022-02-04 06:29:37.435172519 +0100 CET m=+12.993312202 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/35bb2e01-dcd2-47f7-a405-c434e09892bd-tls-assets") pod "prometheus-kube-prometheus-stack-prometheus-0" (UID: "35bb2e01-dcd2-47f7-a405-c434e09892bd") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.451663 859533 configmap.go:200] Couldn't get configMap kube-system/coredns-custom: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.451743 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-custom-config-volume podName:ce4eee6d-0702-44c4-97b7-01b1fe847ee5 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:40.451721505 +0100 CET m=+16.009861128 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "custom-config-volume" (UniqueName: "kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-custom-config-volume") pod "coredns-96cc4f57d-n2l8m" (UID: "ce4eee6d-0702-44c4-97b7-01b1fe847ee5") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.452733 859533 configmap.go:200] Couldn't get configMap prometheus/prometheus-kube-prometheus-stack-prometheus-rulefiles-0: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.452745 859533 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.452752 859533 projected.go:268] Couldn't get secret prometheus/alertmanager-kube-prometheus-stack-alertmanager-tls-assets-0: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.452766 859533 projected.go:199] Error preparing data for projected volume tls-assets for pod prometheus/alertmanager-kube-prometheus-stack-alertmanager-0: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.452769 859533 secret.go:195] Couldn't get secret prometheus/alertmanager-kube-prometheus-stack-alertmanager-generated: failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.452800 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/35bb2e01-dcd2-47f7-a405-c434e09892bd-prometheus-kube-prometheus-stack-prometheus-rulefiles-0 podName:35bb2e01-dcd2-47f7-a405-c434e09892bd nodeName:}" failed. No retries permitted until 2022-02-04 06:29:40.452780318 +0100 CET m=+16.010919941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-kube-prometheus-stack-prometheus-rulefiles-0" (UniqueName: "kubernetes.io/configmap/35bb2e01-dcd2-47f7-a405-c434e09892bd-prometheus-kube-prometheus-stack-prometheus-rulefiles-0") pod "prometheus-kube-prometheus-stack-prometheus-0" (UID: "35bb2e01-dcd2-47f7-a405-c434e09892bd") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.452835 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/318b1514-76f4-482a-bd40-da3cd7f60338-config-volume podName:318b1514-76f4-482a-bd40-da3cd7f60338 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:40.452810608 +0100 CET m=+16.010950231 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/318b1514-76f4-482a-bd40-da3cd7f60338-config-volume") pod "alertmanager-kube-prometheus-stack-alertmanager-0" (UID: "318b1514-76f4-482a-bd40-da3cd7f60338") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.452859 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/318b1514-76f4-482a-bd40-da3cd7f60338-tls-assets podName:318b1514-76f4-482a-bd40-da3cd7f60338 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:40.452847678 +0100 CET m=+16.010987301 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/318b1514-76f4-482a-bd40-da3cd7f60338-tls-assets") pod "alertmanager-kube-prometheus-stack-alertmanager-0" (UID: "318b1514-76f4-482a-bd40-da3cd7f60338") : failed to sync secret cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.452877 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-config-volume podName:ce4eee6d-0702-44c4-97b7-01b1fe847ee5 nodeName:}" failed. No retries permitted until 2022-02-04 06:29:40.452867838 +0100 CET m=+16.011007461 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ce4eee6d-0702-44c4-97b7-01b1fe847ee5-config-volume") pod "coredns-96cc4f57d-n2l8m" (UID: "ce4eee6d-0702-44c4-97b7-01b1fe847ee5") : failed to sync configmap cache: timed out waiting for the condition
Feb 04 06:29:38 delightful k3s[859533]: E0204 06:29:38.563016 859533 available_controller.go:524] v1.cluster.loft.sh failed with: failing or missing response from https://10.43.20.221:443/apis/cluster.loft.sh/v1: Get "https://10.43.20.221:443/apis/cluster.loft.sh/v1": dial tcp 10.43.20.221:443: connect: connection refused
Feb 04 06:29:39 delightful k3s[859533]: I0204 06:29:39.391380 859533 controller.go:611] quota admission added evaluator for: namespaces
Feb 04 06:29:40 delightful k3s[859533]: I0204 06:29:40.304888 859533 scope.go:110] "RemoveContainer" containerID="4abcb15e757a876c14843dc22688b953d46e80ab1994cc89b24d583128c16789"
Feb 04 06:29:40 delightful k3s[859533]: I0204 06:29:40.505158 859533 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
Feb 04 06:29:40 delightful k3s[859533]: I0204 06:29:40.903114 859533 scope.go:110] "RemoveContainer" containerID="c6b41cf8d7e636592f429d86ab29dab77e06d5d23337365eaf3147c297a41d1e"
Feb 04 06:29:40 delightful k3s[859533]: I0204 06:29:40.903782 859533 scope.go:110] "RemoveContainer" containerID="41cec39ba2f2cf0e931a2348f02c34baf32914c75fe977cf648677a60261d163"
Feb 04 06:29:40 delightful k3s[859533]: I0204 06:29:40.903794 859533 scope.go:110] "RemoveContainer" containerID="1e64add43c80d57fa572134fe8bd9d636e441c05de6b47d3ab222336057e2a7a"
Feb 04 06:29:41 delightful k3s[859533]: E0204 06:29:41.095571 859533 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podb9cc3efc-dd26-4c40-bff5-12142a0bbd5e/02d6f8dea84832408749ff7850eaecd7c60e75cf9ce0318ba357de5cb0c57edc\": RecentStats: unable to find data in memory cache]"
Feb 04 06:29:42 delightful k3s[859533]: I0204 06:29:42.103919 859533 scope.go:110] "RemoveContainer" containerID="465350eca47e24145125658beaad7d692e7a031614a7236fb7f04d66aef21154"
Feb 04 06:29:42 delightful k3s[859533]: I0204 06:29:42.714371 859533 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
Feb 04 06:29:42 delightful k3s[859533]: I0204 06:29:42.768527 859533 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
Feb 04 06:29:45 delightful k3s[859533]: I0204 06:29:45.091633 859533 request.go:665] Waited for 11.180656779s due to client-side throttling, not priority and fairness, request: POST:https://127.0.0.1:6443/api/v1/namespaces/kube-system/serviceaccounts/kilo/token
Feb 04 06:29:45 delightful k3s[859533]: I0204 06:29:45.402262 859533 scope.go:110] "RemoveContainer" containerID="75367857a11740e260040ce4ad3f4b5160367161ab19e613940f9bfe4760ec73"
Feb 04 06:29:46 delightful k3s[859533]: I0204 06:29:46.462093 859533 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler
Feb 04 06:29:47 delightful k3s[859533]: I0204 06:29:47.252325 859533 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
Feb 04 06:29:47 delightful k3s[859533]: I0204 06:29:47.252399 859533 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="delightful_84f07a69-7b45-40a9-900b-2746049ec2c3 became leader"
Feb 04 06:29:49 delightful k3s[859533]: I0204 06:29:49.146898 859533 leaderelection.go:258] successfully acquired lease kube-system/cloud-controller-manager
Feb 04 06:29:49 delightful k3s[859533]: I0204 06:29:49.146963 859533 event.go:291] "Event occurred" object="kube-system/cloud-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="delightful_81c496d6-5aa8-46a1-99e5-c7534e136fe8 became leader"
Feb 04 06:29:49 delightful k3s[859533]: I0204 06:29:49.903530 859533 node_controller.go:115] Sending events to api server.
Feb 04 06:29:49 delightful k3s[859533]: I0204 06:29:49.903603 859533 controllermanager.go:285] Started "cloud-node"
Feb 04 06:29:49 delightful k3s[859533]: I0204 06:29:49.903670 859533 node_controller.go:154] Waiting for informer caches to sync
Feb 04 06:29:49 delightful k3s[859533]: I0204 06:29:49.904593 859533 node_lifecycle_controller.go:76] Sending events to api server
Feb 04 06:29:49 delightful k3s[859533]: I0204 06:29:49.904616 859533 controllermanager.go:285] Started "cloud-node-lifecycle"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.260023 859533 shared_informer.go:240] Waiting for caches to sync for tokens
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.264710 859533 controllermanager.go:577] Started "disruption"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.264808 859533 disruption.go:363] Starting disruption controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.264833 859533 shared_informer.go:240] Waiting for caches to sync for disruption
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.267118 859533 controllermanager.go:577] Started "statefulset"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.267194 859533 stateful_set.go:148] Starting stateful set controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.267207 859533 shared_informer.go:240] Waiting for caches to sync for stateful set
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.269312 859533 controllermanager.go:577] Started "csrcleaner"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.269415 859533 cleaner.go:82] Starting CSR cleaner controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.272213 859533 controllermanager.go:577] Started "clusterrole-aggregation"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.272342 859533 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.272357 859533 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.274665 859533 controllermanager.go:577] Started "root-ca-cert-publisher"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.274773 859533 publisher.go:107] Starting root CA certificate configmap publisher
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.274783 859533 shared_informer.go:240] Waiting for caches to sync for crt configmap
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.276964 859533 controllermanager.go:577] Started "endpoint"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.277072 859533 endpoints_controller.go:195] Starting endpoint controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.277085 859533 shared_informer.go:240] Waiting for caches to sync for endpoint
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.282278 859533 garbagecollector.go:142] Starting garbage collector controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.282290 859533 shared_informer.go:240] Waiting for caches to sync for garbage collector
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.282297 859533 controllermanager.go:577] Started "garbagecollector"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.282318 859533 graph_builder.go:289] GraphBuilder running
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.284378 859533 controllermanager.go:577] Started "job"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.284395 859533 job_controller.go:172] Starting job controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.284409 859533 shared_informer.go:240] Waiting for caches to sync for job
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.286333 859533 controllermanager.go:577] Started "ephemeral-volume"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.286441 859533 controller.go:170] Starting ephemeral volume controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.286454 859533 shared_informer.go:240] Waiting for caches to sync for ephemeral
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.288242 859533 controllermanager.go:577] Started "ttl"
Feb 04 06:29:50 delightful k3s[859533]: W0204 06:29:50.288255 859533 controllermanager.go:556] "cloud-node-lifecycle" is disabled
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.288356 859533 ttl_controller.go:121] Starting TTL controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.288366 859533 shared_informer.go:240] Waiting for caches to sync for TTL
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.290306 859533 controllermanager.go:577] Started "endpointslicemirroring"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.290400 859533 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.290408 859533 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.292640 859533 controllermanager.go:577] Started "csrapproving"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.292788 859533 certificate_controller.go:118] Starting certificate controller "csrapproving"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.292813 859533 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.301171 859533 controllermanager.go:577] Started "pv-protection"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.301274 859533 pv_protection_controller.go:83] Starting PV protection controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.301289 859533 shared_informer.go:240] Waiting for caches to sync for PV protection
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.303350 859533 controllermanager.go:577] Started "replicaset"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.303458 859533 replica_set.go:186] Starting replicaset controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.303469 859533 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321418 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321450 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321473 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for orders.acme.cert-manager.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321489 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for servicemonitors.monitoring.coreos.com
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321506 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for virtualclusters.storage.loft.sh
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321523 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for issuers.cert-manager.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321547 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321573 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for engines.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321592 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicas.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321610 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321638 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for backingimagemanagers.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321678 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321701 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321740 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321764 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for certificates.cert-manager.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321786 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for prometheusrules.monitoring.coreos.com
Feb 04 06:29:50 delightful k3s[859533]: W0204 06:29:50.321800 859533 shared_informer.go:494] resyncPeriod 19h20m33.817338232s is smaller than resyncCheckPeriod 23h46m37.82575613s and the informer has already started. Changing it to 23h46m37.82575613s
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321965 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.321986 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for alertmanagers.monitoring.coreos.com
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322008 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for policyreports.wgpolicyk8s.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322027 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322047 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322071 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for thanosrulers.monitoring.coreos.com
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322090 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for settings.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322113 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for backingimagedatasources.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322137 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for backuptargets.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322158 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for instancemanagers.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322178 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322199 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for challenges.acme.cert-manager.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322218 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322236 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podmonitors.monitoring.coreos.com
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322257 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for engineimages.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: W0204 06:29:50.322267 859533 shared_informer.go:494] resyncPeriod 17h35m3.885818385s is smaller than resyncCheckPeriod 23h46m37.82575613s and the informer has already started. Changing it to 23h46m37.82575613s
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322345 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322372 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322399 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322418 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322442 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for sharedsecrets.storage.loft.sh
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322461 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for templateinstances.config.kiosk.sh
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322483 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for backupvolumes.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322505 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for volumes.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322531 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322552 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for certificaterequests.cert-manager.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322577 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for prometheuses.monitoring.coreos.com
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322598 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322619 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322641 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322675 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for nodes.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322697 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for sharemanagers.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322730 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322756 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322789 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for backups.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322813 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for backingimages.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322863 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for probes.monitoring.coreos.com
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322881 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322903 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322922 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for alertmanagerconfigs.monitoring.coreos.com
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322944 859533 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for recurringjobs.longhorn.io
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322956 859533 controllermanager.go:577] Started "resourcequota"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322972 859533 resource_quota_controller.go:273] Starting resource quota controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.322988 859533 shared_informer.go:240] Waiting for caches to sync for resource quota
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.323008 859533 resource_quota_monitor.go:304] QuotaMonitor running
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.331419 859533 controllermanager.go:577] Started "horizontalpodautoscaling"
Feb 04 06:29:50 delightful k3s[859533]: W0204 06:29:50.331437 859533 controllermanager.go:556] "tokencleaner" is disabled
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.331541 859533 horizontal.go:169] Starting HPA controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.331553 859533 shared_informer.go:240] Waiting for caches to sync for HPA
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.333711 859533 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.333726 859533 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.333758 859533 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.333982 859533 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.333999 859533 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.334031 859533 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.334292 859533 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.334306 859533 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.334324 859533 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.334543 859533 controllermanager.go:577] Started "csrsigning"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.334652 859533 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.334670 859533 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.334703 859533 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.336497 859533 node_lifecycle_controller.go:377] Sending events to api server.
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.336615 859533 taint_manager.go:163] "Sending events to api server"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.336674 859533 node_lifecycle_controller.go:505] Controller will reconcile labels.
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.336693 859533 controllermanager.go:577] Started "nodelifecycle"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.336777 859533 node_lifecycle_controller.go:539] Starting node controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.336791 859533 shared_informer.go:240] Waiting for caches to sync for taint
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.338695 859533 controllermanager.go:577] Started "persistentvolume-binder"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.338799 859533 pv_controller_base.go:308] Starting persistent volume controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.338807 859533 shared_informer.go:240] Waiting for caches to sync for persistent volume
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.340862 859533 controllermanager.go:577] Started "attachdetach"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.340900 859533 attach_detach_controller.go:328] Starting attach detach controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.340908 859533 shared_informer.go:240] Waiting for caches to sync for attach detach
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.360935 859533 shared_informer.go:247] Caches are synced for tokens
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.363406 859533 controllermanager.go:577] Started "ttl-after-finished"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.363446 859533 ttlafterfinished_controller.go:109] Starting TTL after finished controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.363453 859533 shared_informer.go:240] Waiting for caches to sync for TTL after finished
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.412851 859533 controllermanager.go:577] Started "daemonset"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.412897 859533 daemon_controller.go:284] Starting daemon sets controller
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.412904 859533 shared_informer.go:240] Waiting for caches to sync for daemon sets
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.463211 859533 controllermanager.go:577] Started "deployment"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.463258 859533 deployment_controller.go:153] "Starting controller" controller="deployment"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.463266 859533 shared_informer.go:240] Waiting for caches to sync for deployment
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.512456 859533 controllermanager.go:577] Started "cronjob"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.512524 859533 cronjob_controllerv2.go:126] "Starting cronjob controller v2"
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.512539 859533 shared_informer.go:240] Waiting for caches to sync for cronjob
Feb 04 06:29:50 delightful k3s[859533]: I0204 06:29:50.562278 859533 node_ipam_controller.go:91] Sending events to api server.
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.574913 859533 range_allocator.go:82] Sending events to api server.
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.575036 859533 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses.
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.575045 859533 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.575080 859533 controllermanager.go:577] Started "nodeipam"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.575150 859533 node_ipam_controller.go:154] Starting ipam controller
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.575164 859533 shared_informer.go:240] Waiting for caches to sync for node
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.577112 859533 controllermanager.go:577] Started "pvc-protection"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.577216 859533 pvc_protection_controller.go:110] "Starting PVC protection controller"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.577230 859533 shared_informer.go:240] Waiting for caches to sync for PVC protection
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.579063 859533 controllermanager.go:577] Started "podgc"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.579151 859533 gc_controller.go:89] Starting GC controller
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.579162 859533 shared_informer.go:240] Waiting for caches to sync for GC
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.595580 859533 controllermanager.go:577] Started "namespace"
Feb 04 06:30:00 delightful k3s[859533]: W0204 06:30:00.595597 859533 controllermanager.go:556] "bootstrapsigner" is disabled
Feb 04 06:30:00 delightful k3s[859533]: W0204 06:30:00.595603 859533 controllermanager.go:556] "service" is disabled
Feb 04 06:30:00 delightful k3s[859533]: W0204 06:30:00.595608 859533 controllermanager.go:556] "route" is disabled
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.595650 859533 namespace_controller.go:200] Starting namespace controller
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.595663 859533 shared_informer.go:240] Waiting for caches to sync for namespace
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.597776 859533 controllermanager.go:577] Started "persistentvolume-expander"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.597860 859533 expand_controller.go:327] Starting expand controller
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.597872 859533 shared_informer.go:240] Waiting for caches to sync for expand
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.599738 859533 controllermanager.go:577] Started "endpointslice"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.599836 859533 endpointslice_controller.go:257] Starting endpoint slice controller
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.599848 859533 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.601659 859533 controllermanager.go:577] Started "replicationcontroller"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.601677 859533 replica_set.go:186] Starting replicationcontroller controller
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.601688 859533 shared_informer.go:240] Waiting for caches to sync for ReplicationController
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.603570 859533 controllermanager.go:577] Started "serviceaccount"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.604378 859533 serviceaccounts_controller.go:117] Starting service account controller
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.604398 859533 shared_informer.go:240] Waiting for caches to sync for service account
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.606255 859533 shared_informer.go:240] Waiting for caches to sync for resource quota
Feb 04 06:30:00 delightful k3s[859533]: W0204 06:30:00.610112 859533 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capital" does not exist
Feb 04 06:30:00 delightful k3s[859533]: W0204 06:30:00.610138 859533 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="delightful" does not exist
Feb 04 06:30:00 delightful k3s[859533]: W0204 06:30:00.610395 859533 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="donkey" does not exist
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.620112 859533 job_controller.go:406] enqueueing job kube-system/helm-install-longhorn
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.620147 859533 job_controller.go:406] enqueueing job longhorn-system/daily-backups-27398820
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.620185 859533 job_controller.go:406] enqueueing job longhorn-system/hourly-backups-27398820
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.620206 859533 job_controller.go:406] enqueueing job longhorn-system/hourly-backups-27398880
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.620215 859533 job_controller.go:406] enqueueing job longhorn-system/regular-snapshots-27398829
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.620223 859533 job_controller.go:406] enqueueing job longhorn-system/regular-snapshots-27398833
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.620240 859533 job_controller.go:406] enqueueing job kilo/cert-gen
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.627480 859533 job_controller.go:406] enqueueing job longhorn-system/daily-backups-27398820
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.629899 859533 job_controller.go:406] enqueueing job longhorn-system/hourly-backups-27398820
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.630264 859533 job_controller.go:406] enqueueing job longhorn-system/regular-snapshots-27398829
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.631587 859533 shared_informer.go:247] Caches are synced for HPA
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.633769 859533 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.634046 859533 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.634324 859533 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.634738 859533 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.636758 859533 shared_informer.go:240] Waiting for caches to sync for garbage collector
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.636950 859533 shared_informer.go:247] Caches are synced for taint
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.636991 859533 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: west-eu::HEL1-DC6
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.637118 859533 taint_manager.go:187] "Starting NoExecuteTaintManager"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.637276 859533 event.go:291] "Event occurred" object="capital" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node capital event: Registered Node capital in Controller"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.638886 859533 shared_informer.go:247] Caches are synced for persistent volume
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.641308 859533 shared_informer.go:247] Caches are synced for attach detach
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.642998 859533 event.go:291] "Event occurred" object="delightful" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node delightful event: Registered Node delightful in Controller"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.643014 859533 event.go:291] "Event occurred" object="donkey" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node donkey event: Registered Node donkey in Controller"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.643309 859533 event.go:291] "Event occurred" object="longhorn-system/instance-manager-r-7f16c7e4" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod longhorn-system/instance-manager-r-7f16c7e4"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.643331 859533 event.go:291] "Event occurred" object="longhorn-system/instance-manager-e-1959e7fd" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod longhorn-system/instance-manager-e-1959e7fd"
Feb 04 06:30:00 delightful k3s[859533]: W0204 06:30:00.649054 859533 node_lifecycle_controller.go:1013] Missing timestamp for Node capital. Assuming now as a timestamp.
Feb 04 06:30:00 delightful k3s[859533]: W0204 06:30:00.649310 859533 node_lifecycle_controller.go:1013] Missing timestamp for Node delightful. Assuming now as a timestamp.
Feb 04 06:30:00 delightful k3s[859533]: W0204 06:30:00.649350 859533 node_lifecycle_controller.go:1013] Missing timestamp for Node donkey. Assuming now as a timestamp.
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.649371 859533 node_lifecycle_controller.go:1214] Controller detected that zone west-eu::HEL1-DC6 is now in state Normal.
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.649372 859533 event.go:291] "Event occurred" object="longhorn-system/instance-manager-r-8e812c86" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod longhorn-system/instance-manager-r-8e812c86"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.649389 859533 event.go:291] "Event occurred" object="longhorn-system/instance-manager-e-b7f4213c" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod longhorn-system/instance-manager-e-b7f4213c"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.656698 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.659089 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.664493 859533 shared_informer.go:247] Caches are synced for TTL after finished
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.667822 859533 shared_informer.go:247] Caches are synced for stateful set
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.673034 859533 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.675243 859533 shared_informer.go:247] Caches are synced for crt configmap
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.675251 859533 shared_informer.go:247] Caches are synced for node
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.675265 859533 range_allocator.go:172] Starting range CIDR allocator
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.675269 859533 shared_informer.go:240] Waiting for caches to sync for cidrallocator
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.675275 859533 shared_informer.go:247] Caches are synced for cidrallocator
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.677382 859533 shared_informer.go:247] Caches are synced for PVC protection
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.679515 859533 shared_informer.go:247] Caches are synced for GC
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.684795 859533 shared_informer.go:247] Caches are synced for job
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.686638 859533 shared_informer.go:247] Caches are synced for ephemeral
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.688410 859533 shared_informer.go:247] Caches are synced for TTL
Feb 04 06:30:00 delightful k3s[859533]: E0204 06:30:00.690747 859533 job_controller.go:1309] pods "regular-snapshots-27398833--1-nrcn2" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:00 delightful k3s[859533]: E0204 06:30:00.690766 859533 job_controller.go:441] Error syncing job: pods "regular-snapshots-27398833--1-nrcn2" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:00 delightful k3s[859533]: E0204 06:30:00.690748 859533 job_controller.go:1309] pods "hourly-backups-27398880--1-mm5hn" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.690790 859533 event.go:291] "Event occurred" object="longhorn-system/hourly-backups-27398880" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"hourly-backups-27398880--1-mm5hn\" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.690800 859533 event.go:291] "Event occurred" object="longhorn-system/regular-snapshots-27398833" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"regular-snapshots-27398833--1-nrcn2\" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.692877 859533 shared_informer.go:247] Caches are synced for certificate-csrapproving
Feb 04 06:30:00 delightful k3s[859533]: E0204 06:30:00.692891 859533 job_controller.go:441] Error syncing job: pods "hourly-backups-27398880--1-mm5hn" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.696355 859533 shared_informer.go:247] Caches are synced for namespace
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.698487 859533 shared_informer.go:247] Caches are synced for expand
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.701856 859533 shared_informer.go:247] Caches are synced for PV protection
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.704439 859533 shared_informer.go:247] Caches are synced for service account
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.712678 859533 shared_informer.go:247] Caches are synced for cronjob
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.712932 859533 shared_informer.go:247] Caches are synced for daemon sets
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.722430 859533 event.go:291] "Event occurred" object="longhorn-system/regular-snapshots" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="JobAlreadyActive" message="Not starting job because prior execution is running and concurrency policy is Forbid"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.722488 859533 event.go:291] "Event occurred" object="longhorn-system/hourly-backups" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="JobAlreadyActive" message="Not starting job because prior execution is running and concurrency policy is Forbid"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.757575 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-136ad782-227a-4338-80fd-69447548861e" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-a7de89ba-3294-461f-9794-905048108b35") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.757613 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-7c605cc7-3a1c-4441-9e26-b693d34bdeb9" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-7c605cc7-3a1c-4441-9e26-b693d34bdeb9") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.757638 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-b911963c-a39c-4359-92cf-a12ab25f50ab" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-b911963c-a39c-4359-92cf-a12ab25f50ab") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.757662 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-8b17bbfe-876b-450f-a7b5-3b486e5decc9" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-5626876f-8d04-4719-8a08-aa67baf582c4") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.757687 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-4e58c945-1cf8-46e6-b057-138e0c91c538" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-f7d3e6ef-33ba-46c1-8b08-f9ffbe09a292") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.757711 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-70e0ef53-8d76-40b3-b800-2119fa392de4" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-70e0ef53-8d76-40b3-b800-2119fa392de4") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.757735 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-708df931-7555-4f28-bf4e-ea91297ddc6b" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-af50e6d3-82cd-4f42-a436-512f8130f94d") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.757765 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-d0700616-6dd3-4aec-8646-8477f5420ca3" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-d0700616-6dd3-4aec-8646-8477f5420ca3") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.757801 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-e4d71202-dce8-4464-9bbc-53391eaaf86b" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-e4d71202-dce8-4464-9bbc-53391eaaf86b") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.759553 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-d0700616-6dd3-4aec-8646-8477f5420ca3" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-d0700616-6dd3-4aec-8646-8477f5420ca3") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.759969 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-70e0ef53-8d76-40b3-b800-2119fa392de4" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-70e0ef53-8d76-40b3-b800-2119fa392de4") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.760022 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-708df931-7555-4f28-bf4e-ea91297ddc6b" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-af50e6d3-82cd-4f42-a436-512f8130f94d") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.760249 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-8b17bbfe-876b-450f-a7b5-3b486e5decc9" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-5626876f-8d04-4719-8a08-aa67baf582c4") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.760289 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-e4d71202-dce8-4464-9bbc-53391eaaf86b" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-e4d71202-dce8-4464-9bbc-53391eaaf86b") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.760493 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-b911963c-a39c-4359-92cf-a12ab25f50ab" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-b911963c-a39c-4359-92cf-a12ab25f50ab") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.760510 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-7c605cc7-3a1c-4441-9e26-b693d34bdeb9" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-7c605cc7-3a1c-4441-9e26-b693d34bdeb9") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.760734 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-136ad782-227a-4338-80fd-69447548861e" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-a7de89ba-3294-461f-9794-905048108b35") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.760883 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-4e58c945-1cf8-46e6-b057-138e0c91c538" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-f7d3e6ef-33ba-46c1-8b08-f9ffbe09a292") on node "delightful"
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.764161 859533 shared_informer.go:247] Caches are synced for deployment
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.765233 859533 shared_informer.go:247] Caches are synced for disruption
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.765248 859533 disruption.go:371] Sending events to api server.
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.801891 859533 shared_informer.go:247] Caches are synced for ReplicationController
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.804034 859533 shared_informer.go:247] Caches are synced for ReplicaSet
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.877531 859533 shared_informer.go:247] Caches are synced for endpoint
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.887017 859533 controller.go:611] quota admission added evaluator for: endpoints
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.890470 859533 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.899891 859533 shared_informer.go:247] Caches are synced for endpoint_slice
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.909673 859533 shared_informer.go:247] Caches are synced for resource quota
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.917450 859533 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
Feb 04 06:30:00 delightful k3s[859533]: I0204 06:30:00.923139 859533 shared_informer.go:247] Caches are synced for resource quota
Feb 04 06:30:01 delightful k3s[859533]: E0204 06:30:01.210209 859533 csi_attacher.go:726] kubernetes.io/csi: detachment for VolumeAttachment for volume [pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1] failed: rpc error: code = Unavailable desc = transport is closing
Feb 04 06:30:01 delightful k3s[859533]: E0204 06:30:01.210257 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-02-04 06:30:01.710246046 +0100 CET m=+37.268385639 (durationBeforeRetry 500ms). Error: DetachVolume.Detach failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful" : rpc error: code = Unavailable desc = transport is closing
Feb 04 06:30:01 delightful k3s[859533]: I0204 06:30:01.654868 859533 request.go:665] Waited for 1.017987232s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6444/apis/storage.loft.sh/v1/virtualclustertemplates?limit=500&resourceVersion=0
Feb 04 06:30:01 delightful k3s[859533]: I0204 06:30:01.776557 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful"
Feb 04 06:30:01 delightful k3s[859533]: I0204 06:30:01.778603 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful"
Feb 04 06:30:02 delightful k3s[859533]: E0204 06:30:02.331037 859533 csi_attacher.go:726] kubernetes.io/csi: detachment for VolumeAttachment for volume [pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1] failed: rpc error: code = Unavailable desc = transport is closing
Feb 04 06:30:02 delightful k3s[859533]: E0204 06:30:02.331094 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-02-04 06:30:03.3310773 +0100 CET m=+38.889216923 (durationBeforeRetry 1s). Error: DetachVolume.Detach failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful" : rpc error: code = Unavailable desc = transport is closing
Feb 04 06:30:02 delightful k3s[859533]: I0204 06:30:02.383020 859533 shared_informer.go:247] Caches are synced for garbage collector
Feb 04 06:30:02 delightful k3s[859533]: I0204 06:30:02.383039 859533 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Feb 04 06:30:02 delightful k3s[859533]: I0204 06:30:02.436882 859533 shared_informer.go:247] Caches are synced for garbage collector
Feb 04 06:30:03 delightful k3s[859533]: I0204 06:30:03.394539 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful"
Feb 04 06:30:03 delightful k3s[859533]: I0204 06:30:03.396382 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful"
Feb 04 06:30:03 delightful k3s[859533]: E0204 06:30:03.969287 859533 csi_attacher.go:726] kubernetes.io/csi: detachment for VolumeAttachment for volume [pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1] failed: rpc error: code = Unavailable desc = transport is closing
Feb 04 06:30:03 delightful k3s[859533]: E0204 06:30:03.969345 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-02-04 06:30:05.969327859 +0100 CET m=+41.527467472 (durationBeforeRetry 2s). Error: DetachVolume.Detach failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful" : rpc error: code = Unavailable desc = transport is closing
Feb 04 06:30:06 delightful k3s[859533]: I0204 06:30:06.029838 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful"
Feb 04 06:30:06 delightful k3s[859533]: I0204 06:30:06.033185 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful"
Feb 04 06:30:06 delightful k3s[859533]: E0204 06:30:06.569137 859533 csi_attacher.go:726] kubernetes.io/csi: detachment for VolumeAttachment for volume [pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1] failed: rpc error: code = Unavailable desc = transport is closing
Feb 04 06:30:06 delightful k3s[859533]: E0204 06:30:06.569183 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-02-04 06:30:10.569168353 +0100 CET m=+46.127307956 (durationBeforeRetry 4s). Error: DetachVolume.Detach failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful" : rpc error: code = Unavailable desc = transport is closing
Feb 04 06:30:10 delightful k3s[859533]: I0204 06:30:10.672559 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful"
Feb 04 06:30:10 delightful k3s[859533]: I0204 06:30:10.674667 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful"
Feb 04 06:30:10 delightful k3s[859533]: E0204 06:30:10.694352 859533 job_controller.go:1309] pods "regular-snapshots-27398833--1-67xzt" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:10 delightful k3s[859533]: E0204 06:30:10.694378 859533 job_controller.go:441] Error syncing job: pods "regular-snapshots-27398833--1-67xzt" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:10 delightful k3s[859533]: I0204 06:30:10.694420 859533 event.go:291] "Event occurred" object="longhorn-system/regular-snapshots-27398833" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"regular-snapshots-27398833--1-67xzt\" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0"
Feb 04 06:30:10 delightful k3s[859533]: E0204 06:30:10.695176 859533 job_controller.go:1309] pods "hourly-backups-27398880--1-sj4xw" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:10 delightful k3s[859533]: I0204 06:30:10.695206 859533 event.go:291] "Event occurred" object="longhorn-system/hourly-backups-27398880" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"hourly-backups-27398880--1-sj4xw\" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0"
Feb 04 06:30:10 delightful k3s[859533]: E0204 06:30:10.696519 859533 job_controller.go:441] Error syncing job: pods "hourly-backups-27398880--1-sj4xw" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:11 delightful k3s[859533]: E0204 06:30:11.218531 859533 csi_attacher.go:726] kubernetes.io/csi: detachment for VolumeAttachment for volume [pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1] failed: rpc error: code = Unavailable desc = transport is closing
Feb 04 06:30:11 delightful k3s[859533]: E0204 06:30:11.218593 859533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-02-04 06:30:19.218572919 +0100 CET m=+54.776712532 (durationBeforeRetry 8s). Error: DetachVolume.Detach failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "delightful" : rpc error: code = Unavailable desc = transport is closing
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.604+0100","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"689da5af1f9c0825 switched to configuration voters=(7538363522856257573) learners=(17170586952230770711)"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.604+0100","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5d2267386b95c0fc","local-member-id":"689da5af1f9c0825","added-peer-id":"ee4a2ddc00ac1817","added-peer-peer-urls":["https://192.168.100.3:2380"]}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.604+0100","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.604+0100","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.605+0100","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.605+0100","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.605+0100","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.605+0100","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.605+0100","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.605+0100","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817","remote-peer-urls":["https://192.168.100.3:2380"]}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.605+0100","caller":"etcdserver/server.go:1907","msg":"applied a configuration change through raft","local-member-id":"689da5af1f9c0825","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.664+0100","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"689da5af1f9c0825","to":"ee4a2ddc00ac1817","stream-type":"stream Message"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.664+0100","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.664+0100","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.665+0100","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"689da5af1f9c0825","to":"ee4a2ddc00ac1817","stream-type":"stream MsgApp v2"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.665+0100","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.710+0100","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:11 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:11.711+0100","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:12 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:12.027+0100","caller":"etcdserver/server.go:2075","msg":"sending merged snapshot","from":"689da5af1f9c0825","to":"ee4a2ddc00ac1817","bytes":78041332,"size":"78 MB"}
Feb 04 06:30:12 delightful k3s[859533]: {"level":"info","ts":"2022-02-04T06:30:12.027+0100","caller":"rafthttp/snapshot_sender.go:84","msg":"sending database snapshot","snapshot-index":11326588,"remote-peer-id":"ee4a2ddc00ac1817","bytes":78041332,"size":"78 MB"}
Feb 04 06:30:12 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:30:12.592+0100","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0011fcc40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
Feb 04 06:30:14 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:30:14.593+0100","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0011fcc40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
Feb 04 06:30:19 delightful k3s[859533]: I0204 06:30:19.226171 859533 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "donkey"
Feb 04 06:30:19 delightful k3s[859533]: I0204 06:30:19.227397 859533 operation_generator.go:1578] Verified volume is safe to detach for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") on node "donkey"
Feb 04 06:30:27 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:30:27.592+0100","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0011fcc40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
Feb 04 06:30:29 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:30:29.593+0100","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0011fcc40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
Feb 04 06:30:29 delightful k3s[859533]: time="2022-02-04T06:30:29+01:00" level=warning msg="Learner capital-83c092df stalled at RaftAppliedIndex=0 for 15.59295171s"
Feb 04 06:30:30 delightful k3s[859533]: E0204 06:30:30.699989 859533 job_controller.go:1309] pods "regular-snapshots-27398833--1-bvfbf" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:30 delightful k3s[859533]: E0204 06:30:30.700019 859533 job_controller.go:441] Error syncing job: pods "regular-snapshots-27398833--1-bvfbf" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:30 delightful k3s[859533]: I0204 06:30:30.700031 859533 event.go:291] "Event occurred" object="longhorn-system/regular-snapshots-27398833" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"regular-snapshots-27398833--1-bvfbf\" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0"
Feb 04 06:30:30 delightful k3s[859533]: E0204 06:30:30.700299 859533 job_controller.go:1309] pods "hourly-backups-27398880--1-xc4hw" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:30 delightful k3s[859533]: I0204 06:30:30.700336 859533 event.go:291] "Event occurred" object="longhorn-system/hourly-backups-27398880" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"hourly-backups-27398880--1-xc4hw\" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0"
Feb 04 06:30:30 delightful k3s[859533]: E0204 06:30:30.702042 859533 job_controller.go:441] Error syncing job: pods "hourly-backups-27398880--1-xc4hw" is forbidden: exceeded quota: sleep-mode-quota-xl2xg, requested: pods=1, used: pods=8, limited: pods=0
Feb 04 06:30:32 delightful k3s[859533]: I0204 06:30:32.738260 859533 scope.go:110] "RemoveContainer" containerID="fbbe126e1ffda3acc633d1a9eb5a0cdaa5a36786763e1e025a67b23eff86e7f2"
Feb 04 06:30:32 delightful k3s[859533]: I0204 06:30:32.749215 859533 scope.go:110] "RemoveContainer" containerID="5c693300bd939cb3b939ca8414f5f103642f892c3123c70ff198daa61a520b7c"
Feb 04 06:30:32 delightful k3s[859533]: I0204 06:30:32.761831 859533 scope.go:110] "RemoveContainer" containerID="4e2523dcfa0716f6661c509d7bd8adbf1a521bf11cfc1fa7198d1ccf89aab750"
Feb 04 06:30:35 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:30:35.962+0100","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817","error":"unexpected EOF"}
Feb 04 06:30:35 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:30:35.962+0100","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817","error":"unexpected EOF"}
Feb 04 06:30:35 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:30:35.962+0100","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"ee4a2ddc00ac1817","error":"failed to read ee4a2ddc00ac1817 on stream Message (unexpected EOF)"}
Feb 04 06:30:37 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:30:37.008+0100","caller":"rafthttp/stream.go:223","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"689da5af1f9c0825","remote-peer-id":"ee4a2ddc00ac1817"}
Feb 04 06:30:39 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:30:39.025+0100","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.100.3:2380/version","remote-member-id":"ee4a2ddc00ac1817","error":"Get \"https://192.168.100.3:2380/version\": dial tcp 192.168.100.3:2380: connect: connection refused"}
Feb 04 06:30:39 delightful k3s[859533]: {"level":"warn","ts":"2022-02-04T06:30:39.025+0100","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"ee4a2ddc00ac1817","error":"Get \"https://192.168.100.3:2380/version\": dial tcp 192.168.100.3:2380: connect: connection refused"}
Feb 04 06:30:41 delightful k3s[859533]: I0204 06:30:41.349470 859533 network_policy_controller.go:166] Shutting down network policies full sync goroutine
Feb 04 06:30:41 delightful systemd[1]: Stopping Lightweight Kubernetes...
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Deactivated successfully.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 1485 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 1657 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 1720 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 1871 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 2014 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 2127 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 3571 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 6233 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 6262 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 194823 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 195011 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 195285 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 195467 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 195551 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 195759 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 196482 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 197143 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 197603 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 197760 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 197908 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 198214 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 198653 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 199441 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 199501 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 200668 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 200920 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 201104 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 201501 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 203439 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 206727 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Unit process 224118 (containerd-shim) remains running after unit stopped.
Feb 04 06:30:41 delightful systemd[1]: Stopped Lightweight Kubernetes.
Feb 04 06:30:41 delightful systemd[1]: k3s.service: Consumed 51.822s CPU time.