@soapergem
Created March 10, 2022 01:31
Container Logs
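The `==> file <==` headers below are `tail`'s multi-file output format. A capture like this can be reproduced on the control-plane node with something like the following (the `/var/log/containers` path and the `*_kube-system_*` glob are assumptions about this node's container runtime layout; adjust for your setup):

```shell
# tail the last lines of every kube-system container log at once;
# tail prints a "==> path <==" header before each file's output
sudo tail -n 10 /var/log/containers/*_kube-system_*.log
```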
==> etcd-k8s-master-1_kube-system_etcd-b62a2fe7b6cd9e3e28743b45dbe618b1806a78d32600ebc5617e2315a591cd98.log <==
2022-03-09T19:07:33.410526355-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:07:33.410Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:07:31.885Z","time spent":"1.524853649s","remote":"127.0.0.1:59618","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":27,"request content":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" "}
2022-03-09T19:07:43.369155936-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.368Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
2022-03-09T19:07:43.369294141-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.368Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"k8s-master-1","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.194:2380"],"advertise-client-urls":["https://192.168.1.194:2379"]}
2022-03-09T19:07:43.375239203-06:00 stderr F WARNING: 2022/03/10 01:07:43 [core] grpc: addrConn.createTransport failed to connect to {192.168.1.194:2379 192.168.1.194:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.1.194:2379: connect: connection refused". Reconnecting...
2022-03-09T19:07:43.439678957-06:00 stderr F WARNING: 2022/03/10 01:07:43 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2022-03-09T19:07:43.441263093-06:00 stderr F WARNING: 2022/03/10 01:07:43 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2022-03-09T19:07:43.493443518-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.493Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6c09b36eb34b96c2","current-leader-member-id":"6c09b36eb34b96c2"}
2022-03-09T19:07:43.983317461-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.983Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.1.194:2380"}
2022-03-09T19:07:44.047237861-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.984Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.1.194:2380"}
2022-03-09T19:07:44.047405843-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.984Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"k8s-master-1","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.194:2380"],"advertise-client-urls":["https://192.168.1.194:2379"]}
==> etcd-k8s-master-1_kube-system_etcd-c119d9f093ca912b2c89edcf938aa7ebc96ad0253dfe74843d6ba3db5f37ce04.log <==
2022-03-09T19:07:48.12809293-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c09b36eb34b96c2 received MsgVoteResp from 6c09b36eb34b96c2 at term 3"}
2022-03-09T19:07:48.128294561-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c09b36eb34b96c2 became leader at term 3"}
2022-03-09T19:07:48.128422024-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6c09b36eb34b96c2 elected leader 6c09b36eb34b96c2 at term 3"}
2022-03-09T19:07:48.129507603-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.129Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
2022-03-09T19:07:48.129596807-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.129Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
2022-03-09T19:07:48.133061896-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.129Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"6c09b36eb34b96c2","local-member-attributes":"{Name:k8s-master-1 ClientURLs:[https://192.168.1.194:2379]}","request-path":"/0/members/6c09b36eb34b96c2/attributes","cluster-id":"b0faae249263c59a","publish-timeout":"7s"}
2022-03-09T19:07:48.135276035-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.134Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.1.194:2379"}
2022-03-09T19:07:48.14097654-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.135Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
2022-03-09T19:07:48.14105367-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.135Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
2022-03-09T19:07:48.14108367-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.136Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
==> kube-apiserver-k8s-master-1_kube-system_kube-apiserver-84f96b7b1a19199acd2b1b195a1080857a14700c9457aaefff82c2f2f2bb4d02.log <==
2022-03-09T19:08:06.326396479-06:00 stderr F E0310 01:08:06.326261 1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
2022-03-09T19:08:06.327399983-06:00 stderr F E0310 01:08:06.327206 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
2022-03-09T19:08:06.32943527-06:00 stderr F E0310 01:08:06.329197 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
2022-03-09T19:08:06.33288497-06:00 stderr F E0310 01:08:06.332646 1 timeout.go:137] post-timeout activity - time-elapsed: 37.093046ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master-1" result: <nil>
2022-03-09T19:08:06.348910001-06:00 stderr F I0310 01:08:06.348648 1 shared_informer.go:247] Caches are synced for crd-autoregister
2022-03-09T19:08:07.029352367-06:00 stderr F I0310 01:08:07.029130 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2022-03-09T19:08:07.029521535-06:00 stderr F I0310 01:08:07.029286 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2022-03-09T19:08:07.05656189-06:00 stderr F I0310 01:08:07.056353 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
2022-03-09T19:08:09.501995595-06:00 stderr F I0310 01:08:09.501754 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
2022-03-09T19:08:23.468813772-06:00 stderr F I0310 01:08:23.468508 1 controller.go:611] quota admission added evaluator for: serviceaccounts
==> kube-apiserver-k8s-master-1_kube-system_kube-apiserver-878abcff7e10ca1736a3dd0c6f4f6a08fbc06b459e323d7da7b09d931e4468c9.log <==
2022-03-09T19:07:48.381478577-06:00 stderr F E0310 01:07:48.381325 1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
2022-03-09T19:07:48.381575818-06:00 stderr F E0310 01:07:48.381394 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
2022-03-09T19:07:48.381648226-06:00 stderr F E0310 01:07:48.381490 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 57.185µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
2022-03-09T19:07:48.382664897-06:00 stderr F E0310 01:07:48.382436 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
2022-03-09T19:07:48.384748906-06:00 stderr F I0310 01:07:48.384537 1 trace.go:205] Trace[1277373930]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.23.4 (linux/arm64) kubernetes/e6c093d/leader-election,audit-id:5f3e4998-b515-4649-896f-756afc2fd07d,client:192.168.1.194,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (10-Mar-2022 01:07:43.417) (total time: 4967ms):
2022-03-09T19:07:48.384816221-06:00 stderr F Trace[1277373930]: [4.967406932s] [4.967406932s] END
2022-03-09T19:07:48.499000463-06:00 stderr F E0310 01:07:48.498667 1 timeout.go:137] post-timeout activity - time-elapsed: 115.711767ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result: <nil>
2022-03-09T19:07:49.17126318-06:00 stderr F I0310 01:07:49.171024 1 trace.go:205] Trace[329883806]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.23.4 (linux/arm64) kubernetes/e6c093d,audit-id:663095a3-faf4-4a14-bf19-56b8174b1aba,client:192.168.1.194,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (10-Mar-2022 01:07:43.395) (total time: 5775ms):
2022-03-09T19:07:49.713298067-06:00 stderr F Trace[329883806]: ---"Object stored in database" 5773ms (01:07:49.170)
2022-03-09T19:07:49.713367827-06:00 stderr F Trace[329883806]: [5.775387816s] [5.775387816s] END
==> kube-controller-manager-k8s-master-1_kube-system_kube-controller-manager-a5464556416f1ac5cecf70ea5a645839e1881b3830dc5efb167ae2c797068763.log <==
2022-03-09T19:08:23.91354679-06:00 stderr F I0310 01:08:23.913488 1 shared_informer.go:247] Caches are synced for token_cleaner
2022-03-09T19:08:23.964369524-06:00 stderr F I0310 01:08:23.964154 1 controllermanager.go:605] Started "cronjob"
2022-03-09T19:08:23.964500766-06:00 stderr F I0310 01:08:23.964240 1 cronjob_controllerv2.go:132] "Starting cronjob controller v2"
2022-03-09T19:08:23.964593951-06:00 stderr F I0310 01:08:23.964505 1 shared_informer.go:240] Waiting for caches to sync for cronjob
2022-03-09T19:08:24.118497049-06:00 stderr F I0310 01:08:24.118247 1 controllermanager.go:605] Started "ttl"
2022-03-09T19:08:24.118813606-06:00 stderr F I0310 01:08:24.118688 1 ttl_controller.go:121] Starting TTL controller
2022-03-09T19:08:24.118862662-06:00 stderr F I0310 01:08:24.118730 1 shared_informer.go:240] Waiting for caches to sync for TTL
2022-03-09T19:08:24.265679248-06:00 stderr F I0310 01:08:24.265463 1 controllermanager.go:605] Started "bootstrapsigner"
2022-03-09T19:08:24.265790415-06:00 stderr F I0310 01:08:24.265632 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
2022-03-09T19:08:24.318524158-06:00 stderr F I0310 01:08:24.318312 1 node_ipam_controller.go:91] Sending events to api server.
==> kube-controller-manager-k8s-master-1_kube-system_kube-controller-manager-fedfac4fb0548a1bdb9f0eacbc6affd4710153efed8dd82f5fc7d72564a08223.log <==
2022-03-09T19:07:36.806505139-06:00 stderr F I0310 01:07:36.806432 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
2022-03-09T19:07:36.80663151-06:00 stderr F I0310 01:07:36.806539 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
2022-03-09T19:07:36.807178957-06:00 stderr F I0310 01:07:36.806650 1 controllermanager.go:605] Started "resourcequota"
2022-03-09T19:07:36.810387822-06:00 stderr F I0310 01:07:36.809892 1 resource_quota_controller.go:273] Starting resource quota controller
2022-03-09T19:07:36.810466137-06:00 stderr F I0310 01:07:36.809973 1 shared_informer.go:240] Waiting for caches to sync for resource quota
2022-03-09T19:07:36.813088963-06:00 stderr F I0310 01:07:36.812847 1 resource_quota_monitor.go:308] QuotaMonitor running
2022-03-09T19:07:36.880038209-06:00 stderr F I0310 01:07:36.870143 1 controllermanager.go:605] Started "statefulset"
2022-03-09T19:07:36.880116098-06:00 stderr F I0310 01:07:36.870371 1 stateful_set.go:147] Starting stateful set controller
2022-03-09T19:07:36.880185673-06:00 stderr F I0310 01:07:36.870406 1 shared_informer.go:240] Waiting for caches to sync for stateful set
2022-03-09T19:07:36.928114746-06:00 stderr F I0310 01:07:36.927771 1 node_ipam_controller.go:91] Sending events to api server.
==> kube-scheduler-k8s-master-1_kube-system_kube-scheduler-4e035976cbca8e9671beffcf5fce2a66a75d593197af6b39ee077ca52f3064a2.log <==
2022-03-09T19:07:56.161012412-06:00 stderr F W0310 01:07:56.160639 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.1.194:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.194:6443: connect: connection refused
2022-03-09T19:07:56.161060672-06:00 stderr F E0310 01:07:56.160828 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.1.194:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.194:6443: connect: connection refused
2022-03-09T19:07:56.1614906-06:00 stderr F E0310 01:07:56.161310 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get "https://192.168.1.194:6443/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.1.194:6443: connect: connection refused
2022-03-09T19:07:56.162984569-06:00 stderr F W0310 01:07:56.162607 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.1.194:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.194:6443: connect: connection refused
2022-03-09T19:07:56.163108514-06:00 stderr F E0310 01:07:56.162890 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.1.194:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.194:6443: connect: connection refused
2022-03-09T19:08:06.391140217-06:00 stderr F E0310 01:08:06.390806 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
2022-03-09T19:08:06.391239754-06:00 stderr F E0310 01:08:06.390921 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
2022-03-09T19:08:06.449270611-06:00 stderr F I0310 01:08:06.448960 1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
2022-03-09T19:08:06.545250055-06:00 stderr F I0310 01:08:06.544937 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2022-03-09T19:08:22.09765121-06:00 stderr F I0310 01:08:22.097410 1 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler
==> kube-scheduler-k8s-master-1_kube-system_kube-scheduler-64deb99a93630005e45293e52b649a783f0445d395d3767e8af50eca6d23d341.log <==
2022-03-09T19:07:34.57525352-06:00 stderr F W0310 01:07:34.575012 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2022-03-09T19:07:34.57826494-06:00 stderr F E0310 01:07:34.575074 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2022-03-09T19:07:40.268319077-06:00 stderr F I0310 01:07:40.266842 1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
2022-03-09T19:07:40.290261577-06:00 stderr F I0310 01:07:40.290060 1 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler
2022-03-09T19:07:40.85941641-06:00 stderr F I0310 01:07:40.859206 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2022-03-09T19:07:43.37033583-06:00 stderr F I0310 01:07:43.369554 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2022-03-09T19:07:43.385480339-06:00 stderr F I0310 01:07:43.385240 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2022-03-09T19:07:43.39947162-06:00 stderr F I0310 01:07:43.398796 1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
2022-03-09T19:07:48.389394259-06:00 stderr F E0310 01:07:48.389114 1 leaderelection.go:306] Failed to release lock: Put "https://192.168.1.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2022-03-09T19:07:48.389519666-06:00 stderr F I0310 01:07:48.389396 1 server.go:204] "Requested to terminate, exiting"
==> kube-controller-manager-k8s-master-1_kube-system_kube-controller-manager-a5464556416f1ac5cecf70ea5a645839e1881b3830dc5efb167ae2c797068763.log <==
2022-03-09T19:08:34.432949472-06:00 stderr F I0310 01:08:34.432742 1 range_allocator.go:83] Sending events to api server.
2022-03-09T19:08:34.433487363-06:00 stderr F I0310 01:08:34.433360 1 range_allocator.go:117] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
2022-03-09T19:08:34.43394018-06:00 stderr F I0310 01:08:34.433674 1 controllermanager.go:605] Started "nodeipam"
2022-03-09T19:08:34.434035439-06:00 stderr F I0310 01:08:34.433787 1 node_ipam_controller.go:154] Starting ipam controller
2022-03-09T19:08:34.434099162-06:00 stderr F I0310 01:08:34.433840 1 shared_informer.go:240] Waiting for caches to sync for node
2022-03-09T19:08:34.440201021-06:00 stderr F I0310 01:08:34.440000 1 node_lifecycle_controller.go:77] Sending events to api server
2022-03-09T19:08:34.440372503-06:00 stderr F E0310 01:08:34.440171 1 core.go:212] failed to start cloud node lifecycle controller: no cloud provider provided
2022-03-09T19:08:34.440413652-06:00 stderr F W0310 01:08:34.440249 1 controllermanager.go:583] Skipping "cloud-node-lifecycle"
2022-03-09T19:08:34.461711816-06:00 stderr F I0310 01:08:34.460488 1 controllermanager.go:605] Started "attachdetach"
2022-03-09T19:08:34.462504283-06:00 stderr F I0310 01:08:34.460925 1 attach_detach_controller.go:328] Starting attach detach controller
2022-03-09T19:08:34.462580227-06:00 stderr F I0310 01:08:34.462359 1 shared_informer.go:240] Waiting for caches to sync for attach detach
2022-03-09T19:08:34.499841682-06:00 stderr F I0310 01:08:34.499513 1 garbagecollector.go:146] Starting garbage collector controller
2022-03-09T19:08:34.500143757-06:00 stderr F I0310 01:08:34.499985 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
2022-03-09T19:08:34.500270258-06:00 stderr F I0310 01:08:34.500164 1 graph_builder.go:289] GraphBuilder running
2022-03-09T19:08:34.500296239-06:00 stderr F I0310 01:08:34.499636 1 controllermanager.go:605] Started "garbagecollector"
2022-03-09T19:08:34.535907354-06:00 stderr F I0310 01:08:34.535691 1 controllermanager.go:605] Started "horizontalpodautoscaling"
2022-03-09T19:08:34.536014225-06:00 stderr F I0310 01:08:34.535771 1 horizontal.go:168] Starting HPA controller
2022-03-09T19:08:34.536033484-06:00 stderr F I0310 01:08:34.535794 1 shared_informer.go:240] Waiting for caches to sync for HPA
2022-03-09T19:08:34.560818293-06:00 stderr F E0310 01:08:34.560539 1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
2022-03-09T19:08:34.560947905-06:00 stderr F W0310 01:08:34.560614 1 controllermanager.go:583] Skipping "service"
2022-03-09T19:08:34.561534852-06:00 stderr F W0310 01:08:34.561323 1 core.go:226] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
2022-03-09T19:08:34.561653463-06:00 stderr F W0310 01:08:34.561382 1 controllermanager.go:583] Skipping "route"
2022-03-09T19:08:34.586547347-06:00 stderr F I0310 01:08:34.586299 1 controllermanager.go:605] Started "endpointslice"
2022-03-09T19:08:34.587405202-06:00 stderr F I0310 01:08:34.586593 1 endpointslice_controller.go:257] Starting endpoint slice controller
2022-03-09T19:08:34.588162131-06:00 stderr F I0310 01:08:34.587960 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
2022-03-09T19:08:34.612134215-06:00 stderr F I0310 01:08:34.611911 1 controllermanager.go:605] Started "replicaset"
2022-03-09T19:08:34.614922634-06:00 stderr F I0310 01:08:34.611969 1 replica_set.go:186] Starting replicaset controller
2022-03-09T19:08:34.615064042-06:00 stderr F I0310 01:08:34.614747 1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
2022-03-09T19:08:34.658892691-06:00 stderr F I0310 01:08:34.658678 1 controllermanager.go:605] Started "podgc"
2022-03-09T19:08:34.659044451-06:00 stderr F I0310 01:08:34.658960 1 gc_controller.go:89] Starting GC controller
2022-03-09T19:08:34.659158211-06:00 stderr F I0310 01:08:34.659057 1 shared_informer.go:240] Waiting for caches to sync for GC
2022-03-09T19:08:34.671137632-06:00 stderr F I0310 01:08:34.670787 1 controllermanager.go:605] Started "job"
2022-03-09T19:08:34.671771783-06:00 stderr F I0310 01:08:34.670952 1 job_controller.go:186] Starting job controller
2022-03-09T19:08:34.671930395-06:00 stderr F I0310 01:08:34.671735 1 shared_informer.go:240] Waiting for caches to sync for job
2022-03-09T19:08:34.743268735-06:00 stderr F I0310 01:08:34.743038 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for events.events.k8s.io
2022-03-09T19:08:34.74339105-06:00 stderr F I0310 01:08:34.743273 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
2022-03-09T19:08:34.743576755-06:00 stderr F I0310 01:08:34.743477 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for deployments.apps
2022-03-09T19:08:34.743748126-06:00 stderr F W0310 01:08:34.743594 1 shared_informer.go:504] resyncPeriod 14h21m6.14775104s is smaller than resyncCheckPeriod 14h56m58.733105155s and the informer has already started. Changing it to 14h56m58.733105155s
2022-03-09T19:08:34.744017312-06:00 stderr F I0310 01:08:34.743893 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for serviceaccounts
2022-03-09T19:08:34.744142665-06:00 stderr F I0310 01:08:34.744064 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for limitranges
2022-03-09T19:08:34.744327702-06:00 stderr F I0310 01:08:34.744238 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for podtemplates
2022-03-09T19:08:34.744429351-06:00 stderr F I0310 01:08:34.744356 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for controllerrevisions.apps
2022-03-09T19:08:34.744699889-06:00 stderr F I0310 01:08:34.744596 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
2022-03-09T19:08:34.744968705-06:00 stderr F I0310 01:08:34.744883 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for daemonsets.apps
2022-03-09T19:08:34.745166836-06:00 stderr F I0310 01:08:34.745068 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for statefulsets.apps
2022-03-09T19:08:34.74538867-06:00 stderr F I0310 01:08:34.745292 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for cronjobs.batch
2022-03-09T19:08:34.745609208-06:00 stderr F I0310 01:08:34.745508 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
2022-03-09T19:08:34.745807579-06:00 stderr F I0310 01:08:34.745712 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
2022-03-09T19:08:34.746029135-06:00 stderr F I0310 01:08:34.745933 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
2022-03-09T19:08:34.74648336-06:00 stderr F I0310 01:08:34.746370 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpoints
2022-03-09T19:08:34.746708472-06:00 stderr F I0310 01:08:34.746611 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for jobs.batch
2022-03-09T19:08:34.746907287-06:00 stderr F I0310 01:08:34.746810 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
2022-03-09T19:08:34.747139048-06:00 stderr F I0310 01:08:34.747043 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for replicasets.apps
2022-03-09T19:08:34.747318122-06:00 stderr F I0310 01:08:34.747223 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
2022-03-09T19:08:34.747581216-06:00 stderr F I0310 01:08:34.747473 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
2022-03-09T19:08:34.747859236-06:00 stderr F I0310 01:08:34.747722 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
2022-03-09T19:08:34.748104478-06:00 stderr F I0310 01:08:34.748003 1 controllermanager.go:605] Started "resourcequota"
2022-03-09T19:08:34.748390738-06:00 stderr F I0310 01:08:34.748114 1 resource_quota_controller.go:273] Starting resource quota controller
2022-03-09T19:08:34.748455535-06:00 stderr F I0310 01:08:34.748338 1 shared_informer.go:240] Waiting for caches to sync for resource quota
2022-03-09T19:08:34.748596202-06:00 stderr F I0310 01:08:34.748440 1 resource_quota_monitor.go:308] QuotaMonitor running
2022-03-09T19:08:34.763125838-06:00 stderr F I0310 01:08:34.762893 1 node_lifecycle_controller.go:377] Sending events to api server.
2022-03-09T19:08:34.764396973-06:00 stderr F I0310 01:08:34.764179 1 taint_manager.go:163] "Sending events to api server"
2022-03-09T19:08:34.765604719-06:00 stderr F I0310 01:08:34.765415 1 node_lifecycle_controller.go:505] Controller will reconcile labels.
2022-03-09T19:08:34.765938868-06:00 stderr F I0310 01:08:34.765812 1 controllermanager.go:605] Started "nodelifecycle"
2022-03-09T19:08:34.766170073-06:00 stderr F I0310 01:08:34.766047 1 node_lifecycle_controller.go:539] Starting node controller
2022-03-09T19:08:34.766244666-06:00 stderr F I0310 01:08:34.766084 1 shared_informer.go:240] Waiting for caches to sync for taint
2022-03-09T19:08:34.788586742-06:00 stderr F I0310 01:08:34.788357 1 controllermanager.go:605] Started "csrcleaner"
2022-03-09T19:08:34.788849521-06:00 stderr F I0310 01:08:34.788725 1 cleaner.go:82] Starting CSR cleaner controller
2022-03-09T19:08:34.798794989-06:00 stderr F I0310 01:08:34.798588 1 controllermanager.go:605] Started "root-ca-cert-publisher"
2022-03-09T19:08:34.79904925-06:00 stderr F I0310 01:08:34.798905 1 publisher.go:107] Starting root CA certificate configmap publisher
2022-03-09T19:08:34.799101917-06:00 stderr F I0310 01:08:34.798948 1 shared_informer.go:240] Waiting for caches to sync for crt configmap
2022-03-09T19:08:34.852210439-06:00 stderr F I0310 01:08:34.851972 1 controllermanager.go:605] Started "replicationcontroller"
2022-03-09T19:08:34.852328921-06:00 stderr F I0310 01:08:34.852081 1 replica_set.go:186] Starting replicationcontroller controller
2022-03-09T19:08:34.852489217-06:00 stderr F I0310 01:08:34.852379 1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
2022-03-09T19:08:34.899989679-06:00 stderr F I0310 01:08:34.899720 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
2022-03-09T19:08:34.900090809-06:00 stderr F I0310 01:08:34.899778 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
2022-03-09T19:08:34.900379458-06:00 stderr F I0310 01:08:34.900216 1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
2022-03-09T19:08:34.901482055-06:00 stderr F I0310 01:08:34.901297 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
2022-03-09T19:08:34.901747538-06:00 stderr F I0310 01:08:34.901656 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
2022-03-09T19:08:34.901857279-06:00 stderr F I0310 01:08:34.901548 1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
2022-03-09T19:08:34.903467527-06:00 stderr F I0310 01:08:34.903283 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
2022-03-09T19:08:34.903541231-06:00 stderr F I0310 01:08:34.903334 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
2022-03-09T19:08:34.903706602-06:00 stderr F I0310 01:08:34.903604 1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
2022-03-09T19:08:34.904992885-06:00 stderr F I0310 01:08:34.904789 1 controllermanager.go:605] Started "csrsigning"
2022-03-09T19:08:34.90521683-06:00 stderr F I0310 01:08:34.905119 1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
2022-03-09T19:08:34.905748777-06:00 stderr F I0310 01:08:34.904857 1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
2022-03-09T19:08:34.905844574-06:00 stderr F I0310 01:08:34.905679 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
2022-03-09T19:08:34.937880636-06:00 stderr F I0310 01:08:34.937601 1 shared_informer.go:240] Waiting for caches to sync for resource quota
2022-03-09T19:08:34.95565236-06:00 stderr F W0310 01:08:34.955344 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-1" does not exist
2022-03-09T19:08:34.95661303-06:00 stderr F I0310 01:08:34.956386 1 shared_informer.go:247] Caches are synced for node
2022-03-09T19:08:34.956864105-06:00 stderr F I0310 01:08:34.956733 1 range_allocator.go:173] Starting range CIDR allocator
2022-03-09T19:08:34.957368941-06:00 stderr F I0310 01:08:34.957222 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
2022-03-09T19:08:34.957455867-06:00 stderr F I0310 01:08:34.957391 1 shared_informer.go:247] Caches are synced for cidrallocator
2022-03-09T19:08:34.966918667-06:00 stderr F I0310 01:08:34.966678 1 shared_informer.go:247] Caches are synced for bootstrap_signer
2022-03-09T19:08:34.96712376-06:00 stderr F I0310 01:08:34.967009 1 shared_informer.go:247] Caches are synced for TTL after finished
2022-03-09T19:08:34.972262838-06:00 stderr F I0310 01:08:34.972018 1 shared_informer.go:247] Caches are synced for job
2022-03-09T19:08:34.997999984-06:00 stderr F I0310 01:08:34.997707 1 shared_informer.go:247] Caches are synced for PVC protection
2022-03-09T19:08:35.000378901-06:00 stderr F I0310 01:08:35.000116 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
2022-03-09T19:08:35.00247541-06:00 stderr F I0310 01:08:35.002218 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
2022-03-09T19:08:35.006163574-06:00 stderr F I0310 01:08:35.005897 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
2022-03-09T19:08:35.008341824-06:00 stderr F I0310 01:08:35.002772 1 range_allocator.go:374] Set node k8s-master-1 PodCIDR to [10.244.0.0/24]
2022-03-09T19:08:35.010909391-06:00 stderr F I0310 01:08:35.004447 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
2022-03-09T19:08:35.012623639-06:00 stderr F I0310 01:08:35.008716 1 shared_informer.go:247] Caches are synced for crt configmap
2022-03-09T19:08:35.01544154-06:00 stderr F I0310 01:08:35.015215 1 shared_informer.go:247] Caches are synced for ReplicaSet
2022-03-09T19:08:35.020328579-06:00 stderr F I0310 01:08:35.020077 1 shared_informer.go:247] Caches are synced for TTL
2022-03-09T19:08:35.026451364-06:00 stderr F I0310 01:08:35.026226 1 shared_informer.go:247] Caches are synced for deployment
2022-03-09T19:08:35.028152186-06:00 stderr F I0310 01:08:35.027900 1 shared_informer.go:247] Caches are synced for service account
2022-03-09T19:08:35.032660446-06:00 stderr F I0310 01:08:35.032417 1 shared_informer.go:247] Caches are synced for ephemeral
2022-03-09T19:08:35.034514028-06:00 stderr F I0310 01:08:35.034156 1 shared_informer.go:247] Caches are synced for endpoint
2022-03-09T19:08:35.039524142-06:00 stderr F I0310 01:08:35.039167 1 shared_informer.go:247] Caches are synced for HPA
2022-03-09T19:08:35.040253108-06:00 stderr F I0310 01:08:35.039976 1 shared_informer.go:247] Caches are synced for disruption
2022-03-09T19:08:35.04046459-06:00 stderr F I0310 01:08:35.040048 1 disruption.go:371] Sending events to api server.
2022-03-09T19:08:35.043241324-06:00 stderr F I0310 01:08:35.042884 1 shared_informer.go:247] Caches are synced for stateful set
2022-03-09T19:08:35.048179456-06:00 stderr F I0310 01:08:35.047959 1 shared_informer.go:247] Caches are synced for namespace
2022-03-09T19:08:35.050137224-06:00 stderr F I0310 01:08:35.049704 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
2022-03-09T19:08:35.052906699-06:00 stderr F I0310 01:08:35.052625 1 shared_informer.go:247] Caches are synced for ReplicationController
2022-03-09T19:08:35.060020859-06:00 stderr F I0310 01:08:35.059260 1 shared_informer.go:247] Caches are synced for GC
2022-03-09T19:08:35.060186026-06:00 stderr F I0310 01:08:35.059780 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
2022-03-09T19:08:35.065945643-06:00 stderr F I0310 01:08:35.065729 1 shared_informer.go:247] Caches are synced for cronjob
2022-03-09T19:08:35.066708257-06:00 stderr F I0310 01:08:35.066530 1 shared_informer.go:247] Caches are synced for taint
2022-03-09T19:08:35.066861832-06:00 stderr F I0310 01:08:35.066773 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
2022-03-09T19:08:35.075073515-06:00 stderr F I0310 01:08:35.074741 1 event.go:294] "Event occurred" object="k8s-master-1" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node k8s-master-1 event: Registered Node k8s-master-1 in Controller"
2022-03-09T19:08:35.075203775-06:00 stderr F I0310 01:08:35.074875 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone:
2022-03-09T19:08:35.075290405-06:00 stderr F W0310 01:08:35.075060 1 node_lifecycle_controller.go:1012] Missing timestamp for Node k8s-master-1. Assuming now as a timestamp.
2022-03-09T19:08:35.07540672-06:00 stderr F I0310 01:08:35.075185 1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
2022-03-09T19:08:35.075583184-06:00 stderr F I0310 01:08:35.075438 1 shared_informer.go:247] Caches are synced for daemon sets
2022-03-09T19:08:35.088954926-06:00 stderr F I0310 01:08:35.088672 1 shared_informer.go:247] Caches are synced for endpoint_slice
2022-03-09T19:08:35.099801787-06:00 stderr F I0310 01:08:35.099517 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
2022-03-09T19:08:35.101834462-06:00 stderr F I0310 01:08:35.101591 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
2022-03-09T19:08:35.138816601-06:00 stderr F I0310 01:08:35.138247 1 shared_informer.go:247] Caches are synced for resource quota
2022-03-09T19:08:35.149352942-06:00 stderr F I0310 01:08:35.149093 1 shared_informer.go:247] Caches are synced for resource quota
2022-03-09T19:08:35.163725744-06:00 stderr F I0310 01:08:35.163555 1 shared_informer.go:247] Caches are synced for attach detach
2022-03-09T19:08:35.175673017-06:00 stderr F I0310 01:08:35.175474 1 shared_informer.go:247] Caches are synced for expand
2022-03-09T19:08:35.17870503-06:00 stderr F I0310 01:08:35.175844 1 shared_informer.go:247] Caches are synced for persistent volume
2022-03-09T19:08:35.224968282-06:00 stderr F I0310 01:08:35.224752 1 shared_informer.go:247] Caches are synced for PV protection
==> kube-apiserver-k8s-master-1_kube-system_kube-apiserver-84f96b7b1a19199acd2b1b195a1080857a14700c9457aaefff82c2f2f2bb4d02.log <==
2022-03-09T19:08:35.691681635-06:00 stderr F I0310 01:08:35.690660 1 controller.go:611] quota admission added evaluator for: endpoints
2022-03-09T19:08:35.731271915-06:00 stderr F I0310 01:08:35.731045 1 controller.go:611] quota admission added evaluator for: replicasets.apps
2022-03-09T19:08:35.819196085-06:00 stderr F I0310 01:08:35.818963 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
2022-03-09T19:08:35.93597034-06:00 stderr F I0310 01:08:35.935702 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
==> kube-controller-manager-k8s-master-1_kube-system_kube-controller-manager-a5464556416f1ac5cecf70ea5a645839e1881b3830dc5efb167ae2c797068763.log <==
2022-03-09T19:08:35.600988157-06:00 stderr F I0310 01:08:35.600785 1 shared_informer.go:247] Caches are synced for garbage collector
2022-03-09T19:08:35.601090473-06:00 stderr F I0310 01:08:35.600856 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
2022-03-09T19:08:35.66093456-06:00 stderr F I0310 01:08:35.660721 1 shared_informer.go:247] Caches are synced for garbage collector
2022-03-09T19:08:35.749514974-06:00 stderr F I0310 01:08:35.743405 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
2022-03-09T19:08:35.838420426-06:00 stderr F I0310 01:08:35.838190 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ztn28"
2022-03-09T19:08:36.076091824-06:00 stderr F I0310 01:08:36.075701 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-chvkk"
2022-03-09T19:08:36.185337159-06:00 stderr F I0310 01:08:36.185030 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-h2cqv"
==> kube-apiserver-k8s-master-1_kube-system_kube-apiserver-84f96b7b1a19199acd2b1b195a1080857a14700c9457aaefff82c2f2f2bb4d02.log <==
2022-03-09T19:08:39.902588828-06:00 stderr F I0310 01:08:39.902301 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
^C
root@k8s-master-1:/var/log/containers# tail -f *.log
==> etcd-k8s-master-1_kube-system_etcd-b62a2fe7b6cd9e3e28743b45dbe618b1806a78d32600ebc5617e2315a591cd98.log <==
2022-03-09T19:07:33.410526355-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:07:33.410Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:07:31.885Z","time spent":"1.524853649s","remote":"127.0.0.1:59618","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":27,"request content":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" "}
2022-03-09T19:07:43.369155936-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.368Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
2022-03-09T19:07:43.369294141-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.368Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"k8s-master-1","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.194:2380"],"advertise-client-urls":["https://192.168.1.194:2379"]}
2022-03-09T19:07:43.375239203-06:00 stderr F WARNING: 2022/03/10 01:07:43 [core] grpc: addrConn.createTransport failed to connect to {192.168.1.194:2379 192.168.1.194:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.1.194:2379: connect: connection refused". Reconnecting...
2022-03-09T19:07:43.439678957-06:00 stderr F WARNING: 2022/03/10 01:07:43 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2022-03-09T19:07:43.441263093-06:00 stderr F WARNING: 2022/03/10 01:07:43 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2022-03-09T19:07:43.493443518-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.493Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6c09b36eb34b96c2","current-leader-member-id":"6c09b36eb34b96c2"}
2022-03-09T19:07:43.983317461-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.983Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.1.194:2380"}
2022-03-09T19:07:44.047237861-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.984Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.1.194:2380"}
2022-03-09T19:07:44.047405843-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:43.984Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"k8s-master-1","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.194:2380"],"advertise-client-urls":["https://192.168.1.194:2379"]}
==> etcd-k8s-master-1_kube-system_etcd-c119d9f093ca912b2c89edcf938aa7ebc96ad0253dfe74843d6ba3db5f37ce04.log <==
2022-03-09T19:07:48.12809293-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c09b36eb34b96c2 received MsgVoteResp from 6c09b36eb34b96c2 at term 3"}
2022-03-09T19:07:48.128294561-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c09b36eb34b96c2 became leader at term 3"}
2022-03-09T19:07:48.128422024-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6c09b36eb34b96c2 elected leader 6c09b36eb34b96c2 at term 3"}
2022-03-09T19:07:48.129507603-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.129Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
2022-03-09T19:07:48.129596807-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.129Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
2022-03-09T19:07:48.133061896-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.129Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"6c09b36eb34b96c2","local-member-attributes":"{Name:k8s-master-1 ClientURLs:[https://192.168.1.194:2379]}","request-path":"/0/members/6c09b36eb34b96c2/attributes","cluster-id":"b0faae249263c59a","publish-timeout":"7s"}
2022-03-09T19:07:48.135276035-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.134Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.1.194:2379"}
2022-03-09T19:07:48.14097654-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.135Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
2022-03-09T19:07:48.14105367-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.135Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
2022-03-09T19:07:48.14108367-06:00 stderr F {"level":"info","ts":"2022-03-10T01:07:48.136Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
==> kube-apiserver-k8s-master-1_kube-system_kube-apiserver-84f96b7b1a19199acd2b1b195a1080857a14700c9457aaefff82c2f2f2bb4d02.log <==
2022-03-09T19:08:07.029352367-06:00 stderr F I0310 01:08:07.029130 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2022-03-09T19:08:07.029521535-06:00 stderr F I0310 01:08:07.029286 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2022-03-09T19:08:07.05656189-06:00 stderr F I0310 01:08:07.056353 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
2022-03-09T19:08:09.501995595-06:00 stderr F I0310 01:08:09.501754 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
2022-03-09T19:08:23.468813772-06:00 stderr F I0310 01:08:23.468508 1 controller.go:611] quota admission added evaluator for: serviceaccounts
2022-03-09T19:08:35.691681635-06:00 stderr F I0310 01:08:35.690660 1 controller.go:611] quota admission added evaluator for: endpoints
2022-03-09T19:08:35.731271915-06:00 stderr F I0310 01:08:35.731045 1 controller.go:611] quota admission added evaluator for: replicasets.apps
2022-03-09T19:08:35.819196085-06:00 stderr F I0310 01:08:35.818963 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
2022-03-09T19:08:35.93597034-06:00 stderr F I0310 01:08:35.935702 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
2022-03-09T19:08:39.902588828-06:00 stderr F I0310 01:08:39.902301 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
==> kube-apiserver-k8s-master-1_kube-system_kube-apiserver-878abcff7e10ca1736a3dd0c6f4f6a08fbc06b459e323d7da7b09d931e4468c9.log <==
2022-03-09T19:07:48.381478577-06:00 stderr F E0310 01:07:48.381325 1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
2022-03-09T19:07:48.381575818-06:00 stderr F E0310 01:07:48.381394 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
2022-03-09T19:07:48.381648226-06:00 stderr F E0310 01:07:48.381490 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 57.185µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
2022-03-09T19:07:48.382664897-06:00 stderr F E0310 01:07:48.382436 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
2022-03-09T19:07:48.384748906-06:00 stderr F I0310 01:07:48.384537 1 trace.go:205] Trace[1277373930]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.23.4 (linux/arm64) kubernetes/e6c093d/leader-election,audit-id:5f3e4998-b515-4649-896f-756afc2fd07d,client:192.168.1.194,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (10-Mar-2022 01:07:43.417) (total time: 4967ms):
2022-03-09T19:07:48.384816221-06:00 stderr F Trace[1277373930]: [4.967406932s] [4.967406932s] END
2022-03-09T19:07:48.499000463-06:00 stderr F E0310 01:07:48.498667 1 timeout.go:137] post-timeout activity - time-elapsed: 115.711767ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result: <nil>
2022-03-09T19:07:49.17126318-06:00 stderr F I0310 01:07:49.171024 1 trace.go:205] Trace[329883806]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.23.4 (linux/arm64) kubernetes/e6c093d,audit-id:663095a3-faf4-4a14-bf19-56b8174b1aba,client:192.168.1.194,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (10-Mar-2022 01:07:43.395) (total time: 5775ms):
2022-03-09T19:07:49.713298067-06:00 stderr F Trace[329883806]: ---"Object stored in database" 5773ms (01:07:49.170)
2022-03-09T19:07:49.713367827-06:00 stderr F Trace[329883806]: [5.775387816s] [5.775387816s] END
==> kube-controller-manager-k8s-master-1_kube-system_kube-controller-manager-34c1e93a6cb35251ee21712dd8c9ad3a99b038ecf449603601fc29ef924eb8f4.log <==
==> kube-controller-manager-k8s-master-1_kube-system_kube-controller-manager-a5464556416f1ac5cecf70ea5a645839e1881b3830dc5efb167ae2c797068763.log <==
2022-03-09T19:08:35.175673017-06:00 stderr F I0310 01:08:35.175474 1 shared_informer.go:247] Caches are synced for expand
2022-03-09T19:08:35.17870503-06:00 stderr F I0310 01:08:35.175844 1 shared_informer.go:247] Caches are synced for persistent volume
2022-03-09T19:08:35.224968282-06:00 stderr F I0310 01:08:35.224752 1 shared_informer.go:247] Caches are synced for PV protection
2022-03-09T19:08:35.600988157-06:00 stderr F I0310 01:08:35.600785 1 shared_informer.go:247] Caches are synced for garbage collector
2022-03-09T19:08:35.601090473-06:00 stderr F I0310 01:08:35.600856 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
2022-03-09T19:08:35.66093456-06:00 stderr F I0310 01:08:35.660721 1 shared_informer.go:247] Caches are synced for garbage collector
2022-03-09T19:08:35.749514974-06:00 stderr F I0310 01:08:35.743405 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
2022-03-09T19:08:35.838420426-06:00 stderr F I0310 01:08:35.838190 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ztn28"
2022-03-09T19:08:36.076091824-06:00 stderr F I0310 01:08:36.075701 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-chvkk"
2022-03-09T19:08:36.185337159-06:00 stderr F I0310 01:08:36.185030 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-h2cqv"
==> kube-proxy-ztn28_kube-system_kube-proxy-c744c2853b7260b18b9154feeab1fbfec5ba43c76fa4196cfec6c49d03bed0d1.log <==
2022-03-09T19:08:39.881395794-06:00 stderr F I0310 01:08:39.881311 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
2022-03-09T19:08:39.882851411-06:00 stderr F I0310 01:08:39.882675 1 server.go:656] "Version info" version="v1.23.4"
2022-03-09T19:08:39.886304277-06:00 stderr F W0310 01:08:39.886066 1 sysinfo.go:203] Nodes topology is not available, providing CPU topology
2022-03-09T19:08:39.890496758-06:00 stderr F I0310 01:08:39.890268 1 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=131072
2022-03-09T19:08:39.89178406-06:00 stderr F I0310 01:08:39.891585 1 config.go:226] "Starting endpoint slice config controller"
2022-03-09T19:08:39.891954505-06:00 stderr F I0310 01:08:39.891737 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
2022-03-09T19:08:39.891992857-06:00 stderr F I0310 01:08:39.891918 1 config.go:317] "Starting service config controller"
2022-03-09T19:08:39.892073543-06:00 stderr F I0310 01:08:39.892010 1 shared_informer.go:240] Waiting for caches to sync for service config
2022-03-09T19:08:39.993211806-06:00 stderr F I0310 01:08:39.992957 1 shared_informer.go:247] Caches are synced for service config
2022-03-09T19:08:39.993295399-06:00 stderr F I0310 01:08:39.992957 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy-ztn28_kube-system_kube-proxy-ddfb6e6b4932102ed65b2d5ef8614ee734d1e8c74affb65d36013b47cf2250ef.log <==
==> kube-scheduler-k8s-master-1_kube-system_kube-scheduler-4e035976cbca8e9671beffcf5fce2a66a75d593197af6b39ee077ca52f3064a2.log <==
2022-03-09T19:07:56.161012412-06:00 stderr F W0310 01:07:56.160639 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.1.194:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.194:6443: connect: connection refused
2022-03-09T19:07:56.161060672-06:00 stderr F E0310 01:07:56.160828 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.1.194:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.194:6443: connect: connection refused
2022-03-09T19:07:56.1614906-06:00 stderr F E0310 01:07:56.161310 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get "https://192.168.1.194:6443/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.1.194:6443: connect: connection refused
2022-03-09T19:07:56.162984569-06:00 stderr F W0310 01:07:56.162607 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.1.194:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.194:6443: connect: connection refused
2022-03-09T19:07:56.163108514-06:00 stderr F E0310 01:07:56.162890 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.1.194:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.194:6443: connect: connection refused
2022-03-09T19:08:06.391140217-06:00 stderr F E0310 01:08:06.390806 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
2022-03-09T19:08:06.391239754-06:00 stderr F E0310 01:08:06.390921 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
2022-03-09T19:08:06.449270611-06:00 stderr F I0310 01:08:06.448960 1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
2022-03-09T19:08:06.545250055-06:00 stderr F I0310 01:08:06.544937 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2022-03-09T19:08:22.09765121-06:00 stderr F I0310 01:08:22.097410 1 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler
==> kube-scheduler-k8s-master-1_kube-system_kube-scheduler-64deb99a93630005e45293e52b649a783f0445d395d3767e8af50eca6d23d341.log <==
2022-03-09T19:07:34.57525352-06:00 stderr F W0310 01:07:34.575012 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2022-03-09T19:07:34.57826494-06:00 stderr F E0310 01:07:34.575074 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2022-03-09T19:07:40.268319077-06:00 stderr F I0310 01:07:40.266842 1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
2022-03-09T19:07:40.290261577-06:00 stderr F I0310 01:07:40.290060 1 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler
2022-03-09T19:07:40.85941641-06:00 stderr F I0310 01:07:40.859206 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2022-03-09T19:07:43.37033583-06:00 stderr F I0310 01:07:43.369554 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2022-03-09T19:07:43.385480339-06:00 stderr F I0310 01:07:43.385240 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2022-03-09T19:07:43.39947162-06:00 stderr F I0310 01:07:43.398796 1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
2022-03-09T19:07:48.389394259-06:00 stderr F E0310 01:07:48.389114 1 leaderelection.go:306] Failed to release lock: Put "https://192.168.1.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2022-03-09T19:07:48.389519666-06:00 stderr F I0310 01:07:48.389396 1 server.go:204] "Requested to terminate, exiting"
==> kube-controller-manager-k8s-master-1_kube-system_kube-controller-manager-34c1e93a6cb35251ee21712dd8c9ad3a99b038ecf449603601fc29ef924eb8f4.log <==
2022-03-09T19:09:44.181598876-06:00 stderr F I0310 01:09:44.181246 1 serving.go:348] Generated self-signed cert in-memory
2022-03-09T19:09:45.730693519-06:00 stderr F I0310 01:09:45.730459 1 controllermanager.go:196] Version: v1.23.4
2022-03-09T19:09:45.734661832-06:00 stderr F I0310 01:09:45.734368 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
2022-03-09T19:09:45.734779499-06:00 stderr F I0310 01:09:45.734382 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
2022-03-09T19:09:45.73546178-06:00 stderr F I0310 01:09:45.735203 1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
2022-03-09T19:09:45.735544262-06:00 stderr F I0310 01:09:45.735346 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2022-03-09T19:09:45.73612719-06:00 stderr F I0310 01:09:45.735955 1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
==> etcd-k8s-master-1_kube-system_etcd-c119d9f093ca912b2c89edcf938aa7ebc96ad0253dfe74843d6ba3db5f37ce04.log <==
2022-03-09T19:10:11.505485465-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:10:11.505Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"162.976622ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
2022-03-09T19:10:11.506024708-06:00 stderr F {"level":"info","ts":"2022-03-10T01:10:11.505Z","caller":"traceutil/trace.go:171","msg":"trace[237194416] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:613; }","duration":"163.787273ms","start":"2022-03-10T01:10:11.342Z","end":"2022-03-10T01:10:11.505Z","steps":["trace[237194416] 'count revisions from in-memory index tree' (duration: 162.834769ms)"],"step_count":1}
2022-03-09T19:11:58.670207811-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:58.669Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"777.873883ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10863385376057779582 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.1.194\" mod_revision:733 > success:<request_put:<key:\"/registry/masterleases/192.168.1.194\" value_size:68 lease:1640013339203003772 >> failure:<request_range:<key:\"/registry/masterleases/192.168.1.194\" > >>","response":"size:16"}
2022-03-09T19:11:58.670487793-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:58.670Z","caller":"traceutil/trace.go:171","msg":"trace[1685562934] transaction","detail":"{read_only:false; response_revision:744; number_of_response:1; }","duration":"888.473819ms","start":"2022-03-10T01:11:57.781Z","end":"2022-03-10T01:11:58.670Z","steps":["trace[1685562934] 'process raft request' (duration: 109.25443ms)","trace[1685562934] 'compare' (duration: 777.669882ms)"],"step_count":2}
2022-03-09T19:11:58.670941573-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:58.670Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"796.061109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/kube-controller-manager\" ","response":"range_response_count:1 size:511"}
2022-03-09T19:11:58.671118111-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:58.670Z","caller":"traceutil/trace.go:171","msg":"trace[510972765] range","detail":"{range_begin:/registry/leases/kube-system/kube-controller-manager; range_end:; response_count:1; response_revision:744; }","duration":"796.346499ms","start":"2022-03-10T01:11:57.874Z","end":"2022-03-10T01:11:58.670Z","steps":["trace[510972765] 'agreement among raft nodes before linearized reading' (duration: 795.718626ms)"],"step_count":1}
2022-03-09T19:11:58.671289223-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:58.671Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:11:57.874Z","time spent":"796.544963ms","remote":"127.0.0.1:60616","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":1,"response size":534,"request content":"key:\"/registry/leases/kube-system/kube-controller-manager\" "}
2022-03-09T19:11:58.671498594-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:58.670Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:11:57.781Z","time spent":"888.767894ms","remote":"127.0.0.1:60562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.1.194\" mod_revision:733 > success:<request_put:<key:\"/registry/masterleases/192.168.1.194\" value_size:68 lease:1640013339203003772 >> failure:<request_range:<key:\"/registry/masterleases/192.168.1.194\" > >"}
2022-03-09T19:11:58.672065985-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:58.670Z","caller":"traceutil/trace.go:171","msg":"trace[1037396256] linearizableReadLoop","detail":"{readStateIndex:806; appliedIndex:805; }","duration":"795.439458ms","start":"2022-03-10T01:11:57.874Z","end":"2022-03-10T01:11:58.670Z","steps":["trace[1037396256] 'read index received' (duration: 16.183791ms)","trace[1037396256] 'applied index is now lower than readState.Index' (duration: 779.251482ms)"],"step_count":2}
2022-03-09T19:11:58.673527806-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:58.673Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"369.868693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-proxy-ztn28.16dadfb925891bef\" ","response":"range_response_count:1 size:668"}
2022-03-09T19:11:58.673749437-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:58.673Z","caller":"traceutil/trace.go:171","msg":"trace[2026398274] range","detail":"{range_begin:/registry/events/kube-system/kube-proxy-ztn28.16dadfb925891bef; range_end:; response_count:1; response_revision:744; }","duration":"370.218399ms","start":"2022-03-10T01:11:58.303Z","end":"2022-03-10T01:11:58.673Z","steps":["trace[2026398274] 'agreement among raft nodes before linearized reading' (duration: 368.778004ms)"],"step_count":1}
2022-03-09T19:11:58.674052142-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:58.673Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:11:58.303Z","time spent":"370.51901ms","remote":"127.0.0.1:60566","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":691,"request content":"key:\"/registry/events/kube-system/kube-proxy-ztn28.16dadfb925891bef\" "}
2022-03-09T19:11:58.675869631-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:58.675Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"757.058979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/kube-scheduler\" ","response":"range_response_count:1 size:484"}
2022-03-09T19:11:58.676436078-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:58.676Z","caller":"traceutil/trace.go:171","msg":"trace[1682348267] range","detail":"{range_begin:/registry/leases/kube-system/kube-scheduler; range_end:; response_count:1; response_revision:744; }","duration":"760.629124ms","start":"2022-03-10T01:11:57.915Z","end":"2022-03-10T01:11:58.675Z","steps":["trace[1682348267] 'agreement among raft nodes before linearized reading' (duration: 756.98335ms)"],"step_count":1}
2022-03-09T19:11:58.677054896-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:58.676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:11:57.915Z","time spent":"761.292997ms","remote":"127.0.0.1:60616","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":507,"request content":"key:\"/registry/leases/kube-system/kube-scheduler\" "}
2022-03-09T19:11:58.677156044-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:58.676Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"649.693594ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
2022-03-09T19:11:58.677456601-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:58.677Z","caller":"traceutil/trace.go:171","msg":"trace[106996279] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:744; }","duration":"650.142411ms","start":"2022-03-10T01:11:58.026Z","end":"2022-03-10T01:11:58.677Z","steps":["trace[106996279] 'agreement among raft nodes before linearized reading' (duration: 649.556594ms)"],"step_count":1}
==> kube-apiserver-k8s-master-1_kube-system_kube-apiserver-84f96b7b1a19199acd2b1b195a1080857a14700c9457aaefff82c2f2f2bb4d02.log <==
2022-03-09T19:11:58.676563245-06:00 stderr F I0310 01:11:58.676335 1 trace.go:205] Trace[682018162]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2022 01:11:57.777) (total time: 899ms):
2022-03-09T19:11:58.676637431-06:00 stderr F Trace[682018162]: ---"Transaction committed" 895ms (01:11:58.676)
2022-03-09T19:11:58.676659764-06:00 stderr F Trace[682018162]: [899.184402ms] [899.184402ms] END
2022-03-09T19:11:58.677582176-06:00 stderr F I0310 01:11:58.677354 1 trace.go:205] Trace[51369510]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.23.4 (linux/arm64) kubernetes/e6c093d/leader-election,audit-id:e7716151-6c02-46be-9bf2-80a6bc25ad06,client:192.168.1.194,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (10-Mar-2022 01:11:57.873) (total time: 803ms):
2022-03-09T19:11:58.677617824-06:00 stderr F Trace[51369510]: ---"About to write a response" 803ms (01:11:58.677)
2022-03-09T19:11:58.677639731-06:00 stderr F Trace[51369510]: [803.666494ms] [803.666494ms] END
2022-03-09T19:11:58.681064079-06:00 stderr F I0310 01:11:58.680601 1 trace.go:205] Trace[847817771]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.23.4 (linux/arm64) kubernetes/e6c093d/leader-election,audit-id:9ded3d9e-8f0a-4dfb-8bd3-58fe62b9e3ee,client:192.168.1.194,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (10-Mar-2022 01:11:57.913) (total time: 766ms):
2022-03-09T19:11:58.681148987-06:00 stderr F Trace[847817771]: ---"About to write a response" 766ms (01:11:58.680)
2022-03-09T19:11:58.681169246-06:00 stderr F Trace[847817771]: [766.495965ms] [766.495965ms] END
==> etcd-k8s-master-1_kube-system_etcd-c119d9f093ca912b2c89edcf938aa7ebc96ad0253dfe74843d6ba3db5f37ce04.log <==
2022-03-09T19:11:59.195263853-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:59.194Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":10863385376057779585,"retry-timeout":"500ms"}
2022-03-09T19:11:59.527126143-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:59.526Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.208311ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10863385376057779587 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-scheduler\" mod_revision:743 > success:<request_put:<key:\"/registry/leases/kube-system/kube-scheduler\" value_size:418 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-scheduler\" > >>","response":"size:16"}
2022-03-09T19:11:59.527704442-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:59.527Z","caller":"traceutil/trace.go:171","msg":"trace[434967320] transaction","detail":"{read_only:false; response_revision:746; number_of_response:1; }","duration":"824.374693ms","start":"2022-03-10T01:11:58.702Z","end":"2022-03-10T01:11:59.527Z","steps":["trace[434967320] 'process raft request' (duration: 716.665936ms)","trace[434967320] 'compare' (duration: 107.021217ms)"],"step_count":2}
2022-03-09T19:11:59.527947387-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:59.527Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"833.466287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:135"}
2022-03-09T19:11:59.528137258-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:59.527Z","caller":"traceutil/trace.go:171","msg":"trace[270155392] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:747; }","duration":"833.665733ms","start":"2022-03-10T01:11:58.694Z","end":"2022-03-10T01:11:59.527Z","steps":["trace[270155392] 'agreement among raft nodes before linearized reading' (duration: 833.273824ms)"],"step_count":1}
2022-03-09T19:11:59.528189444-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:59.527Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:11:58.702Z","time spent":"824.725547ms","remote":"127.0.0.1:60616","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":469,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/kube-scheduler\" mod_revision:743 > success:<request_put:<key:\"/registry/leases/kube-system/kube-scheduler\" value_size:418 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-scheduler\" > >"}
2022-03-09T19:11:59.528442575-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:59.527Z","caller":"traceutil/trace.go:171","msg":"trace[838696255] linearizableReadLoop","detail":"{readStateIndex:809; appliedIndex:807; }","duration":"833.144341ms","start":"2022-03-10T01:11:58.694Z","end":"2022-03-10T01:11:59.527Z","steps":["trace[838696255] 'read index received' (duration: 725.058286ms)","trace[838696255] 'applied index is now lower than readState.Index' (duration: 108.0835ms)"],"step_count":2}
2022-03-09T19:11:59.529009558-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:59.528Z","caller":"traceutil/trace.go:171","msg":"trace[1227934753] transaction","detail":"{read_only:false; response_revision:747; number_of_response:1; }","duration":"825.742773ms","start":"2022-03-10T01:11:58.702Z","end":"2022-03-10T01:11:59.528Z","steps":["trace[1227934753] 'process raft request' (duration: 824.018562ms)"],"step_count":1}
2022-03-09T19:11:59.529217319-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:59.529Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:11:58.702Z","time spent":"825.91533ms","remote":"127.0.0.1:60616","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":496,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/kube-controller-manager\" mod_revision:742 > success:<request_put:<key:\"/registry/leases/kube-system/kube-controller-manager\" value_size:436 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-controller-manager\" > >"}
2022-03-09T19:11:59.529283448-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:11:59.528Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:11:58.694Z","time spent":"833.795419ms","remote":"127.0.0.1:60562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":158,"request content":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" "}
2022-03-09T19:11:59.629389432-06:00 stderr F {"level":"info","ts":"2022-03-10T01:11:59.629Z","caller":"traceutil/trace.go:171","msg":"trace[898472379] linearizableReadLoop","detail":"{readStateIndex:809; appliedIndex:809; }","duration":"100.417374ms","start":"2022-03-10T01:11:59.528Z","end":"2022-03-10T01:11:59.629Z","steps":["trace[898472379] 'read index received' (duration: 100.402207ms)","trace[898472379] 'applied index is now lower than readState.Index' (duration: 12.926µs)"],"step_count":2}
==> kube-apiserver-k8s-master-1_kube-system_kube-apiserver-84f96b7b1a19199acd2b1b195a1080857a14700c9457aaefff82c2f2f2bb4d02.log <==
2022-03-09T19:11:59.530799122-06:00 stderr F I0310 01:11:59.530591 1 trace.go:205] Trace[160922123]: "GuaranteedUpdate etcd3" type:*coordination.Lease (10-Mar-2022 01:11:58.699) (total time: 831ms):
2022-03-09T19:11:59.530892881-06:00 stderr F Trace[160922123]: ---"Transaction committed" 829ms (01:11:59.530)
2022-03-09T19:11:59.530918011-06:00 stderr F Trace[160922123]: [831.359482ms] [831.359482ms] END
2022-03-09T19:11:59.53096377-06:00 stderr F I0310 01:11:59.530591 1 trace.go:205] Trace[495937520]: "GuaranteedUpdate etcd3" type:*coordination.Lease (10-Mar-2022 01:11:58.700) (total time: 830ms):
2022-03-09T19:11:59.530982919-06:00 stderr F Trace[495937520]: ---"Transaction committed" 829ms (01:11:59.530)
2022-03-09T19:11:59.531061012-06:00 stderr F Trace[495937520]: [830.220903ms] [830.220903ms] END
2022-03-09T19:11:59.531731163-06:00 stderr F I0310 01:11:59.531414 1 trace.go:205] Trace[416857606]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.23.4 (linux/arm64) kubernetes/e6c093d/leader-election,audit-id:9a9a1d8a-eb5b-415a-8a62-55b94adbd54f,client:192.168.1.194,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (10-Mar-2022 01:11:58.699) (total time: 831ms):
2022-03-09T19:11:59.531964978-06:00 stderr F Trace[416857606]: ---"Object stored in database" 830ms (01:11:59.531)
2022-03-09T19:11:59.532004664-06:00 stderr F Trace[416857606]: [831.330538ms] [831.330538ms] END
2022-03-09T19:11:59.532023127-06:00 stderr F I0310 01:11:59.531414 1 trace.go:205] Trace[335003711]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.23.4 (linux/arm64) kubernetes/e6c093d/leader-election,audit-id:dbd6d467-292d-4d1a-a1dd-068f14f8c48c,client:192.168.1.194,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (10-Mar-2022 01:11:58.698) (total time: 833ms):
2022-03-09T19:11:59.532064071-06:00 stderr F Trace[335003711]: ---"Object stored in database" 831ms (01:11:59.530)
2022-03-09T19:11:59.532080627-06:00 stderr F Trace[335003711]: [833.119267ms] [833.119267ms] END
2022-03-09T19:11:59.532592481-06:00 stderr F I0310 01:11:59.532449 1 trace.go:205] Trace[1898123243]: "List etcd3" key:/masterleases/,resourceVersion:0,resourceVersionMatch:NotOlderThan,limit:0,continue: (10-Mar-2022 01:11:58.692) (total time: 839ms):
2022-03-09T19:11:59.53265537-06:00 stderr F Trace[1898123243]: [839.521332ms] [839.521332ms] END
==> etcd-k8s-master-1_kube-system_etcd-c119d9f093ca912b2c89edcf938aa7ebc96ad0253dfe74843d6ba3db5f37ce04.log <==
2022-03-09T19:12:00.224983387-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:12:00.224Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.451358487s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
2022-03-09T19:12:00.225283667-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:00.224Z","caller":"traceutil/trace.go:171","msg":"trace[1019312833] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:747; }","duration":"1.451550468s","start":"2022-03-10T01:11:58.773Z","end":"2022-03-10T01:12:00.224Z","steps":["trace[1019312833] 'agreement among raft nodes before linearized reading' (duration: 856.070958ms)","trace[1019312833] 'range keys from in-memory index tree' (duration: 595.256676ms)"],"step_count":2}
2022-03-09T19:12:00.225457797-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:12:00.224Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"724.400562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 ","response":"range_response_count:1 size:3945"}
2022-03-09T19:12:00.225586909-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:00.225Z","caller":"traceutil/trace.go:171","msg":"trace[1996556273] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:747; }","duration":"724.832582ms","start":"2022-03-10T01:11:59.500Z","end":"2022-03-10T01:12:00.224Z","steps":["trace[1996556273] 'agreement among raft nodes before linearized reading' (duration: 129.025348ms)","trace[1996556273] 'range keys from in-memory index tree' (duration: 595.340047ms)"],"step_count":2}
2022-03-09T19:12:00.225636261-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:12:00.225Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"689.641542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:482"}
2022-03-09T19:12:00.22565715-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:00.225Z","caller":"traceutil/trace.go:171","msg":"trace[1308407249] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:747; }","duration":"689.790154ms","start":"2022-03-10T01:11:59.535Z","end":"2022-03-10T01:12:00.225Z","steps":["trace[1308407249] 'agreement among raft nodes before linearized reading' (duration: 94.09105ms)","trace[1308407249] 'range keys from in-memory index tree' (duration: 595.516399ms)"],"step_count":2}
2022-03-09T19:12:00.225674002-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:12:00.224Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:11:58.773Z","time spent":"1.451711154s","remote":"127.0.0.1:60600","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
2022-03-09T19:12:00.225731446-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:12:00.225Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:11:59.535Z","time spent":"689.908451ms","remote":"127.0.0.1:60620","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":505,"request content":"key:\"/registry/endpointslices/default/kubernetes\" "}
2022-03-09T19:12:00.225791965-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:12:00.225Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:11:59.500Z","time spent":"725.061546ms","remote":"127.0.0.1:60584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":3968,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "}
==> kube-apiserver-k8s-master-1_kube-system_kube-apiserver-84f96b7b1a19199acd2b1b195a1080857a14700c9457aaefff82c2f2f2bb4d02.log <==
2022-03-09T19:12:00.228185549-06:00 stderr F I0310 01:12:00.227976 1 trace.go:205] Trace[1389866858]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (10-Mar-2022 01:11:59.498) (total time: 727ms):
2022-03-09T19:12:00.228281809-06:00 stderr F Trace[1389866858]: [727.851947ms] [727.851947ms] END
2022-03-09T19:12:00.228864534-06:00 stderr F I0310 01:12:00.228649 1 trace.go:205] Trace[1639913485]: "Get" url:/apis/discovery.k8s.io/v1/namespaces/default/endpointslices/kubernetes,user-agent:kube-apiserver/v1.23.4 (linux/arm64) kubernetes/e6c093d,audit-id:2c314b63-feef-40dc-bf28-c1bd30ab9ec0,client:::1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (10-Mar-2022 01:11:59.534) (total time: 694ms):
2022-03-09T19:12:00.228955312-06:00 stderr F Trace[1639913485]: ---"About to write a response" 694ms (01:12:00.228)
2022-03-09T19:12:00.228974534-06:00 stderr F Trace[1639913485]: [694.307156ms] [694.307156ms] END
2022-03-09T19:12:00.230508281-06:00 stderr F I0310 01:12:00.230303 1 trace.go:205] Trace[947509676]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.23.4 (linux/arm64) kubernetes/e6c093d,audit-id:92591e15-c9d3-4be1-a4ff-c5f236dd626f,client:192.168.1.194,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (10-Mar-2022 01:11:59.498) (total time: 731ms):
2022-03-09T19:12:00.230600393-06:00 stderr F Trace[947509676]: ---"Listing from storage done" 729ms (01:12:00.228)
2022-03-09T19:12:00.230620541-06:00 stderr F Trace[947509676]: [731.310073ms] [731.310073ms] END
==> etcd-k8s-master-1_kube-system_etcd-c119d9f093ca912b2c89edcf938aa7ebc96ad0253dfe74843d6ba3db5f37ce04.log <==
2022-03-09T19:12:01.982552832-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:12:01.982Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"270.164193ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10863385376057779597 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-scheduler\" mod_revision:746 > success:<request_put:<key:\"/registry/leases/kube-system/kube-scheduler\" value_size:418 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-scheduler\" > >>","response":"size:16"}
2022-03-09T19:12:01.982733499-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:01.982Z","caller":"traceutil/trace.go:171","msg":"trace[1097713478] linearizableReadLoop","detail":"{readStateIndex:811; appliedIndex:810; }","duration":"209.885713ms","start":"2022-03-10T01:12:01.772Z","end":"2022-03-10T01:12:01.982Z","steps":["trace[1097713478] 'read index received' (duration: 208.515133ms)","trace[1097713478] 'applied index is now lower than readState.Index' (duration: 1.368228ms)"],"step_count":2}
2022-03-09T19:12:01.98277824-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:01.982Z","caller":"traceutil/trace.go:171","msg":"trace[1355655757] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"388.398662ms","start":"2022-03-10T01:12:01.594Z","end":"2022-03-10T01:12:01.982Z","steps":["trace[1355655757] 'process raft request' (duration: 117.79943ms)","trace[1355655757] 'compare' (duration: 269.926747ms)"],"step_count":2}
2022-03-09T19:12:01.983118168-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:12:01.982Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"210.099677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
2022-03-09T19:12:01.98331952-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:01.982Z","caller":"traceutil/trace.go:171","msg":"trace[1868452977] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:749; }","duration":"210.392752ms","start":"2022-03-10T01:12:01.772Z","end":"2022-03-10T01:12:01.982Z","steps":["trace[1868452977] 'agreement among raft nodes before linearized reading' (duration: 210.091269ms)"],"step_count":1}
2022-03-09T19:12:01.983371261-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:12:01.983Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-10T01:12:01.594Z","time spent":"388.630551ms","remote":"127.0.0.1:60616","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":469,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/kube-scheduler\" mod_revision:746 > success:<request_put:<key:\"/registry/leases/kube-system/kube-scheduler\" value_size:418 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-scheduler\" > >"}
2022-03-09T19:12:01.983947875-06:00 stderr F {"level":"warn","ts":"2022-03-10T01:12:01.983Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"161.933285ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
2022-03-09T19:12:01.985004287-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:01.984Z","caller":"traceutil/trace.go:171","msg":"trace[177241800] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:749; }","duration":"162.201119ms","start":"2022-03-10T01:12:01.821Z","end":"2022-03-10T01:12:01.983Z","steps":["trace[177241800] 'agreement among raft nodes before linearized reading' (duration: 161.22793ms)"],"step_count":1}
==> kube-scheduler-k8s-master-1_kube-system_kube-scheduler-4e035976cbca8e9671beffcf5fce2a66a75d593197af6b39ee077ca52f3064a2.log <==
2022-03-09T19:12:10.250944371-06:00 stderr F I0310 01:12:10.250685 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2022-03-09T19:12:10.252209395-06:00 stderr F I0310 01:12:10.252002 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2022-03-09T19:12:10.268287742-06:00 stderr F I0310 01:12:10.268059 1 server.go:204] "Requested to terminate, exiting"
==> kube-apiserver-k8s-master-1_kube-system_kube-apiserver-84f96b7b1a19199acd2b1b195a1080857a14700c9457aaefff82c2f2f2bb4d02.log <==
2022-03-09T19:12:22.254947639-06:00 stderr F I0310 01:12:22.249949 1 controller.go:186] Shutting down kubernetes service endpoint reconciler
2022-03-09T19:12:22.255104306-06:00 stderr F I0310 01:12:22.249990 1 object_count_tracker.go:84] "StorageObjectCountTracker pruner is exiting"
2022-03-09T19:12:22.255125251-06:00 stderr F I0310 01:12:22.250826 1 storage_flowcontrol.go:150] APF bootstrap ensurer is exiting
2022-03-09T19:12:22.255147103-06:00 stderr F I0310 01:12:22.250873 1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
2022-03-09T19:12:22.255219807-06:00 stderr F I0310 01:12:22.251697 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
2022-03-09T19:12:22.255239473-06:00 stderr F I0310 01:12:22.251757 1 available_controller.go:503] Shutting down AvailableConditionController
2022-03-09T19:12:22.255255455-06:00 stderr F I0310 01:12:22.251795 1 customresource_discovery_controller.go:245] Shutting down DiscoveryController
2022-03-09T19:12:22.255271492-06:00 stderr F I0310 01:12:22.251882 1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
2022-03-09T19:12:22.255290918-06:00 stderr F I0310 01:12:22.251929 1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
2022-03-09T19:12:22.25532764-06:00 stderr F I0310 01:12:22.251958 1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
2022-03-09T19:12:22.255344789-06:00 stderr F I0310 01:12:22.251987 1 establishing_controller.go:87] Shutting down EstablishingController
2022-03-09T19:12:22.255360807-06:00 stderr F I0310 01:12:22.252016 1 naming_controller.go:302] Shutting down NamingConditionController
2022-03-09T19:12:22.255376437-06:00 stderr F I0310 01:12:22.252046 1 controller.go:122] Shutting down OpenAPI controller
2022-03-09T19:12:22.255394993-06:00 stderr F I0310 01:12:22.252074 1 autoregister_controller.go:165] Shutting down autoregister controller
2022-03-09T19:12:22.255411215-06:00 stderr F I0310 01:12:22.252107 1 crd_finalizer.go:278] Shutting down CRDFinalizer
2022-03-09T19:12:22.255427048-06:00 stderr F I0310 01:12:22.253237 1 apf_controller.go:326] Shutting down API Priority and Fairness config worker
2022-03-09T19:12:22.255442919-06:00 stderr F I0310 01:12:22.253592 1 dynamic_cafile_content.go:170] "Shutting down controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
2022-03-09T19:12:22.255462993-06:00 stderr F I0310 01:12:22.253911 1 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
2022-03-09T19:12:22.255478586-06:00 stderr F I0310 01:12:22.254391 1 controller.go:89] Shutting down OpenAPI AggregationController
2022-03-09T19:12:22.255493975-06:00 stderr F I0310 01:12:22.254560 1 dynamic_serving_content.go:145] "Shutting down controller" name="aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key"
2022-03-09T19:12:22.255522938-06:00 stderr F I0310 01:12:22.254610 1 dynamic_cafile_content.go:170] "Shutting down controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
2022-03-09T19:12:22.257231834-06:00 stderr F I0310 01:12:22.257072 1 secure_serving.go:311] Stopped listening on [::]:6443
2022-03-09T19:12:22.257355668-06:00 stderr F I0310 01:12:22.257239 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2022-03-09T19:12:22.258068782-06:00 stderr F I0310 01:12:22.257391 1 dynamic_serving_content.go:145] "Shutting down controller" name="serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key"
2022-03-09T19:12:22.258153912-06:00 stderr F I0310 01:12:22.257511 1 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
2022-03-09T19:12:22.265613055-06:00 stderr F E0310 01:12:22.265367 1 controller.go:189] Unable to remove endpoints from kubernetes service: Get "https://[::1]:6443/api/v1/namespaces/default/endpoints/kubernetes": dial tcp [::1]:6443: connect: connection refused
==> etcd-k8s-master-1_kube-system_etcd-c119d9f093ca912b2c89edcf938aa7ebc96ad0253dfe74843d6ba3db5f37ce04.log <==
2022-03-09T19:12:23.250834937-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:23.250Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
2022-03-09T19:12:23.250962438-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:23.250Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"k8s-master-1","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.194:2380"],"advertise-client-urls":["https://192.168.1.194:2379"]}
2022-03-09T19:12:23.26394429-06:00 stderr F WARNING: 2022/03/10 01:12:23 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2022-03-09T19:12:23.331486912-06:00 stderr F WARNING: 2022/03/10 01:12:23 [core] grpc: addrConn.createTransport failed to connect to {192.168.1.194:2379 192.168.1.194:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.1.194:2379: connect: connection refused". Reconnecting...
2022-03-09T19:12:23.331992655-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:23.331Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6c09b36eb34b96c2","current-leader-member-id":"6c09b36eb34b96c2"}
2022-03-09T19:12:23.336129358-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:23.335Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.1.194:2380"}
2022-03-09T19:12:23.337548068-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:23.337Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.1.194:2380"}
2022-03-09T19:12:23.337599679-06:00 stderr F {"level":"info","ts":"2022-03-10T01:12:23.337Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"k8s-master-1","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.194:2380"],"advertise-client-urls":["https://192.168.1.194:2379"]}