
@wviana
Created June 11, 2022 00:38
K3s not starting up
systemd[1]: Started Lightweight Kubernetes.
s[1859051]: I0607 1859051 tlsconfig.go:240] "Starting DynamicServingCertificateController
s[1859051]: I0607 1859051 autoregister_controller.go:141] Starting autoregister controller
s[1859051]: I0607 1859051 controller.go:83] Starting OpenAPI AggregationController
s[1859051]: I0607 1859051 apf_controller.go:317] Starting API Priority and Fairness config controller
s[1859051]: I0607 1859051 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
s[1859051]: I0607 1859051 cache.go:32] Waiting for caches to sync for autoregister controller
s[1859051]: I0607 1859051 apiservice_controller.go:97] Starting APIServiceRegistrationController
s[1859051]: I0607 1859051 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
s[1859051]: I0607 1859051 available_controller.go:491] Starting AvailableConditionController
s[1859051]: I0607 1859051 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
s[1859051]: I0607 1859051 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
s[1859051]: I0607 1859051 customresource_discovery_controller.go:209] Starting DiscoveryController
s[1859051]: I0607 1859051 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
s[1859051]: I0607 1859051 crdregistration_controller.go:111] Starting crd-autoregister controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
s[1859051]: I0607 1859051 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
s[1859051]: I0607 1859051 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
s[1859051]: I0607 1859051 controller.go:85] Starting OpenAPI controller
s[1859051]: I0607 1859051 naming_controller.go:291] Starting NamingConditionController
s[1859051]: I0607 1859051 establishing_controller.go:76] Starting EstablishingController
s[1859051]: I0607 1859051 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
s[1859051]: I0607 1859051 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
s[1859051]: I0607 1859051 crd_finalizer.go:266] Starting CRDFinalizer
s[1859051]: I0607 1859051 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for node_authorizer
s[1859051]: I0607 1859051 apf_controller.go:322] Running API Priority and Fairness config worker
s[1859051]: I0607 1859051 cache.go:39] Caches are synced for APIServiceRegistrationController controller
s[1859051]: I0607 1859051 cache.go:39] Caches are synced for AvailableConditionController controller
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for crd-autoregister
s[1859051]: I0607 1859051 cache.go:39] Caches are synced for autoregister controller
s[1859051]: E0607 1859051 controller.go:161] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
s[1859051]: I0607 1859051 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
s[1859051]: I0607 1859051 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
s[1859051]: I0607 1859051 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
s[1859051]: W0607 1859051 handler_proxy.go:104] no RequestInfo found in the context
s[1859051]: E0607 1859051 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
s[1859051]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
s[1859051]: I0607 1859051 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Kube API server is now running
s[1859051]: k3s is up and running
s[1859051]: Waiting for cloud-controller-manager privileges to become available
s[1859051]: Applying CRD addons.k3s.cattle.io
s[1859051]: Applying CRD helmcharts.helm.cattle.io
s[1859051]: Applying CRD helmchartconfigs.helm.cattle.io
s[1859051]: I0607 1859051 serving.go:355] Generated self-signed cert in-memory
s[1859051]: Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-10.19.300.tgz
s[1859051]: Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-10.19.300.tgz
s[1859051]: I0607 1859051 serving.go:355] Generated self-signed cert in-memory
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml
s[1859051]: Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml
s[1859051]: E0607 1859051 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: E0607 1859051 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: Starting k3s.cattle.io/v1, Kind=Addon controller
s[1859051]: Creating deploy event broadcaster
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"ccm", UID:"151fea2b-fe0b-44c4-a613-49a291ca7392", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"267", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml"
s[1859051]: E0607 1859051 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: E0607 1859051 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: I0607 1859051 controllermanager.go:144] Version: v1.23.6+k3s1
s[1859051]: I0607 1859051 controllermanager.go:196] Version: v1.23.6+k3s1
s[1859051]: I0607 1859051 secure_serving.go:200] Serving securely on 127.0.0.1:10258
s[1859051]: I0607 1859051 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
s[1859051]: I0607 1859051 tlsconfig.go:240] "Starting DynamicServingCertificateController
s[1859051]: I0607 1859051 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
s[1859051]: I0607 1859051 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
s[1859051]: I0607 1859051 secure_serving.go:200] Serving securely on 127.0.0.1:10257
s[1859051]: I0607 1859051 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
s[1859051]: I0607 1859051 tlsconfig.go:240] "Starting DynamicServingCertificateController
s[1859051]: I0607 1859051 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
s[1859051]: I0607 1859051 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
s[1859051]: E0607 1859051 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: E0607 1859051 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: Creating svccontroller event broadcaster
s[1859051]: Starting /v1, Kind=Secret controller
s[1859051]: I0607 1859051 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
s[1859051]: Updating TLS secret for k3s-serving (count: 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-91.148.141.102:91.148.141.102 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-<my-hostname>:<my-hostname> listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=3F82635DECCB55B08447046F68CFD763A5853D39]
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: Cluster dns configmap already exists
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"ccm", UID:"151fea2b-fe0b-44c4-a613-49a291ca7392", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"267", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml"
s[1859051]: I0607 1859051 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"coredns", UID:"fcecf86f-92ac-477a-a830-912080a90743", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"282", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml"
s[1859051]: Starting helm.cattle.io/v1, Kind=HelmChart controller
s[1859051]: Starting helm.cattle.io/v1, Kind=HelmChartConfig controller
s[1859051]: Event(v1.ObjectReference{Kind:"HelmChart", Namespace:"kube-system", Name:"traefik", UID:"9ba6c795-0d9d-49a3-95d0-8b33a6e54192", APIVersion:"helm.cattle.io/v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik
s[1859051]: Event(v1.ObjectReference{Kind:"HelmChart", Namespace:"kube-system", Name:"traefik-crd", UID:"49e09f52-a1b5-4d23-b106-8ccdcce3726b", APIVersion:"helm.cattle.io/v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd
s[1859051]: I0607 1859051 controller.go:611] quota admission added evaluator for: deployments.apps
s[1859051]: E0607 1859051 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: E0607 1859051 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"coredns", UID:"fcecf86f-92ac-477a-a830-912080a90743", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"282", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml"
s[1859051]: Starting apps/v1, Kind=DaemonSet controller
s[1859051]: Starting apps/v1, Kind=Deployment controller
s[1859051]: I0607 1859051 controller.go:611] quota admission added evaluator for: helmcharts.helm.cattle.io
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"local-storage", UID:"f7dfbb9f-3dbc-4217-8339-c6e1643dbfa4", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"299", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml"
s[1859051]: E0607 1859051 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: E0607 1859051 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller
s[1859051]: E0607 1859051 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: E0607 1859051 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: Starting batch/v1, Kind=Job controller
s[1859051]: Starting /v1, Kind=Pod controller
s[1859051]: Starting /v1, Kind=Service controller
s[1859051]: Starting /v1, Kind=Endpoints controller
s[1859051]: Starting /v1, Kind=Node controller
s[1859051]: Starting /v1, Kind=ConfigMap controller
s[1859051]: Starting /v1, Kind=ServiceAccount controller
s[1859051]: Event(v1.ObjectReference{Kind:"HelmChart", Namespace:"kube-system", Name:"traefik-crd", UID:"49e09f52-a1b5-4d23-b106-8ccdcce3726b", APIVersion:"helm.cattle.io/v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd
s[1859051]: Event(v1.ObjectReference{Kind:"HelmChart", Namespace:"kube-system", Name:"traefik", UID:"9ba6c795-0d9d-49a3-95d0-8b33a6e54192", APIVersion:"helm.cattle.io/v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"local-storage", UID:"f7dfbb9f-3dbc-4217-8339-c6e1643dbfa4", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"299", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"aggregated-metrics-reader", UID:"bab3838b-0584-4f46-9d5d-29cf62736ddd", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"306", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"aggregated-metrics-reader", UID:"bab3838b-0584-4f46-9d5d-29cf62736ddd", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"306", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"auth-delegator", UID:"65e4ff93-68e7-48ea-b6a1-42e8065abf45", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"314", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"auth-delegator", UID:"65e4ff93-68e7-48ea-b6a1-42e8065abf45", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"314", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"auth-reader", UID:"31e4d54f-d02d-4086-8834-7e51333c255a", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"319", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"auth-reader", UID:"31e4d54f-d02d-4086-8834-7e51333c255a", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"319", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-apiservice", UID:"5999e507-f8bb-4e22-b338-4e7d330d2f2b", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-apiservice", UID:"5999e507-f8bb-4e22-b338-4e7d330d2f2b", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-server-deployment", UID:"bcbea840-fb6f-4f30-93c3-751be39b8df7", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"338", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-server-deployment", UID:"bcbea840-fb6f-4f30-93c3-751be39b8df7", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"338", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-server-service", UID:"df0dc5f7-d24e-4d43-9dee-75b6b2761f15", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-server-service", UID:"df0dc5f7-d24e-4d43-9dee-75b6b2761f15", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"resource-reader", UID:"386812cb-32ae-41d2-9cca-8f704ce733ba", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"resource-reader", UID:"386812cb-32ae-41d2-9cca-8f704ce733ba", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"rolebindings", UID:"e044e718-4097-409e-8650-55fe324eec9f", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"375", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"rolebindings", UID:"e044e718-4097-409e-8650-55fe324eec9f", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"375", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"traefik", UID:"2d78743a-499f-4ead-9a25-fda57575550c", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/traefik.yaml"
s[1859051]: Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"traefik", UID:"2d78743a-499f-4ead-9a25-fda57575550c", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/traefik.yaml"
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: I0607 1859051 request.go:665] Waited for 1.000071968s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6444/apis/apiextensions.k8s.io/v1
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: E0607 1859051 controllermanager.go:479] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for tokens
s[1859051]: E0607 1859051 controllermanager.go:470] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: I0607 1859051 node_controller.go:116] Sending events to api server.
s[1859051]: I0607 1859051 controllermanager.go:298] Started "cloud-node
s[1859051]: I0607 1859051 node_lifecycle_controller.go:77] Sending events to api server
s[1859051]: I0607 1859051 controllermanager.go:298] Started "cloud-node-lifecycle
s[1859051]: I0607 1859051 node_controller.go:155] Waiting for informer caches to sync
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for tokens
s[1859051]: I0607 1859051 controllermanager.go:605] Started "horizontalpodautoscaling
s[1859051]: I0607 1859051 horizontal.go:168] Starting HPA controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for HPA
s[1859051]: I0607 1859051 controllermanager.go:605] Started "ttl
s[1859051]: W0607 1859051 controllermanager.go:570] "tokencleaner" is disabled
s[1859051]: W0607 1859051 controllermanager.go:570] "route" is disabled
s[1859051]: I0607 1859051 ttl_controller.go:121] Starting TTL controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for TTL
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: I0607 1859051 controllermanager.go:605] Started "endpointslice
s[1859051]: I0607 1859051 endpointslice_controller.go:257] Starting endpoint slice controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
s[1859051]: I0607 1859051 controllermanager.go:605] Started "endpointslicemirroring
s[1859051]: I0607 1859051 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: I0607 1859051 controllermanager.go:605] Started "csrapproving
s[1859051]: I0607 1859051 certificate_controller.go:118] Starting certificate controller "csrapproving
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
s[1859051]: I0607 1859051 node_lifecycle_controller.go:377] Sending events to api server.
s[1859051]: I0607 1859051 taint_manager.go:163] "Sending events to api server
s[1859051]: I0607 1859051 node_lifecycle_controller.go:505] Controller will reconcile labels.
s[1859051]: I0607 1859051 controllermanager.go:605] Started "nodelifecycle
s[1859051]: I0607 1859051 node_lifecycle_controller.go:539] Starting node controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for taint
s[1859051]: E0607 1859051 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: I0607 1859051 controllermanager.go:605] Started "namespace
s[1859051]: I0607 1859051 namespace_controller.go:200] Starting namespace controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for namespace
s[1859051]: I0607 1859051 controllermanager.go:605] Started "job
s[1859051]: W0607 1859051 controllermanager.go:570] "service" is disabled
s[1859051]: I0607 1859051 job_controller.go:184] Starting job controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for job
s[1859051]: I0607 1859051 controllermanager.go:605] Started "persistentvolume-binder
s[1859051]: I0607 1859051 pv_controller_base.go:310] Starting persistent volume controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for persistent volume
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: I0607 1859051 controllermanager.go:605] Started "clusterrole-aggregation
s[1859051]: I0607 1859051 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
s[1859051]: I0607 1859051 controllermanager.go:605] Started "pvc-protection
s[1859051]: I0607 1859051 pvc_protection_controller.go:103] "Starting PVC protection controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for PVC protection
s[1859051]: I0607 1859051 controllermanager.go:605] Started "pv-protection
s[1859051]: I0607 1859051 pv_protection_controller.go:79] Starting PV protection controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for PV protection
s[1859051]: I0607 1859051 controllermanager.go:605] Started "replicationcontroller
s[1859051]: I0607 1859051 replica_set.go:186] Starting replicationcontroller controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for ReplicationController
s[1859051]: I0607 1859051 controllermanager.go:605] Started "deployment
s[1859051]: W0607 1859051 controllermanager.go:570] "bootstrapsigner" is disabled
s[1859051]: I0607 1859051 deployment_controller.go:153] "Starting controller" controller="deployment
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for deployment
s[1859051]: I0607 1859051 node_ipam_controller.go:91] Sending events to api server.
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: no such device
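The "failed to mount overlay: no such device" above is the actual blocker: the kernel cannot mount overlayfs, which usually means the overlay module is not loaded. A minimal sketch of a fix, assuming a systemd host where the module exists but simply isn't loaded — check with `lsmod | grep overlay`, load it once with `sudo modprobe overlay`, and persist it across reboots with a modules-load fragment (the file name here is my choice):

```
# /etc/modules-load.d/overlay.conf  (hypothetical file name)
# Loads the overlay kernel module at boot so containerd's
# overlayfs snapshotter can mount its layers.
overlay
```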
s[1859051]: I0607 1859051 range_allocator.go:83] Sending events to api server.
s[1859051]: I0607 1859051 range_allocator.go:111] No Service CIDR provided. Skipping filtering out service addresses.
s[1859051]: I0607 1859051 range_allocator.go:117] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
s[1859051]: I0607 1859051 controllermanager.go:605] Started "nodeipam"
s[1859051]: W0607 1859051 controllermanager.go:570] "cloud-node-lifecycle" is disabled
s[1859051]: I0607 1859051 node_ipam_controller.go:154] Starting ipam controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for node
s[1859051]: I0607 1859051 controllermanager.go:605] Started "attachdetach"
s[1859051]: I0607 1859051 attach_detach_controller.go:328] Starting attach detach controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for attach detach
s[1859051]: I0607 1859051 controllermanager.go:605] Started "ephemeral-volume"
s[1859051]: I0607 1859051 controller.go:170] Starting ephemeral volume controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for ephemeral
s[1859051]: E0607 1859051 resource_quota_controller.go:162] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
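The metrics.k8s.io/v1beta1 failures here (and repeated below) are likely secondary: metrics-server runs as a pod, and no pod can run until the node registers, which the snapshotter error above is blocking. When skimming a journal this size, the error-level (`E`-prefixed) klog lines are the ones worth pulling out first; a small filter sketch, assuming the same `s[pid]: Lmmdd ...` line shape as this dump (the heredoc stands in for real `journalctl -u k3s --no-pager` output):

```shell
# Keep only error-level (E...) klog lines from a k3s journal dump.
grep -E '^\S+\[[0-9]+\]: E[0-9]{4}' <<'EOF'
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for job
s[1859051]: E0607 1859051 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1
EOF
```

Only the second sample line survives the filter; warnings (`W`) and info (`I`) lines are dropped.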
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for deployments.apps
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for daemonsets.apps
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for limitranges
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for serviceaccounts
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for statefulsets.apps
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for replicasets.apps
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for controllerrevisions.apps
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpoints
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for jobs.batch
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for events.events.k8s.io
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for cronjobs.batch
s[1859051]: I0607 1859051 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for podtemplates
s[1859051]: I0607 1859051 controllermanager.go:605] Started "resourcequota"
s[1859051]: I0607 1859051 resource_quota_controller.go:273] Starting resource quota controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for resource quota
s[1859051]: I0607 1859051 resource_quota_monitor.go:308] QuotaMonitor running
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: E0607 1859051 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: I0607 1859051 controllermanager.go:605] Started "garbagecollector"
s[1859051]: I0607 1859051 garbagecollector.go:146] Starting garbage collector controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for garbage collector
s[1859051]: I0607 1859051 graph_builder.go:289] GraphBuilder running
s[1859051]: I0607 1859051 controllermanager.go:605] Started "disruption"
s[1859051]: I0607 1859051 disruption.go:363] Starting disruption controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for disruption
s[1859051]: I0607 1859051 controllermanager.go:605] Started "csrcleaner"
s[1859051]: I0607 1859051 cleaner.go:82] Starting CSR cleaner controller
s[1859051]: I0607 1859051 controllermanager.go:605] Started "endpoint"
s[1859051]: I0607 1859051 endpoints_controller.go:193] Starting endpoint controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for endpoint
s[1859051]: I0607 1859051 controllermanager.go:605] Started "serviceaccount"
s[1859051]: I0607 1859051 serviceaccounts_controller.go:117] Starting service account controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for service account
s[1859051]: I0607 1859051 controllermanager.go:605] Started "statefulset"
s[1859051]: I0607 1859051 stateful_set.go:147] Starting stateful set controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for stateful set
s[1859051]: W0607 1859051 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
s[1859051]: I0607 1859051 controllermanager.go:605] Started "csrsigning"
s[1859051]: I0607 1859051 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
s[1859051]: I0607 1859051 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
s[1859051]: I0607 1859051 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
s[1859051]: I0607 1859051 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
s[1859051]: I0607 1859051 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
s[1859051]: I0607 1859051 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
s[1859051]: I0607 1859051 controllermanager.go:605] Started "ttl-after-finished"
s[1859051]: I0607 1859051 ttlafterfinished_controller.go:109] Starting TTL after finished controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for TTL after finished
s[1859051]: I0607 1859051 controllermanager.go:605] Started "podgc"
s[1859051]: I0607 1859051 gc_controller.go:89] Starting GC controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for GC
s[1859051]: I0607 1859051 controllermanager.go:605] Started "replicaset"
s[1859051]: I0607 1859051 replica_set.go:186] Starting replicaset controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
s[1859051]: I0607 1859051 controllermanager.go:605] Started "persistentvolume-expander"
s[1859051]: I0607 1859051 expand_controller.go:342] Starting expand controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for expand
s[1859051]: I0607 1859051 controllermanager.go:605] Started "root-ca-cert-publisher"
s[1859051]: I0607 1859051 publisher.go:107] Starting root CA certificate configmap publisher
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for crt configmap
s[1859051]: I0607 1859051 controllermanager.go:605] Started "daemonset"
s[1859051]: I0607 1859051 daemon_controller.go:284] Starting daemon sets controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for daemon sets
s[1859051]: I0607 1859051 controllermanager.go:605] Started "cronjob"
s[1859051]: I0607 1859051 cronjob_controllerv2.go:132] "Starting cronjob controller v2"
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for cronjob
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for resource quota
s[1859051]: I0607 1859051 job_controller.go:497] enqueueing job kube-system/helm-install-traefik-crd
s[1859051]: I0607 1859051 job_controller.go:497] enqueueing job kube-system/helm-install-traefik
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for TTL after finished
s[1859051]: I0607 1859051 job_controller.go:497] enqueueing job kube-system/helm-install-traefik-crd
s[1859051]: I0607 1859051 job_controller.go:497] enqueueing job kube-system/helm-install-traefik
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for HPA
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for cronjob
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for stateful set
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for endpoint
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for endpoint_slice
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for node
s[1859051]: I0607 1859051 range_allocator.go:173] Starting range CIDR allocator
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for cidrallocator
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for cidrallocator
s[1859051]: E0607 1859051 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for job
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for resource quota
s[1859051]: E0607 1859051 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for GC
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for persistent volume
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for garbage collector
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for attach detach
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for ReplicaSet
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for TTL
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for expand
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for crt configmap
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for PVC protection
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for PV protection
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for daemon sets
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for ReplicationController
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for resource quota
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for ephemeral
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for disruption
s[1859051]: I0607 1859051 disruption.go:371] Sending events to api server.
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for certificate-csrapproving
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for taint
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for deployment
s[1859051]: I0607 1859051 taint_manager.go:187] "Starting NoExecuteTaintManager"
s[1859051]: I0607 1859051 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for namespace
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for service account
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for garbage collector
s[1859051]: I0607 1859051 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
s[1859051]: I0607 1859051 shared_informer.go:247] Caches are synced for garbage collector
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: E0607 1859051 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
s[1859051]: W0607 1859051 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: W0607 1859051 handler_proxy.go:104] no RequestInfo found in the context
s[1859051]: E0607 1859051 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
s[1859051]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
s[1859051]: I0607 1859051 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
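If the overlay kernel module cannot be provided (unusual kernel, restricted VM), the log's own suggestion is to switch snapshotters. A sketch of that, assuming the standard k3s config location; `fuse-overlayfs` requires the fuse-overlayfs package on the host, while `native` works everywhere but copies layers instead of overlaying them:

```yaml
# /etc/rancher/k3s/config.yaml -- sketch; equivalent to passing
# --snapshotter=fuse-overlayfs on the k3s server command line.
# Restart the service afterwards: sudo systemctl restart k3s
snapshotter: fuse-overlayfs
```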