k3s-logs
Gist by @harryzcy, created May 30, 2024 17:14

INFO[0000] Starting k3s v1.28.10+k3s1 (a4c5612e)
INFO[0000] Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s
INFO[0000] Configuring database table schema and indexes, this may take a moment...
INFO[0000] Database tables and indexes are up to date
INFO[0000] Kine available at unix://kine.sock
INFO[0000] Reconciling bootstrap data between datastore and disk
INFO[0000] certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1670897387: notBefore=2022-12-13 02:09:47 +0000 UTC notAfter=2025-05-30 16:44:38 +0000 UTC
INFO[0001] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
INFO[0001] Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259
INFO[0001] Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
INFO[0001] Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --feature-gates=CloudDualStackNodeIPs=true --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false
I0530 12:44:38.783490 32845 options.go:220] external host was not specified, using 192.168.1.13
INFO[0001] Server node token is available at /var/lib/rancher/k3s/server/token
INFO[0001] To join server node to cluster: k3s server -s https://192.168.1.13:6443 -t ${SERVER_NODE_TOKEN}
I0530 12:44:38.787475 32845 server.go:156] Version: v1.28.10+k3s1
INFO[0001] Agent node token is available at /var/lib/rancher/k3s/server/agent-token
I0530 12:44:38.788324 32845 server.go:158] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
INFO[0001] To join agent node to cluster: k3s agent -s https://192.168.1.13:6443 -t ${AGENT_NODE_TOKEN}
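The two "To join ..." lines above are printed by k3s itself; the token placeholders refer to the token files logged just before them. A minimal sketch of joining a new agent node, assuming shell access on both machines (the <token-from-above> placeholder is illustrative, not from the log):

    # on the server (192.168.1.13): print the agent join token
    sudo cat /var/lib/rancher/k3s/server/agent-token
    # on the machine being added: join the cluster as an agent
    sudo k3s agent -s https://192.168.1.13:6443 -t <token-from-above>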
INFO[0001] Waiting for API server to become available
INFO[0001] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
INFO[0001] Run: k3s-arm64 kubectl
I0530 12:44:38.856793 32845 shared_informer.go:311] Waiting for caches to sync for node_authorizer
I0530 12:44:38.890440 32845 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0530 12:44:38.890557 32845 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
I0530 12:44:38.892256 32845 instance.go:298] Using reconciler: lease
I0530 12:44:38.913233 32845 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
W0530 12:44:38.913330 32845 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0530 12:44:39.433340 32845 handler.go:275] Adding GroupVersion v1 to ResourceManager
I0530 12:44:39.434137 32845 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
I0530 12:44:39.837602 32845 trace.go:236] Trace[133458058]: "List(recursive=true) etcd3" audit-id:,key:/pods,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (30-May-2024 12:44:39.153) (total time: 683ms):
Trace[133458058]: [683.679182ms] [683.679182ms] END
I0530 12:44:40.084077 32845 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
I0530 12:44:40.271333 32845 trace.go:236] Trace[1023039167]: "List(recursive=true) etcd3" audit-id:,key:/clusterroles,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (30-May-2024 12:44:39.724) (total time: 546ms):
Trace[1023039167]: [546.322668ms] [546.322668ms] END
I0530 12:44:40.329431 32845 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
W0530 12:44:40.331680 32845 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0530 12:44:40.345218 32845 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
I0530 12:44:40.349496 32845 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
W0530 12:44:40.349615 32845 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
I0530 12:44:40.359630 32845 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
I0530 12:44:40.363186 32845 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
W0530 12:44:40.369994 32845 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
W0530 12:44:40.370049 32845 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
I0530 12:44:40.378223 32845 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
W0530 12:44:40.378307 32845 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
I0530 12:44:40.393882 32845 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
W0530 12:44:40.395546 32845 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0530 12:44:40.397433 32845 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
I0530 12:44:40.405012 32845 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
W0530 12:44:40.406849 32845 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0530 12:44:40.408556 32845 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
I0530 12:44:40.413559 32845 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
I0530 12:44:40.425954 32845 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager
W0530 12:44:40.433729 32845 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0530 12:44:40.435430 32845 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
I0530 12:44:40.438847 32845 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager
W0530 12:44:40.443068 32845 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0530 12:44:40.444662 32845 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0530 12:44:40.449860 32845 handler.go:275] Adding GroupVersion policy v1 to ResourceManager
W0530 12:44:40.455355 32845 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
I0530 12:44:40.461257 32845 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
W0530 12:44:40.461391 32845 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0530 12:44:40.461431 32845 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0530 12:44:40.462913 32845 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
W0530 12:44:40.463003 32845 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0530 12:44:40.463036 32845 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0530 12:44:40.493268 32845 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager
W0530 12:44:40.495565 32845 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
W0530 12:44:40.498173 32845 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0530 12:44:40.516078 32845 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
I0530 12:44:40.521986 32845 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
W0530 12:44:40.522100 32845 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
W0530 12:44:40.522138 32845 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
I0530 12:44:40.549147 32845 trace.go:236] Trace[1592969981]: "List(recursive=true) etcd3" audit-id:,key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (30-May-2024 12:44:38.908) (total time: 1640ms):
Trace[1592969981]: [1.640523579s] [1.640523579s] END
I0530 12:44:40.571008 32845 handler.go:275] Adding GroupVersion apps v1 to ResourceManager
W0530 12:44:40.571105 32845 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
W0530 12:44:40.571137 32845 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
I0530 12:44:40.575069 32845 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
W0530 12:44:40.575197 32845 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0530 12:44:40.575414 32845 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0530 12:44:40.581296 32845 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager
W0530 12:44:40.581657 32845 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
I0530 12:44:40.604856 32845 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
W0530 12:44:40.605300 32845 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0530 12:44:40.633058 32845 trace.go:236] Trace[2110204974]: "List(recursive=true) etcd3" audit-id:,key:/replicasets,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (30-May-2024 12:44:39.978) (total time: 654ms):
Trace[2110204974]: [654.568691ms] [654.568691ms] END
I0530 12:44:42.026895 32845 trace.go:236] Trace[1886900666]: "List(recursive=true) etcd3" audit-id:,key:/secrets,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (30-May-2024 12:44:38.939) (total time: 3086ms):
Trace[1886900666]: [3.086871629s] [3.086871629s] END
INFO[0004] Password verified locally for node harryzcy-3
INFO[0004] certificate CN=harryzcy-3 signed by CN=k3s-server-ca@1670897387: notBefore=2022-12-13 02:09:47 +0000 UTC notAfter=2025-05-30 16:44:42 +0000 UTC
I0530 12:44:42.435156 32845 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0530 12:44:42.435217 32845 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0530 12:44:42.436537 32845 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
I0530 12:44:42.437989 32845 secure_serving.go:213] Serving securely on 127.0.0.1:6444
I0530 12:44:42.438863 32845 controller.go:116] Starting legacy_token_tracking_controller
I0530 12:44:42.439313 32845 shared_informer.go:311] Waiting for caches to sync for configmaps
I0530 12:44:42.439594 32845 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0530 12:44:42.439799 32845 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0530 12:44:42.440114 32845 available_controller.go:423] Starting AvailableConditionController
I0530 12:44:42.440393 32845 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0530 12:44:42.440657 32845 aggregator.go:164] waiting for initial CRD sync...
I0530 12:44:42.440951 32845 apf_controller.go:374] Starting API Priority and Fairness config controller
I0530 12:44:42.441406 32845 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
I0530 12:44:42.441794 32845 customresource_discovery_controller.go:289] Starting DiscoveryController
I0530 12:44:42.442975 32845 handler_discovery.go:412] Starting ResourceDiscoveryManager
I0530 12:44:42.443197 32845 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0530 12:44:42.443705 32845 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0530 12:44:42.443760 32845 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
I0530 12:44:42.443843 32845 system_namespaces_controller.go:67] Starting system namespaces controller
I0530 12:44:42.444380 32845 gc_controller.go:78] Starting apiserver lease garbage collector
I0530 12:44:42.444537 32845 controller.go:78] Starting OpenAPI AggregationController
I0530 12:44:42.444632 32845 controller.go:80] Starting OpenAPI V3 AggregationController
I0530 12:44:42.444887 32845 controller.go:134] Starting OpenAPI controller
I0530 12:44:42.444997 32845 controller.go:85] Starting OpenAPI V3 controller
I0530 12:44:42.445051 32845 naming_controller.go:291] Starting NamingConditionController
I0530 12:44:42.445119 32845 establishing_controller.go:76] Starting EstablishingController
I0530 12:44:42.445167 32845 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0530 12:44:42.445210 32845 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0530 12:44:42.445252 32845 crd_finalizer.go:266] Starting CRDFinalizer
I0530 12:44:42.439954 32845 gc_controller.go:78] Starting apiserver lease garbage collector
I0530 12:44:42.446318 32845 crdregistration_controller.go:111] Starting crd-autoregister controller
I0530 12:44:42.446542 32845 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
I0530 12:44:42.451821 32845 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0530 12:44:42.452271 32845 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0530 12:44:42.725676 32845 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I0530 12:44:42.742502 32845 handler.go:275] Adding GroupVersion longhorn.io v1beta1 to ResourceManager
I0530 12:44:42.743175 32845 handler.go:275] Adding GroupVersion longhorn.io v1beta2 to ResourceManager
I0530 12:44:42.743577 32845 handler.go:275] Adding GroupVersion cert-manager.io v1 to ResourceManager
I0530 12:44:42.743759 32845 handler.go:275] Adding GroupVersion acme.cert-manager.io v1 to ResourceManager
I0530 12:44:42.746364 32845 shared_informer.go:318] Caches are synced for configmaps
I0530 12:44:42.746541 32845 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0530 12:44:42.755631 32845 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I0530 12:44:42.758455 32845 shared_informer.go:318] Caches are synced for crd-autoregister
I0530 12:44:42.761053 32845 cache.go:39] Caches are synced for AvailableConditionController controller
I0530 12:44:42.763282 32845 apf_controller.go:379] Running API Priority and Fairness config worker
I0530 12:44:42.763402 32845 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0530 12:44:42.766478 32845 aggregator.go:166] initial CRD sync complete...
I0530 12:44:42.766615 32845 autoregister_controller.go:141] Starting autoregister controller
I0530 12:44:42.766694 32845 cache.go:32] Waiting for caches to sync for autoregister controller
I0530 12:44:42.766729 32845 cache.go:39] Caches are synced for autoregister controller
I0530 12:44:42.770593 32845 handler.go:275] Adding GroupVersion traefik.io v1alpha1 to ResourceManager
I0530 12:44:42.775661 32845 handler.go:275] Adding GroupVersion k3s.cattle.io v1 to ResourceManager
I0530 12:44:42.776024 32845 handler.go:275] Adding GroupVersion traefik.containo.us v1alpha1 to ResourceManager
I0530 12:44:42.779937 32845 handler.go:275] Adding GroupVersion helm.cattle.io v1 to ResourceManager
I0530 12:44:42.793543 32845 handler.go:275] Adding GroupVersion postgresql.cnpg.io v1 to ResourceManager
W0530 12:44:42.839043 32845 handler_proxy.go:93] no RequestInfo found in the context
E0530 12:44:42.839407 32845 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
E0530 12:44:42.846067 32845 handler_proxy.go:137] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
I0530 12:44:42.846316 32845 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
I0530 12:44:42.875831 32845 shared_informer.go:318] Caches are synced for node_authorizer
E0530 12:44:43.043147 32845 controller.go:102] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
I0530 12:44:43.074684 32845 trace.go:236] Trace[1167932533]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:df4c69c0-f88a-4545-8ae9-0cac96ae1eed,client:127.0.0.1,protocol:HTTP/2.0,resource:secrets,scope:cluster,url:/api/v1/secrets,user-agent:k3s-arm64/v1.28.10+k3s1 (linux/arm64) kubernetes/a4c5612,verb:LIST (30-May-2024 12:44:42.525) (total time: 549ms):
Trace[1167932533]: ---"Writing http response done" count:107 549ms (12:44:43.074)
Trace[1167932533]: [549.543487ms] [549.543487ms] END
INFO[0005] certificate CN=system:node:harryzcy-3,O=system:nodes signed by CN=k3s-client-ca@1670897387: notBefore=2022-12-13 02:09:47 +0000 UTC notAfter=2025-05-30 16:44:43 +0000 UTC
I0530 12:44:43.560757 32845 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0530 12:44:43.816949 32845 handler_proxy.go:93] no RequestInfo found in the context
E0530 12:44:43.817553 32845 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0530 12:44:43.818112 32845 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0530 12:44:43.935072 32845 handler_proxy.go:93] no RequestInfo found in the context
E0530 12:44:43.935258 32845 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0530 12:44:43.935301 32845 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
INFO[0006] Module overlay was already loaded
INFO[0006] Module nf_conntrack was already loaded
INFO[0006] Module br_netfilter was already loaded
INFO[0006] Module iptable_nat was already loaded
INFO[0006] Module iptable_filter was already loaded
W0530 12:44:44.155143 32845 sysinfo.go:203] Nodes topology is not available, providing CPU topology
INFO[0006] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[0006] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[0007] Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory"
INFO[0008] Waiting for containerd startup: rpc error: code = Unknown desc = server is not initialized yet
INFO[0009] containerd is now running
INFO[0009] Connecting to proxy url="wss://127.0.0.1:6443/v1-k3s/connect"
INFO[0009] Creating k3s-cert-monitor event broadcaster
INFO[0009] Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=harryzcy-3 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-ip=192.168.1.13,2604:2b40:2190:35:da3a:ddff:fe0c:d78f --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key
INFO[0009] Handling backend connection request [harryzcy-3]
INFO[0009] Remotedialer connected to proxy url="wss://127.0.0.1:6443/v1-k3s/connect"
Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
I0530 12:44:47.223636 32845 server.go:202] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
I0530 12:44:47.231055 32845 server.go:462] "Kubelet version" kubeletVersion="v1.28.10+k3s1"
I0530 12:44:47.231152 32845 server.go:464] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0530 12:44:47.235640 32845 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
INFO[0009] Annotations and labels have already set on node: harryzcy-3
INFO[0009] Starting flannel with backend vxlan
INFO[0009] Flannel found PodCIDR assigned for node harryzcy-3
W0530 12:44:47.266220 32845 sysinfo.go:203] Nodes topology is not available, providing CPU topology
W0530 12:44:47.269328 32845 machine.go:65] Cannot read vendor id correctly, set empty.
I0530 12:44:47.272209 32845 server.go:720] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
I0530 12:44:47.273600 32845 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
INFO[0009] The interface eth0 with ipv4 address 192.168.1.13 will be used by flannel
I0530 12:44:47.274555 32845 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"/k3s","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
I0530 12:44:47.274790 32845 topology_manager.go:138] "Creating topology manager with none policy"
I0530 12:44:47.274944 32845 container_manager_linux.go:301] "Creating device plugin manager"
I0530 12:44:47.275458 32845 state_mem.go:36] "Initialized new in-memory state store"
I0530 12:44:47.276115 32845 kubelet.go:393] "Attempting to sync node with API server"
I0530 12:44:47.276229 32845 kubelet.go:298] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0530 12:44:47.276372 32845 kubelet.go:309] "Adding apiserver pod source"
I0530 12:44:47.276455 32845 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I0530 12:44:47.280703 32845 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.15-k3s1" apiVersion="v1"
I0530 12:44:47.283196 32845 server.go:1227] "Started kubelet"
I0530 12:44:47.286879 32845 kube.go:139] Waiting 10m0s for node controller to sync
I0530 12:44:47.287474 32845 kube.go:461] Starting kube subnet manager
I0530 12:44:47.292654 32845 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
I0530 12:44:47.296946 32845 server.go:462] "Adding debug handlers to kubelet server"
E0530 12:44:47.298597 32845 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E0530 12:44:47.298726 32845 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I0530 12:44:47.300913 32845 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
I0530 12:44:47.302147 32845 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
I0530 12:44:47.302151 32845 scope.go:117] "RemoveContainer" containerID="e94437ff79c818e67dd5c438cfd10011aae61fc6a6a3f68646f5ff694825955f"
I0530 12:44:47.308336 32845 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0530 12:44:47.327645 32845 volume_manager.go:291] "Starting Kubelet Volume Manager"
I0530 12:44:47.329071 32845 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
I0530 12:44:47.345310 32845 reconciler_new.go:29] "Reconciler: start to sync state"
I0530 12:44:47.358472 32845 kube.go:482] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.2.0/24]
I0530 12:44:47.358672 32845 kube.go:482] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.1.0/24]
I0530 12:44:47.358733 32845 kube.go:482] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.0.0/24]
I0530 12:44:47.388153 32845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
I0530 12:44:47.396270 32845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
I0530 12:44:47.396641 32845 status_manager.go:217] "Starting to sync pod status with apiserver"
I0530 12:44:47.397054 32845 kubelet.go:2303] "Starting kubelet main sync loop"
E0530 12:44:47.397686 32845 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
E0530 12:44:47.438734 32845 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
I0530 12:44:47.448556 32845 kubelet_node_status.go:70] "Attempting to register node" node="harryzcy-3"
E0530 12:44:47.490873 32845 fieldmanager.go:155] "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (/harryzcy-3; /v1, Kind=Node) to smd typed: .status.addresses: duplicate entries for key [type=\"InternalIP\"]" versionKind="/, Kind=" namespace="" name="harryzcy-3"
E0530 12:44:47.499049 32845 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
I0530 12:44:47.552429 32845 kubelet_node_status.go:108] "Node was previously registered" node="harryzcy-3"
I0530 12:44:47.552826 32845 kubelet_node_status.go:73] "Successfully registered node" node="harryzcy-3"
I0530 12:44:47.559848 32845 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
I0530 12:44:47.563126 32845 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
I0530 12:44:47.568981 32845 setters.go:552] "Node became not ready" node="harryzcy-3" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-30T16:44:47Z","lastTransitionTime":"2024-05-30T16:44:47Z","reason":"KubeletNotReady","message":"container runtime status check may not have completed yet"}
INFO[0009] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error
INFO[0009] Starting network policy controller version v2.1.0, built on 2024-05-22T21:24:30Z, go1.21.9
I0530 12:44:47.633576 32845 network_policy_controller.go:164] Starting network policy controller
E0530 12:44:47.703559 32845 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
I0530 12:44:47.883110 32845 network_policy_controller.go:176] Starting network policy controller full sync goroutine
E0530 12:44:48.104116 32845 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
INFO[0010] Stopped tunnel to 127.0.0.1:6443
INFO[0010] Connecting to proxy url="wss://192.168.1.13:6443/v1-k3s/connect"
INFO[0010] Proxy done err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
INFO[0010] error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF
INFO[0010] Handling backend connection request [harryzcy-3]
INFO[0010] Remotedialer connected to proxy url="wss://192.168.1.13:6443/v1-k3s/connect"
I0530 12:44:48.278542 32845 apiserver.go:52] "Watching apiserver"
I0530 12:44:48.289665 32845 kube.go:146] Node controller sync successful
I0530 12:44:48.291309 32845 vxlan.go:141] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0530 12:44:48.301851 32845 kube.go:621] List of node(harryzcy-3) annotations: map[string]string{"alpha.kubernetes.io/provided-node-ip":"192.168.1.13,2604:2b40:2190:35:da3a:ddff:fe0c:d78f", "csi.volume.kubernetes.io/nodeid":"{\"driver.longhorn.io\":\"harryzcy-3\"}", "flannel.alpha.coreos.com/backend-data":"{\"VNI\":1,\"VtepMAC\":\"c2:0a:ed:23:7c:cf\"}", "flannel.alpha.coreos.com/backend-type":"vxlan", "flannel.alpha.coreos.com/backend-v6-data":"{\"VNI\":1,\"VtepMAC\":\"42:bc:dc:91:62:d3\"}", "flannel.alpha.coreos.com/kube-subnet-manager":"true", "flannel.alpha.coreos.com/public-ip":"192.168.1.13", "flannel.alpha.coreos.com/public-ipv6":"2604:2b40:2190:35:da3a:ddff:fe0c:d78f", "k3s.io/hostname":"harryzcy-3", "k3s.io/internal-ip":"192.168.1.13,2604:2b40:2190:35:da3a:ddff:fe0c:d78f", "k3s.io/node-args":"[\"server\"]", "k3s.io/node-config-hash":"4YDOORYHLBFJUMUJNXFABGGP57RMNM2OA3G2T3G4K2VKPHBLQFDA====", "k3s.io/node-env":"{\"K3S_DATA_DIR\":\"/var/lib/rancher/k3s/data/aeafc1b8825e127c8b16582996273fb84d4af7d2fc8a6f633f05eaf5a2074a33\"}", "node.alpha.kubernetes.io/ttl":"0", "volumes.kubernetes.io/controller-managed-attach-detach":"true"}
I0530 12:44:48.302004 32845 vxlan.go:155] Setup flannel.1 mac address to c2:0a:ed:23:7c:cf when flannel restarts
I0530 12:44:48.364017 32845 iptables.go:290] generated 3 rules
INFO[0010] Wrote flannel subnet file to /run/flannel/subnet.env
INFO[0010] Running flannel backend.
I0530 12:44:48.364322 32845 vxlan_network.go:65] watching for new subnet leases
I0530 12:44:48.364464 32845 subnet.go:160] Batch elem [0] is { lease.Event{Type:0, Lease:lease.Lease{EnableIPv4:true, EnableIPv6:false, Subnet:ip.IP4Net{IP:0xa2a0200, PrefixLen:0x18}, IPv6Subnet:ip.IP6Net{IP:(*ip.IP6)(nil), PrefixLen:0x0}, Attrs:lease.LeaseAttrs{PublicIP:0xc0a8010b, PublicIPv6:(*ip.IP6)(nil), BackendType:"vxlan", BackendData:json.RawMessage{0x7b, 0x22, 0x56, 0x4e, 0x49, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x56, 0x74, 0x65, 0x70, 0x4d, 0x41, 0x43, 0x22, 0x3a, 0x22, 0x36, 0x65, 0x3a, 0x31, 0x36, 0x3a, 0x62, 0x62, 0x3a, 0x65, 0x63, 0x3a, 0x63, 0x39, 0x3a, 0x62, 0x39, 0x22, 0x7d}, BackendV6Data:json.RawMessage(nil)}, Expiration:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Asof:0}} }
I0530 12:44:48.364648 32845 subnet.go:160] Batch elem [0] is { lease.Event{Type:0, Lease:lease.Lease{EnableIPv4:true, EnableIPv6:false, Subnet:ip.IP4Net{IP:0xa2a0100, PrefixLen:0x18}, IPv6Subnet:ip.IP6Net{IP:(*ip.IP6)(nil), PrefixLen:0x0}, Attrs:lease.LeaseAttrs{PublicIP:0xc0a8010c, PublicIPv6:(*ip.IP6)(nil), BackendType:"vxlan", BackendData:json.RawMessage{0x7b, 0x22, 0x56, 0x4e, 0x49, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x56, 0x74, 0x65, 0x70, 0x4d, 0x41, 0x43, 0x22, 0x3a, 0x22, 0x66, 0x36, 0x3a, 0x61, 0x35, 0x3a, 0x32, 0x64, 0x3a, 0x64, 0x38, 0x3a, 0x63, 0x64, 0x3a, 0x31, 0x61, 0x22, 0x7d}, BackendV6Data:json.RawMessage(nil)}, Expiration:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Asof:0}} }
I0530 12:44:48.370129 32845 iptables.go:290] generated 7 rules
I0530 12:44:48.418887 32845 iptables.go:283] bootstrap done
I0530 12:44:48.449298 32845 cpu_manager.go:214] "Starting CPU manager" policy="none"
I0530 12:44:48.449388 32845 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
I0530 12:44:48.449457 32845 state_mem.go:36] "Initialized new in-memory state store"
I0530 12:44:48.450086 32845 state_mem.go:88] "Updated default CPUSet" cpuSet=""
I0530 12:44:48.450207 32845 state_mem.go:96] "Updated CPUSet assignments" assignments={}
I0530 12:44:48.450244 32845 policy_none.go:49] "None policy: Start"
I0530 12:44:48.456750 32845 memory_manager.go:169] "Starting memorymanager" policy="None"
I0530 12:44:48.456878 32845 state_mem.go:35] "Initializing new in-memory state store"
I0530 12:44:48.457590 32845 state_mem.go:75] "Updated machine memory state"
I0530 12:44:48.462483 32845 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
I0530 12:44:48.465880 32845 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
I0530 12:44:48.489065 32845 iptables.go:283] bootstrap done
I0530 12:44:48.904938 32845 topology_manager.go:215] "Topology Admit Handler" podUID="92bc3865-5f6f-4144-9b78-918a9ad1290c" podNamespace="kube-system" podName="coredns-59b4f5bbd5-cznj7"
I0530 12:44:48.905610 32845 topology_manager.go:215] "Topology Admit Handler" podUID="5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df" podNamespace="cert-manager" podName="cert-manager-webhook-649b4d699f-bgkjm"
I0530 12:44:48.909110 32845 topology_manager.go:215] "Topology Admit Handler" podUID="a3d84a12-441b-4f14-b54b-bebd7bc08a69" podNamespace="cert-manager" podName="cert-manager-7bfbbd5f46-p9slz"
I0530 12:44:48.909855 32845 topology_manager.go:215] "Topology Admit Handler" podUID="7ffa81bc-7d92-45fc-a2dc-86faa0298ed8" podNamespace="default" podName="cylink-monitor-6d47dc7c7f-jm28d"
I0530 12:44:48.910293 32845 topology_manager.go:215] "Topology Admit Handler" podUID="e287e774-829b-4788-b958-8829168f0364" podNamespace="default" podName="instant-push-replica-65c76ccd7c-h864k"
I0530 12:44:48.910648 32845 topology_manager.go:215] "Topology Admit Handler" podUID="8a8a4bf3-2207-452c-ae7e-1f9c18003fee" podNamespace="default" podName="sally-5c49c7b77c-mg556"
I0530 12:44:48.910997 32845 topology_manager.go:215] "Topology Admit Handler" podUID="a9ce52b3-4bd5-4e45-83cd-17f4d6b42b54" podNamespace="default" podName="authelia-6b9d9c7877-svgx4"
I0530 12:44:48.911317 32845 topology_manager.go:215] "Topology Admit Handler" podUID="f3d4def7-b56a-4231-bf80-472c49ff730c" podNamespace="default" podName="portable-vault-pg-c0-1"
I0530 12:44:48.911669 32845 topology_manager.go:215] "Topology Admit Handler" podUID="0869bc86-9fb8-4741-9a1e-08e7dd47015a" podNamespace="kube-system" podName="svclb-traefik-c9a9aff2-xp4t6"
I0530 12:44:48.912195 32845 topology_manager.go:215] "Topology Admit Handler" podUID="fe303a02-b17f-48ef-bfd7-28d0fbfd6031" podNamespace="default" podName="forex-pg-c0-2"
I0530 12:44:48.912799 32845 topology_manager.go:215] "Topology Admit Handler" podUID="42e5ba9f-d3e8-4b5e-bee9-3de30f9dc6c6" podNamespace="kube-system" podName="traefik-d7dbd656f-h954k"
I0530 12:44:48.913334 32845 topology_manager.go:215] "Topology Admit Handler" podUID="bf077e0d-307f-4ef4-94df-2fc86192c42f" podNamespace="longhorn-system" podName="instance-manager-0906d2669d4035c2b89d402644d3bd9b"
I0530 12:44:48.913888 32845 topology_manager.go:215] "Topology Admit Handler" podUID="9416c8ec-8ba0-4980-af88-826ae8ad8539" podNamespace="longhorn-system" podName="longhorn-manager-c8zrc"
I0530 12:44:48.914427 32845 topology_manager.go:215] "Topology Admit Handler" podUID="f7ac7748-03b2-4b97-9679-067f7c99290d" podNamespace="longhorn-system" podName="engine-image-ei-5cefaf2b-jz2zm"
I0530 12:44:48.914998 32845 topology_manager.go:215] "Topology Admit Handler" podUID="fb26427c-2761-4f44-bc60-3b6c59bd8fa7" podNamespace="longhorn-system" podName="longhorn-ui-6d89c47858-djwxb"
I0530 12:44:48.916122 32845 topology_manager.go:215] "Topology Admit Handler" podUID="628e3f2e-676a-43f4-8cc3-38835eb9234a" podNamespace="longhorn-system" podName="instance-manager-949723cd7aea7a9ffd8c930440ec6c90"
I0530 12:44:48.917001 32845 topology_manager.go:215] "Topology Admit Handler" podUID="b0049f06-952d-492b-8474-c83f1be671a3" podNamespace="longhorn-system" podName="csi-attacher-5c4bfdcf59-llv6p"
I0530 12:44:48.917836 32845 topology_manager.go:215] "Topology Admit Handler" podUID="f23e23a4-6c6f-4b2a-9293-60513a318002" podNamespace="longhorn-system" podName="csi-provisioner-667796df57-gh85f"
I0530 12:44:48.918634 32845 topology_manager.go:215] "Topology Admit Handler" podUID="2e6d4d1b-f46a-4d23-be82-9784b9e34a37" podNamespace="longhorn-system" podName="csi-resizer-694f8f5f64-8nbbh"
I0530 12:44:48.919514 32845 topology_manager.go:215] "Topology Admit Handler" podUID="b79d7d86-2a71-496a-9002-242328ec6c13" podNamespace="longhorn-system" podName="csi-snapshotter-959b69d4b-k6rtg"
I0530 12:44:48.920460 32845 topology_manager.go:215] "Topology Admit Handler" podUID="887b3014-f33a-4176-bc1c-5107e9d2ab8f" podNamespace="longhorn-system" podName="longhorn-csi-plugin-xsc9d"
I0530 12:44:48.921473 32845 topology_manager.go:215] "Topology Admit Handler" podUID="116fd16b-b81c-4750-a0be-682ed21c14f6" podNamespace="default" podName="gitea-0"
I0530 12:44:48.922305 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35022b9c2a2c28aed4183a516603d2e169f8ed36dcb118636b17d01d155abd97"
I0530 12:44:48.924179 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9896bf1c290fed34cb90a7a30604b0f78c0cb58a89d18160d596f1b8bf68cda8"
I0530 12:44:48.924911 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8836cdbe0d5605432f9dbf0d7990b93568621c0a2130716361736b27658d8a2"
I0530 12:44:48.927818 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ba2dcc969f41d4b7c760727c8f7c6797680aebbc3031c9587afdb79f6f3747"
I0530 12:44:48.928400 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b86700fec3f2d30f0c97c74b1929db2e3074d573ecdeb1cb8298a60e0f9266a"
I0530 12:44:48.928511 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e94437ff79c818e67dd5c438cfd10011aae61fc6a6a3f68646f5ff694825955f"
I0530 12:44:48.928559 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d624101ee06d02fb6e5b45ac5538352d68693717caa53b44bc64a5c1896b62d"
I0530 12:44:48.928645 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f515de570d259fb0e767794b6db375b5b34e9e65b50e5f7a2273baecf1fcce"
I0530 12:44:48.928705 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f805ce0f90ba0f0e8d048ff663ebf30c30b12245e26c839e89f2845e6e487d06"
I0530 12:44:48.928765 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee77805721222314967d18745c8b94754f6df52d4e3f1eca5a2c105c909a9f71"
I0530 12:44:48.928859 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e2d4eb4060d89d84cfb58cc3eaeb81be05f17addc954db4876ce5d30040dadf"
I0530 12:44:48.928930 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="745547dc1bb71cf034565305d9e01f1f7283db84d223c3a8a8463185d8c9e667"
I0530 12:44:48.929026 32845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f76fb846c01bdfc4568ab8b978ff7f88d0436379b0819ac927ce1db9079395"
I0530 12:44:48.962497 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b79d7d86-2a71-496a-9002-242328ec6c13-socket-dir\") pod \"csi-snapshotter-959b69d4b-k6rtg\" (UID: \"b79d7d86-2a71-496a-9002-242328ec6c13\") " pod="longhorn-system/csi-snapshotter-959b69d4b-k6rtg"
I0530 12:44:49.024502 32845 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
I0530 12:44:49.065051 32845 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\") pod \"116fd16b-b81c-4750-a0be-682ed21c14f6\" (UID: \"116fd16b-b81c-4750-a0be-682ed21c14f6\") "
I0530 12:44:49.065490 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/887b3014-f33a-4176-bc1c-5107e9d2ab8f-socket-dir\") pod \"longhorn-csi-plugin-xsc9d\" (UID: \"887b3014-f33a-4176-bc1c-5107e9d2ab8f\") " pod="longhorn-system/longhorn-csi-plugin-xsc9d"
I0530 12:44:49.066803 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/887b3014-f33a-4176-bc1c-5107e9d2ab8f-host\") pod \"longhorn-csi-plugin-xsc9d\" (UID: \"887b3014-f33a-4176-bc1c-5107e9d2ab8f\") " pod="longhorn-system/longhorn-csi-plugin-xsc9d"
E0530 12:44:49.067785 32845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c podName:116fd16b-b81c-4750-a0be-682ed21c14f6 nodeName:}" failed. No retries permitted until 2024-05-30 12:44:49.567688284 -0400 EDT m=+11.861061235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c") pod "116fd16b-b81c-4750-a0be-682ed21c14f6" (UID: "116fd16b-b81c-4750-a0be-682ed21c14f6") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name driver.longhorn.io not found in the list of registered CSI drivers
I0530 12:44:49.069859 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/887b3014-f33a-4176-bc1c-5107e9d2ab8f-lib-modules\") pod \"longhorn-csi-plugin-xsc9d\" (UID: \"887b3014-f33a-4176-bc1c-5107e9d2ab8f\") " pod="longhorn-system/longhorn-csi-plugin-xsc9d"
I0530 12:44:49.070855 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f23e23a4-6c6f-4b2a-9293-60513a318002-socket-dir\") pod \"csi-provisioner-667796df57-gh85f\" (UID: \"f23e23a4-6c6f-4b2a-9293-60513a318002\") " pod="longhorn-system/csi-provisioner-667796df57-gh85f"
I0530 12:44:49.073262 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f7ac7748-03b2-4b97-9679-067f7c99290d-data\") pod \"engine-image-ei-5cefaf2b-jz2zm\" (UID: \"f7ac7748-03b2-4b97-9679-067f7c99290d\") " pod="longhorn-system/engine-image-ei-5cefaf2b-jz2zm"
I0530 12:44:49.074325 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b0049f06-952d-492b-8474-c83f1be671a3-socket-dir\") pod \"csi-attacher-5c4bfdcf59-llv6p\" (UID: \"b0049f06-952d-492b-8474-c83f1be671a3\") " pod="longhorn-system/csi-attacher-5c4bfdcf59-llv6p"
I0530 12:44:49.076351 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/9416c8ec-8ba0-4980-af88-826ae8ad8539-proc\") pod \"longhorn-manager-c8zrc\" (UID: \"9416c8ec-8ba0-4980-af88-826ae8ad8539\") " pod="longhorn-system/longhorn-manager-c8zrc"
I0530 12:44:49.079301 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b01d8a2a-0637-41ee-aef8-e2b07351e4fe\" (UniqueName: \"kubernetes.io/host-path/f3d4def7-b56a-4231-bf80-472c49ff730c-pvc-b01d8a2a-0637-41ee-aef8-e2b07351e4fe\") pod \"portable-vault-pg-c0-1\" (UID: \"f3d4def7-b56a-4231-bf80-472c49ff730c\") " pod="default/portable-vault-pg-c0-1"
I0530 12:44:49.079640 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-dev\" (UniqueName: \"kubernetes.io/host-path/887b3014-f33a-4176-bc1c-5107e9d2ab8f-host-dev\") pod \"longhorn-csi-plugin-xsc9d\" (UID: \"887b3014-f33a-4176-bc1c-5107e9d2ab8f\") " pod="longhorn-system/longhorn-csi-plugin-xsc9d"
I0530 12:44:49.080224 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"longhorn\" (UniqueName: \"kubernetes.io/host-path/9416c8ec-8ba0-4980-af88-826ae8ad8539-longhorn\") pod \"longhorn-manager-c8zrc\" (UID: \"9416c8ec-8ba0-4980-af88-826ae8ad8539\") " pod="longhorn-system/longhorn-manager-c8zrc"
I0530 12:44:49.080694 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9416c8ec-8ba0-4980-af88-826ae8ad8539-dev\") pod \"longhorn-manager-c8zrc\" (UID: \"9416c8ec-8ba0-4980-af88-826ae8ad8539\") " pod="longhorn-system/longhorn-manager-c8zrc"
I0530 12:44:49.080892 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-csi-dir\" (UniqueName: \"kubernetes.io/host-path/887b3014-f33a-4176-bc1c-5107e9d2ab8f-kubernetes-csi-dir\") pod \"longhorn-csi-plugin-xsc9d\" (UID: \"887b3014-f33a-4176-bc1c-5107e9d2ab8f\") " pod="longhorn-system/longhorn-csi-plugin-xsc9d"
I0530 12:44:49.081772 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-sys\" (UniqueName: \"kubernetes.io/host-path/887b3014-f33a-4176-bc1c-5107e9d2ab8f-host-sys\") pod \"longhorn-csi-plugin-xsc9d\" (UID: \"887b3014-f33a-4176-bc1c-5107e9d2ab8f\") " pod="longhorn-system/longhorn-csi-plugin-xsc9d"
I0530 12:44:49.082138 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e6d4d1b-f46a-4d23-be82-9784b9e34a37-socket-dir\") pod \"csi-resizer-694f8f5f64-8nbbh\" (UID: \"2e6d4d1b-f46a-4d23-be82-9784b9e34a37\") " pod="longhorn-system/csi-resizer-694f8f5f64-8nbbh"
I0530 12:44:49.082313 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-603474eb-3862-46a1-a099-773f47fe904b\" (UniqueName: \"kubernetes.io/host-path/fe303a02-b17f-48ef-bfd7-28d0fbfd6031-pvc-603474eb-3862-46a1-a099-773f47fe904b\") pod \"forex-pg-c0-2\" (UID: \"fe303a02-b17f-48ef-bfd7-28d0fbfd6031\") " pod="default/forex-pg-c0-2"
I0530 12:44:49.082472 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/887b3014-f33a-4176-bc1c-5107e9d2ab8f-registration-dir\") pod \"longhorn-csi-plugin-xsc9d\" (UID: \"887b3014-f33a-4176-bc1c-5107e9d2ab8f\") " pod="longhorn-system/longhorn-csi-plugin-xsc9d"
I0530 12:44:49.082862 32845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pods-mount-dir\" (UniqueName: \"kubernetes.io/host-path/887b3014-f33a-4176-bc1c-5107e9d2ab8f-pods-mount-dir\") pod \"longhorn-csi-plugin-xsc9d\" (UID: \"887b3014-f33a-4176-bc1c-5107e9d2ab8f\") " pod="longhorn-system/longhorn-csi-plugin-xsc9d"
I0530 12:44:49.208914 32845 scope.go:117] "RemoveContainer" containerID="44640fbc1e778920aedbc2efcd5ef536a5771be7380f7b4ba16a86cd12f94608"
I0530 12:44:49.231559 32845 scope.go:117] "RemoveContainer" containerID="480af378d82daf3e4af7bbb3508fc0bbfceb3b8513328c35621141923dadd202"
I0530 12:44:49.233204 32845 scope.go:117] "RemoveContainer" containerID="252d48df4f4e2f449a6de62dcce52e5d7101569b8deb7ab1ac5fb1534ee66e9d"
I0530 12:44:49.237725 32845 scope.go:117] "RemoveContainer" containerID="08c5de51da1e874c248cc645358ad475e51076ff986d4b2c725da2a095b29769"
I0530 12:44:49.239287 32845 scope.go:117] "RemoveContainer" containerID="74e14af5da94d9184ed79437439888fa27d1eff5534f6bb74886c4edab2d0d68"
I0530 12:44:49.240162 32845 scope.go:117] "RemoveContainer" containerID="2d6a6c86f2ec446eb0fcd5bef8eb95c534fdfa73f886031b86d321c0563a950d"
I0530 12:44:49.242508 32845 scope.go:117] "RemoveContainer" containerID="86720979cfe77517b3e41ada274dc6a80dcb92e9adac83bd5ffa61986ad37a12"
I0530 12:44:49.244872 32845 scope.go:117] "RemoveContainer" containerID="3461ccfe9e4876c3b5f50bf4636bfaebfdf637289ae88104af76e45415f92e3e"
I0530 12:44:49.247735 32845 scope.go:117] "RemoveContainer" containerID="91ec76b1e4118a68b6038cbc0356ee1f41582c37f0cbd946f17d399a33f81d5f"
I0530 12:44:49.250706 32845 scope.go:117] "RemoveContainer" containerID="01a1e2d94ea2fbe55d2bb3c2e60273246c6fc44ad6ce4c4962ef0bae9a3b778d"
I0530 12:44:49.260845 32845 scope.go:117] "RemoveContainer" containerID="0f53a3e18cbfa49e0c3058376cbc637d6aef3cadbef2eb85bbdc970c10eff7dd"
I0530 12:44:49.588803 32845 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\") pod \"116fd16b-b81c-4750-a0be-682ed21c14f6\" (UID: \"116fd16b-b81c-4750-a0be-682ed21c14f6\") "
E0530 12:44:49.589154 32845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c podName:116fd16b-b81c-4750-a0be-682ed21c14f6 nodeName:}" failed. No retries permitted until 2024-05-30 12:44:50.589051758 -0400 EDT m=+12.882424672 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c") pod "116fd16b-b81c-4750-a0be-682ed21c14f6" (UID: "116fd16b-b81c-4750-a0be-682ed21c14f6") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name driver.longhorn.io not found in the list of registered CSI drivers
E0530 12:44:50.417554 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/pod5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df/b479f6bb9691e33cec6b4cd2370af439e0fc35f64a438da6f2240efb544d0e61\" instead: unknown" containerID="b479f6bb9691e33cec6b4cd2370af439e0fc35f64a438da6f2240efb544d0e61"
E0530 12:44:50.417935 32845 kuberuntime_manager.go:1261] container &Container{Name:cert-manager-webhook,Image:quay.io/jetstack/cert-manager-webhook:v1.13.2,Command:[],Args:[--v=2 --secure-port=10250 --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-dns-names=cert-manager-webhook --dynamic-serving-dns-names=cert-manager-webhook.$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook.$(POD_NAMESPACE).svc],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xktmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 6080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-649b4d699f-bgkjm_cert-manager(5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/pod5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df/b479f6bb9691e33cec6b4cd2370af439e0fc35f64a438da6f2240efb544d0e61" instead: unknown
E0530 12:44:50.418129 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/pod5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df/b479f6bb9691e33cec6b4cd2370af439e0fc35f64a438da6f2240efb544d0e61\\\" instead: unknown\"" pod="cert-manager/cert-manager-webhook-649b4d699f-bgkjm" podUID="5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df"
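The StartContainer failures above, and the ones that follow, all share one fault: the CRI layer hands runc an absolute cgroupfs-style path such as /kubepods/besteffort/pod<uid>/<container-id>, while runc is running with the systemd cgroup manager and therefore expects the "slice:prefix:name" form. In other words, the kubelet and the container runtime disagree on the cgroup driver; aligning them (both cgroupfs or both systemd) is the usual fix. A minimal Go sketch of the two shapes, assuming containerd's conventional "cri-containerd" prefix and kubelet-style dash escaping (illustrative only, not k3s source):

package main

import (
	"fmt"
	"strings"
)

// toSystemdCgroupsPath rewrites a cgroupfs pod path like
// "/kubepods/besteffort/pod<uid>/<container-id>" into the
// "slice:prefix:name" form that runc's systemd driver expects, e.g.
// "kubepods-besteffort-pod<uid>.slice:cri-containerd:<container-id>".
func toSystemdCgroupsPath(cgroupfsPath, runtimePrefix string) (string, error) {
	parts := strings.Split(strings.Trim(cgroupfsPath, "/"), "/")
	if len(parts) < 2 {
		return "", fmt.Errorf("unexpected cgroupfs path %q", cgroupfsPath)
	}
	containerID := parts[len(parts)-1]
	scope := parts[:len(parts)-1]
	// kubelet escapes dashes inside the pod UID so that "-" remains
	// unambiguous as systemd's hierarchy separator.
	scope[len(scope)-1] = strings.ReplaceAll(scope[len(scope)-1], "-", "_")
	slice := strings.Join(scope, "-") + ".slice"
	return fmt.Sprintf("%s:%s:%s", slice, runtimePrefix, containerID), nil
}

func main() {
	p, err := toSystemdCgroupsPath(
		"/kubepods/besteffort/pod5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df/b479f6bb9691e33cec6b4cd2370af439e0fc35f64a438da6f2240efb544d0e61",
		"cri-containerd")
	if err != nil {
		panic(err)
	}
	fmt.Println(p)
}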
E0530 12:44:50.446809 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/podb79d7d86-2a71-496a-9002-242328ec6c13/4e9863e932c90c4817d620511669405a9eccad5874941e6ff2e5e337efa87fc2\" instead: unknown" containerID="4e9863e932c90c4817d620511669405a9eccad5874941e6ff2e5e337efa87fc2"
E0530 12:44:50.447210 32845 kuberuntime_manager.go:1261] container &Container{Name:csi-snapshotter,Image:longhornio/csi-snapshotter:v6.3.2,Command:[],Args:[--v=2 --csi-address=$(ADDRESS) --timeout=1m50s --leader-election --leader-election-namespace=$(POD_NAMESPACE)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4l89b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshotter-959b69d4b-k6rtg_longhorn-system(b79d7d86-2a71-496a-9002-242328ec6c13): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/podb79d7d86-2a71-496a-9002-242328ec6c13/4e9863e932c90c4817d620511669405a9eccad5874941e6ff2e5e337efa87fc2" instead: unknown
E0530 12:44:50.447501 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshotter\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/podb79d7d86-2a71-496a-9002-242328ec6c13/4e9863e932c90c4817d620511669405a9eccad5874941e6ff2e5e337efa87fc2\\\" instead: unknown\"" pod="longhorn-system/csi-snapshotter-959b69d4b-k6rtg" podUID="b79d7d86-2a71-496a-9002-242328ec6c13"
I0530 12:44:50.604141 32845 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\") pod \"116fd16b-b81c-4750-a0be-682ed21c14f6\" (UID: \"116fd16b-b81c-4750-a0be-682ed21c14f6\") "
E0530 12:44:50.604550 32845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c podName:116fd16b-b81c-4750-a0be-682ed21c14f6 nodeName:}" failed. No retries permitted until 2024-05-30 12:44:52.604427915 -0400 EDT m=+14.897800829 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c") pod "116fd16b-b81c-4750-a0be-682ed21c14f6" (UID: "116fd16b-b81c-4750-a0be-682ed21c14f6") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name driver.longhorn.io not found in the list of registered CSI drivers
I0530 12:44:50.908239 32845 scope.go:117] "RemoveContainer" containerID="252d48df4f4e2f449a6de62dcce52e5d7101569b8deb7ab1ac5fb1534ee66e9d"
I0530 12:44:50.909964 32845 scope.go:117] "RemoveContainer" containerID="4e9863e932c90c4817d620511669405a9eccad5874941e6ff2e5e337efa87fc2"
E0530 12:44:50.911053 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 10s restarting failed container=csi-snapshotter pod=csi-snapshotter-959b69d4b-k6rtg_longhorn-system(b79d7d86-2a71-496a-9002-242328ec6c13)\"" pod="longhorn-system/csi-snapshotter-959b69d4b-k6rtg" podUID="b79d7d86-2a71-496a-9002-242328ec6c13"
I0530 12:44:51.045636 32845 scope.go:117] "RemoveContainer" containerID="44640fbc1e778920aedbc2efcd5ef536a5771be7380f7b4ba16a86cd12f94608"
I0530 12:44:51.046044 32845 scope.go:117] "RemoveContainer" containerID="b479f6bb9691e33cec6b4cd2370af439e0fc35f64a438da6f2240efb544d0e61"
E0530 12:44:51.047831 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cert-manager-webhook pod=cert-manager-webhook-649b4d699f-bgkjm_cert-manager(5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df)\"" pod="cert-manager/cert-manager-webhook-649b4d699f-bgkjm" podUID="5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df"
E0530 12:44:51.210079 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/pode287e774-829b-4788-b958-8829168f0364/f7900459b1f907e5b6fe4e989e2cb1db3e970b6b9a3b43b741de0479e42bf0fd\" instead: unknown" containerID="f7900459b1f907e5b6fe4e989e2cb1db3e970b6b9a3b43b741de0479e42bf0fd"
E0530 12:44:51.211869 32845 kuberuntime_manager.go:1261] container &Container{Name:instant-push,Image:registry.zcy.dev/instant-push:2.6.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8087,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:INSTANCE_TYPE,Value:replica,ValueFrom:nil,},EnvVar{Name:MASTER_HOST,Value:instant-push.default.svc.cluster.local,ValueFrom:nil,},EnvVar{Name:AUTHELIA_DOMAIN,Value:https://auth.zcy.dev,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{50 -3} {<nil>} 50m DecimalSI},memory: {{16777216 0} {<nil>} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {<nil>} 50m DecimalSI},memory: {{16777216 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data-volume,ReadOnly:false,MountPath:/data,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9zkvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:nil,SecretRef:&SecretEnvSource{LocalObjectReference:LocalObjectReference{Name:instant-push-secret-g8gcf24c5h,},Optional:nil,},},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod instant-push-replica-65c76ccd7c-h864k_default(e287e774-829b-4788-b958-8829168f0364): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/pode287e774-829b-4788-b958-8829168f0364/f7900459b1f907e5b6fe4e989e2cb1db3e970b6b9a3b43b741de0479e42bf0fd" instead: unknown
E0530 12:44:51.212141 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"instant-push\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/pode287e774-829b-4788-b958-8829168f0364/f7900459b1f907e5b6fe4e989e2cb1db3e970b6b9a3b43b741de0479e42bf0fd\\\" instead: unknown\"" pod="default/instant-push-replica-65c76ccd7c-h864k" podUID="e287e774-829b-4788-b958-8829168f0364"
E0530 12:44:51.329405 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/podfb26427c-2761-4f44-bc60-3b6c59bd8fa7/b8082cfa48644d810fd7f3a7292a5d63c52ea4abc69673bbcd36f3838430620e\" instead: unknown" containerID="b8082cfa48644d810fd7f3a7292a5d63c52ea4abc69673bbcd36f3838430620e"
E0530 12:44:51.329702 32845 kuberuntime_manager.go:1261] container &Container{Name:longhorn-ui,Image:longhornio/longhorn-ui:v1.6.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:LONGHORN_MANAGER_IP,Value:http://longhorn-backend:9500,ValueFrom:nil,},EnvVar{Name:LONGHORN_UI_PORT,Value:8000,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:nginx-cache,ReadOnly:false,MountPath:/var/cache/nginx/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:nginx-config,ReadOnly:false,MountPath:/var/config/nginx/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vwnsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod longhorn-ui-6d89c47858-djwxb_longhorn-system(fb26427c-2761-4f44-bc60-3b6c59bd8fa7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/podfb26427c-2761-4f44-bc60-3b6c59bd8fa7/b8082cfa48644d810fd7f3a7292a5d63c52ea4abc69673bbcd36f3838430620e" instead: unknown
E0530 12:44:51.329883 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"longhorn-ui\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/podfb26427c-2761-4f44-bc60-3b6c59bd8fa7/b8082cfa48644d810fd7f3a7292a5d63c52ea4abc69673bbcd36f3838430620e\\\" instead: unknown\"" pod="longhorn-system/longhorn-ui-6d89c47858-djwxb" podUID="fb26427c-2761-4f44-bc60-3b6c59bd8fa7"
E0530 12:44:51.431063 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/podb0049f06-952d-492b-8474-c83f1be671a3/ad4bd79a8f54c4b67e4cc455ed0d026a13e1c416c12fd99385084b5a4e5d8618\" instead: unknown" containerID="ad4bd79a8f54c4b67e4cc455ed0d026a13e1c416c12fd99385084b5a4e5d8618"
E0530 12:44:51.431814 32845 kuberuntime_manager.go:1261] container &Container{Name:csi-attacher,Image:longhornio/csi-attacher:v4.4.2,Command:[],Args:[--v=2 --csi-address=$(ADDRESS) --timeout=1m50s --leader-election --leader-election-namespace=$(POD_NAMESPACE)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g4kmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-attacher-5c4bfdcf59-llv6p_longhorn-system(b0049f06-952d-492b-8474-c83f1be671a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/podb0049f06-952d-492b-8474-c83f1be671a3/ad4bd79a8f54c4b67e4cc455ed0d026a13e1c416c12fd99385084b5a4e5d8618" instead: unknown
E0530 12:44:51.432002 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-attacher\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/podb0049f06-952d-492b-8474-c83f1be671a3/ad4bd79a8f54c4b67e4cc455ed0d026a13e1c416c12fd99385084b5a4e5d8618\\\" instead: unknown\"" pod="longhorn-system/csi-attacher-5c4bfdcf59-llv6p" podUID="b0049f06-952d-492b-8474-c83f1be671a3"
E0530 12:44:51.454386 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/pod2e6d4d1b-f46a-4d23-be82-9784b9e34a37/3097da2a989316543db0b463137d71d0d371b175dd6a9df850f571d277bc8404\" instead: unknown" containerID="3097da2a989316543db0b463137d71d0d371b175dd6a9df850f571d277bc8404"
E0530 12:44:51.454681 32845 kuberuntime_manager.go:1261] container &Container{Name:csi-resizer,Image:longhornio/csi-resizer:v1.9.2,Command:[],Args:[--v=2 --csi-address=$(ADDRESS) --timeout=1m50s --leader-election --leader-election-namespace=$(POD_NAMESPACE) --leader-election-namespace=$(POD_NAMESPACE) --handle-volume-inuse-error=false],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fc5w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-resizer-694f8f5f64-8nbbh_longhorn-system(2e6d4d1b-f46a-4d23-be82-9784b9e34a37): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/pod2e6d4d1b-f46a-4d23-be82-9784b9e34a37/3097da2a989316543db0b463137d71d0d371b175dd6a9df850f571d277bc8404" instead: unknown
E0530 12:44:51.454859 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-resizer\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/pod2e6d4d1b-f46a-4d23-be82-9784b9e34a37/3097da2a989316543db0b463137d71d0d371b175dd6a9df850f571d277bc8404\\\" instead: unknown\"" pod="longhorn-system/csi-resizer-694f8f5f64-8nbbh" podUID="2e6d4d1b-f46a-4d23-be82-9784b9e34a37"
E0530 12:44:51.478391 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/podf23e23a4-6c6f-4b2a-9293-60513a318002/d84172657088c768f99088ae9425ac40a8a711c33b78a6d923a6c28eddc7b971\" instead: unknown" containerID="d84172657088c768f99088ae9425ac40a8a711c33b78a6d923a6c28eddc7b971"
E0530 12:44:51.478633 32845 kuberuntime_manager.go:1261] container &Container{Name:csi-provisioner,Image:longhornio/csi-provisioner:v3.6.2,Command:[],Args:[--v=2 --csi-address=$(ADDRESS) --timeout=1m50s --leader-election --leader-election-namespace=$(POD_NAMESPACE) --default-fstype=ext4],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pzjgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-provisioner-667796df57-gh85f_longhorn-system(f23e23a4-6c6f-4b2a-9293-60513a318002): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/podf23e23a4-6c6f-4b2a-9293-60513a318002/d84172657088c768f99088ae9425ac40a8a711c33b78a6d923a6c28eddc7b971" instead: unknown
E0530 12:44:51.478826 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/podf23e23a4-6c6f-4b2a-9293-60513a318002/d84172657088c768f99088ae9425ac40a8a711c33b78a6d923a6c28eddc7b971\\\" instead: unknown\"" pod="longhorn-system/csi-provisioner-667796df57-gh85f" podUID="f23e23a4-6c6f-4b2a-9293-60513a318002"
E0530 12:44:51.560400 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/podfe303a02-b17f-48ef-bfd7-28d0fbfd6031/453b19f7029337f01ca4ebd651569ce1d9161be6e4f12eee1d7fc075683cb1d5\" instead: unknown" containerID="453b19f7029337f01ca4ebd651569ce1d9161be6e4f12eee1d7fc075683cb1d5"
E0530 12:44:51.560806 32845 kuberuntime_manager.go:1261] container &Container{Name:postgres,Image:ghcr.io/cloudnative-pg/postgresql:16.2,Command:[/controller/manager instance run --log-level=info],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:postgresql,HostPort:0,ContainerPort:5432,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9187,Protocol:TCP,HostIP:,},ContainerPort{Name:status,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:PGDATA,Value:/var/lib/postgresql/data/pgdata,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:forex-pg-c0-2,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:default,ValueFrom:nil,},EnvVar{Name:CLUSTER_NAME,Value:forex-pg-c0,ValueFrom:nil,},EnvVar{Name:PGPORT,Value:5432,ValueFrom:nil,},EnvVar{Name:PGHOST,Value:/controller/run,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:pgdata,ReadOnly:false,MountPath:/var/lib/postgresql/data,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:scratch-data,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:scratch-data,ReadOnly:false,MountPath:/controller,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:shm,ReadOnly:false,MountPath:/dev/shm,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:app-secret,ReadOnly:false,MountPath:/etc/app-secret,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6mcv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:30,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod forex-pg-c0-2_default(fe303a02-b17f-48ef-bfd7-28d0fbfd6031): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/podfe303a02-b17f-48ef-bfd7-28d0fbfd6031/453b19f7029337f01ca4ebd651569ce1d9161be6e4f12eee1d7fc075683cb1d5" instead: unknown
E0530 12:44:51.567454 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgres\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/podfe303a02-b17f-48ef-bfd7-28d0fbfd6031/453b19f7029337f01ca4ebd651569ce1d9161be6e4f12eee1d7fc075683cb1d5\\\" instead: unknown\"" pod="default/forex-pg-c0-2" podUID="fe303a02-b17f-48ef-bfd7-28d0fbfd6031"
I0530 12:44:52.059117 32845 scope.go:117] "RemoveContainer" containerID="b479f6bb9691e33cec6b4cd2370af439e0fc35f64a438da6f2240efb544d0e61"
E0530 12:44:52.060557 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cert-manager-webhook pod=cert-manager-webhook-649b4d699f-bgkjm_cert-manager(5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df)\"" pod="cert-manager/cert-manager-webhook-649b4d699f-bgkjm" podUID="5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df"
I0530 12:44:52.118189 32845 scope.go:117] "RemoveContainer" containerID="91ec76b1e4118a68b6038cbc0356ee1f41582c37f0cbd946f17d399a33f81d5f"
I0530 12:44:52.119090 32845 scope.go:117] "RemoveContainer" containerID="d84172657088c768f99088ae9425ac40a8a711c33b78a6d923a6c28eddc7b971"
E0530 12:44:52.119869 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=csi-provisioner pod=csi-provisioner-667796df57-gh85f_longhorn-system(f23e23a4-6c6f-4b2a-9293-60513a318002)\"" pod="longhorn-system/csi-provisioner-667796df57-gh85f" podUID="f23e23a4-6c6f-4b2a-9293-60513a318002"
I0530 12:44:52.142692 32845 scope.go:117] "RemoveContainer" containerID="b8082cfa48644d810fd7f3a7292a5d63c52ea4abc69673bbcd36f3838430620e"
E0530 12:44:52.143875 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"longhorn-ui\" with CrashLoopBackOff: \"back-off 10s restarting failed container=longhorn-ui pod=longhorn-ui-6d89c47858-djwxb_longhorn-system(fb26427c-2761-4f44-bc60-3b6c59bd8fa7)\"" pod="longhorn-system/longhorn-ui-6d89c47858-djwxb" podUID="fb26427c-2761-4f44-bc60-3b6c59bd8fa7"
I0530 12:44:52.166402 32845 scope.go:117] "RemoveContainer" containerID="ad4bd79a8f54c4b67e4cc455ed0d026a13e1c416c12fd99385084b5a4e5d8618"
E0530 12:44:52.167112 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 10s restarting failed container=csi-attacher pod=csi-attacher-5c4bfdcf59-llv6p_longhorn-system(b0049f06-952d-492b-8474-c83f1be671a3)\"" pod="longhorn-system/csi-attacher-5c4bfdcf59-llv6p" podUID="b0049f06-952d-492b-8474-c83f1be671a3"
I0530 12:44:52.191220 32845 scope.go:117] "RemoveContainer" containerID="86720979cfe77517b3e41ada274dc6a80dcb92e9adac83bd5ffa61986ad37a12"
I0530 12:44:52.221265 32845 scope.go:117] "RemoveContainer" containerID="3097da2a989316543db0b463137d71d0d371b175dd6a9df850f571d277bc8404"
E0530 12:44:52.223692 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=csi-resizer pod=csi-resizer-694f8f5f64-8nbbh_longhorn-system(2e6d4d1b-f46a-4d23-be82-9784b9e34a37)\"" pod="longhorn-system/csi-resizer-694f8f5f64-8nbbh" podUID="2e6d4d1b-f46a-4d23-be82-9784b9e34a37"
I0530 12:44:52.236366 32845 scope.go:117] "RemoveContainer" containerID="453b19f7029337f01ca4ebd651569ce1d9161be6e4f12eee1d7fc075683cb1d5"
E0530 12:44:52.237535 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgres\" with CrashLoopBackOff: \"back-off 10s restarting failed container=postgres pod=forex-pg-c0-2_default(fe303a02-b17f-48ef-bfd7-28d0fbfd6031)\"" pod="default/forex-pg-c0-2" podUID="fe303a02-b17f-48ef-bfd7-28d0fbfd6031"
I0530 12:44:52.240639 32845 scope.go:117] "RemoveContainer" containerID="08c5de51da1e874c248cc645358ad475e51076ff986d4b2c725da2a095b29769"
I0530 12:44:52.255866 32845 scope.go:117] "RemoveContainer" containerID="f7900459b1f907e5b6fe4e989e2cb1db3e970b6b9a3b43b741de0479e42bf0fd"
E0530 12:44:52.256789 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"instant-push\" with CrashLoopBackOff: \"back-off 10s restarting failed container=instant-push pod=instant-push-replica-65c76ccd7c-h864k_default(e287e774-829b-4788-b958-8829168f0364)\"" pod="default/instant-push-replica-65c76ccd7c-h864k" podUID="e287e774-829b-4788-b958-8829168f0364"
I0530 12:44:52.294586 32845 scope.go:117] "RemoveContainer" containerID="0f53a3e18cbfa49e0c3058376cbc637d6aef3cadbef2eb85bbdc970c10eff7dd"
I0530 12:44:52.440157 32845 scope.go:117] "RemoveContainer" containerID="01a1e2d94ea2fbe55d2bb3c2e60273246c6fc44ad6ce4c4962ef0bae9a3b778d"
I0530 12:44:52.587135 32845 scope.go:117] "RemoveContainer" containerID="480af378d82daf3e4af7bbb3508fc0bbfceb3b8513328c35621141923dadd202"
I0530 12:44:52.666850 32845 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\") pod \"116fd16b-b81c-4750-a0be-682ed21c14f6\" (UID: \"116fd16b-b81c-4750-a0be-682ed21c14f6\") "
E0530 12:44:52.668235 32845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c podName:116fd16b-b81c-4750-a0be-682ed21c14f6 nodeName:}" failed. No retries permitted until 2024-05-30 12:44:56.668080188 -0400 EDT m=+18.961453121 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c") pod "116fd16b-b81c-4750-a0be-682ed21c14f6" (UID: "116fd16b-b81c-4750-a0be-682ed21c14f6") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name driver.longhorn.io not found in the list of registered CSI drivers
INFO[0015] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error
I0530 12:44:53.308126 32845 scope.go:117] "RemoveContainer" containerID="453b19f7029337f01ca4ebd651569ce1d9161be6e4f12eee1d7fc075683cb1d5"
I0530 12:44:53.308803 32845 scope.go:117] "RemoveContainer" containerID="b479f6bb9691e33cec6b4cd2370af439e0fc35f64a438da6f2240efb544d0e61"
E0530 12:44:53.309477 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgres\" with CrashLoopBackOff: \"back-off 10s restarting failed container=postgres pod=forex-pg-c0-2_default(fe303a02-b17f-48ef-bfd7-28d0fbfd6031)\"" pod="default/forex-pg-c0-2" podUID="fe303a02-b17f-48ef-bfd7-28d0fbfd6031"
E0530 12:44:53.309828 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cert-manager-webhook pod=cert-manager-webhook-649b4d699f-bgkjm_cert-manager(5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df)\"" pod="cert-manager/cert-manager-webhook-649b4d699f-bgkjm" podUID="5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df"
I0530 12:44:54.314957 32845 scope.go:117] "RemoveContainer" containerID="453b19f7029337f01ca4ebd651569ce1d9161be6e4f12eee1d7fc075683cb1d5"
E0530 12:44:54.316838 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgres\" with CrashLoopBackOff: \"back-off 10s restarting failed container=postgres pod=forex-pg-c0-2_default(fe303a02-b17f-48ef-bfd7-28d0fbfd6031)\"" pod="default/forex-pg-c0-2" podUID="fe303a02-b17f-48ef-bfd7-28d0fbfd6031"
I0530 12:44:56.733593 32845 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\") pod \"116fd16b-b81c-4750-a0be-682ed21c14f6\" (UID: \"116fd16b-b81c-4750-a0be-682ed21c14f6\") "
E0530 12:44:56.734096 32845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c podName:116fd16b-b81c-4750-a0be-682ed21c14f6 nodeName:}" failed. No retries permitted until 2024-05-30 12:45:04.733943049 -0400 EDT m=+27.027316092 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c") pod "116fd16b-b81c-4750-a0be-682ed21c14f6" (UID: "116fd16b-b81c-4750-a0be-682ed21c14f6") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name driver.longhorn.io not found in the list of registered CSI drivers
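The UnmountVolume retries above back off exponentially: 1s at 12:44:49, then 2s, 4s, now 8s, and 16s further down. They keep failing because the Longhorn CSI plugin pod that would register driver.longhorn.io is itself stuck on the same runc error (see the longhorn-csi-plugin CrashLoopBackOff below), so the driver never re-registers with the kubelet. A sketch of that doubling policy, with the cap value chosen here for illustration rather than taken from kubelet's nestedpendingoperations:

package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 1 * time.Second               // first durationBeforeRetry seen above
	maxDelay     = 2*time.Minute + 2*time.Second // assumed cap, for illustration
)

// nextDelay doubles the previous retry delay until it reaches the cap,
// reproducing the 1s, 2s, 4s, 8s, 16s, ... sequence in these logs.
func nextDelay(prev time.Duration) time.Duration {
	if prev <= 0 {
		return initialDelay
	}
	if next := 2 * prev; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 0; i < 6; i++ {
		d = nextDelay(d)
		fmt.Println(d) // 1s 2s 4s 8s 16s 32s
	}
}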
INFO[0020] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error
INFO[0020] Tunnel authorizer set Kubelet Port 10250
I0530 12:44:59.623812 32845 scope.go:117] "RemoveContainer" containerID="453b19f7029337f01ca4ebd651569ce1d9161be6e4f12eee1d7fc075683cb1d5"
E0530 12:44:59.626053 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgres\" with CrashLoopBackOff: \"back-off 10s restarting failed container=postgres pod=forex-pg-c0-2_default(fe303a02-b17f-48ef-bfd7-28d0fbfd6031)\"" pod="default/forex-pg-c0-2" podUID="fe303a02-b17f-48ef-bfd7-28d0fbfd6031"
I0530 12:45:00.309448 32845 scope.go:117] "RemoveContainer" containerID="b479f6bb9691e33cec6b4cd2370af439e0fc35f64a438da6f2240efb544d0e61"
I0530 12:45:00.372345 32845 scope.go:117] "RemoveContainer" containerID="453b19f7029337f01ca4ebd651569ce1d9161be6e4f12eee1d7fc075683cb1d5"
E0530 12:45:00.373572 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgres\" with CrashLoopBackOff: \"back-off 10s restarting failed container=postgres pod=forex-pg-c0-2_default(fe303a02-b17f-48ef-bfd7-28d0fbfd6031)\"" pod="default/forex-pg-c0-2" podUID="fe303a02-b17f-48ef-bfd7-28d0fbfd6031"
I0530 12:45:02.399381 32845 scope.go:117] "RemoveContainer" containerID="d84172657088c768f99088ae9425ac40a8a711c33b78a6d923a6c28eddc7b971"
I0530 12:45:03.400176 32845 scope.go:117] "RemoveContainer" containerID="3097da2a989316543db0b463137d71d0d371b175dd6a9df850f571d277bc8404"
INFO[0026] Slow SQL (started: 2024-05-30 12:45:00.805428213 -0400 EDT m=+23.098801127) (total time: 3.595498911s): INSERT INTO kine(name, created, deleted, create_revision, prev_revision, lease, value, old_value) values(?, ?, ?, ?, ?, ?, ?, ?) : [[/registry/events/kube-system/coredns-59b4f5bbd5-cznj7.17d451f9b0cf90a6 0 0 82809400 82809457 3660 [107 56 115 0 10 11 10 2 118 49 18 5 69 118 101 110 116 18 149 5 10 252 2 10 41 99 111 114 101 100 110 115 45 53 57 98 52 102 53 98 98 100 53 45 99 122 110 106 55 46 49 55 100 52 53 49 102 57 98 48 99 102 57 48 97 54 18 0 26 11 107 117 98 101 45 115 121 115 116 101 109 34 0 42 36 52 99 54 54 97 56 49 48 45 100 56 56 101 45 52 100 48 51 45 97 49 98 55 45 50 56 99 56 51 56 55 99 56 100 98 100 50 0 56 0 66 8 8 130 218 226 178 6 16 0 138 1 136 2 10 9 107 51 115 45 97 114 109 54 52 18 6 85 112 100 97 116 101 26 2 118 49 34 8 8 140 218 226 178 6 16 0 50 8 70 105 101 108 100 115 86 49 58 216 1 10 213 1 123 34 102 58 99 111 117 110 116 34 58 123 125 44 34 102 58 102 105 114 115 116 84 105 109 101 115 116 97 109 112 34 58 123 125 44 34 102 58 105 110 118 111 108 118 101 100 79 98 106 101 99 116 34 58 123 125 44 34 102 58 108 97 115 116 84 105 109 101 115 116 97 109 112 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 114 101 112 111 114 116 105 110 103 67 111 109 112 111 110 101 110 116 34 58 123 125 44 34 102 58 114 101 112 111 114 116 105 110 103 73 110 115 116 97 110 99 101 34 58 123 125 44 34 102 58 115 111 117 114 99 101 34 58 123 34 102 58 99 111 109 112 111 110 101 110 116 34 58 123 125 44 34 102 58 104 111 115 116 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 66 0 18 122 10 3 80 111 100 18 11 107 117 98 101 45 115 121 115 116 101 109 26 24 99 111 114 101 100 110 115 45 53 57 98 52 102 53 98 98 100 53 45 99 122 110 106 55 34 36 57 50 98 99 51 56 54 53 45 53 102 54 102 45 52 49 52 52 45 57 98 55 56 45 57 49 56 97 57 97 100 49 50 57 48 99 42 2 118 49 50 8 56 50 56 48 56 57 49 53 58 24 115 112 101 99 46 99 111 110 116 97 105 110 101 114 115 123 99 111 114 101 100 110 115 125 26 9 85 110 104 101 97 108 116 104 121 34 62 82 101 97 100 105 110 101 115 115 32 112 114 111 98 101 32 102 97 105 108 101 100 58 32 72 84 84 80 32 112 114 111 98 101 32 102 97 105 108 101 100 32 119 105 116 104 32 115 116 97 116 117 115 99 111 100 101 58 32 53 48 51 42 21 10 7 107 117 98 101 108 101 116 18 10 104 97 114 114 121 122 99 121 45 51 50 8 8 130 218 226 178 6 16 0 58 8 8 140 218 226 178 6 16 0 64 8 74 7 87 97 114 110 105 110 103 82 0 98 0 114 7 107 117 98 101 108 101 116 122 10 104 97 114 114 121 122 99 121 45 51 26 0 34 0] [107 56 115 0 10 11 10 2 118 49 18 5 69 118 101 110 116 18 149 5 10 252 2 10 41 99 111 114 101 100 110 115 45 53 57 98 52 102 53 98 98 100 53 45 99 122 110 106 55 46 49 55 100 52 53 49 102 57 98 48 99 102 57 48 97 54 18 0 26 11 107 117 98 101 45 115 121 115 116 101 109 34 0 42 36 52 99 54 54 97 56 49 48 45 100 56 56 101 45 52 100 48 51 45 97 49 98 55 45 50 56 99 56 51 56 55 99 56 100 98 100 50 0 56 0 66 8 8 130 218 226 178 6 16 0 138 1 136 2 10 9 107 51 115 45 97 114 109 54 52 18 6 85 112 100 97 116 101 26 2 118 49 34 8 8 138 218 226 178 6 16 0 50 8 70 105 101 108 100 115 86 49 58 216 1 10 213 1 123 34 102 58 99 111 117 110 116 34 58 123 125 44 34 102 58 102 105 114 115 116 84 105 109 101 115 116 97 109 112 34 58 123 125 44 34 102 58 105 110 118 111 108 118 101 100 79 98 106 101 99 116 34 58 123 125 44 34 102 58 108 97 115 116 84 105 109 101 115 116 97 
109 112 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 114 101 112 111 114 116 105 110 103 67 111 109 112 111 110 101 110 116 34 58 123 125 44 34 102 58 114 101 112 111 114 116 105 110 103 73 110 115 116 97 110 99 101 34 58 123 125 44 34 102 58 115 111 117 114 99 101 34 58 123 34 102 58 99 111 109 112 111 110 101 110 116 34 58 123 125 44 34 102 58 104 111 115 116 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 66 0 18 122 10 3 80 111 100 18 11 107 117 98 101 45 115 121 115 116 101 109 26 24 99 111 114 101 100 110 115 45 53 57 98 52 102 53 98 98 100 53 45 99 122 110 106 55 34 36 57 50 98 99 51 56 54 53 45 53 102 54 102 45 52 49 52 52 45 57 98 55 56 45 57 49 56 97 57 97 100 49 50 57 48 99 42 2 118 49 50 8 56 50 56 48 56 57 49 53 58 24 115 112 101 99 46 99 111 110 116 97 105 110 101 114 115 123 99 111 114 101 100 110 115 125 26 9 85 110 104 101 97 108 116 104 121 34 62 82 101 97 100 105 110 101 115 115 32 112 114 111 98 101 32 102 97 105 108 101 100 58 32 72 84 84 80 32 112 114 111 98 101 32 102 97 105 108 101 100 32 119 105 116 104 32 115 116 97 116 117 115 99 111 100 101 58 32 53 48 51 42 21 10 7 107 117 98 101 108 101 116 18 10 104 97 114 114 121 122 99 121 45 51 50 8 8 130 218 226 178 6 16 0 58 8 8 138 218 226 178 6 16 0 64 7 74 7 87 97 114 110 105 110 103 82 0 98 0 114 7 107 117 98 101 108 101 116 122 10 104 97 114 114 121 122 99 121 45 51 26 0 34 0]]]
I0530 12:45:04.402254 32845 scope.go:117] "RemoveContainer" containerID="b8082cfa48644d810fd7f3a7292a5d63c52ea4abc69673bbcd36f3838430620e"
I0530 12:45:04.403519 32845 trace.go:236] Trace[696534801]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2ea9afcd-edaf-4b8e-8610-0a8a1c52f521,client:127.0.0.1,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events/coredns-59b4f5bbd5-cznj7.17d451f9b0cf90a6,user-agent:k3s-arm64/v1.28.10+k3s1 (linux/arm64) kubernetes/a4c5612,verb:PATCH (30-May-2024 12:45:00.794) (total time: 3609ms):
Trace[696534801]: ["GuaranteedUpdate etcd3" audit-id:2ea9afcd-edaf-4b8e-8610-0a8a1c52f521,key:/events/kube-system/coredns-59b4f5bbd5-cznj7.17d451f9b0cf90a6,type:*core.Event,resource:events 3609ms (12:45:00.794)
Trace[696534801]: ---"Txn call completed" 3601ms (12:45:04.402)]
Trace[696534801]: ---"Object stored in database" 3601ms (12:45:04.403)
Trace[696534801]: [3.609281962s] [3.609281962s] END
INFO[0026] Slow SQL (started: 2024-05-30 12:45:03.352433137 -0400 EDT m=+25.645806107) (total time: 1.057888994s): SELECT * FROM ( SELECT ( SELECT MAX(rkv.id) AS id FROM kine AS rkv), ( SELECT MAX(crkv.prev_revision) AS prev_revision FROM kine AS crkv WHERE crkv.name = 'compact_rev_key'), kv.id AS theid, kv.name, kv.created, kv.deleted, kv.create_revision, kv.prev_revision, kv.lease, kv.value, kv.old_value FROM kine AS kv JOIN ( SELECT MAX(mkv.id) AS id FROM kine AS mkv WHERE mkv.name LIKE ? GROUP BY mkv.name) AS maxkv ON maxkv.id = kv.id WHERE kv.deleted = 0 OR ? ) AS lkv ORDER BY lkv.theid ASC LIMIT 1 : [[/registry/leases/kube-system/apiserver-wripr5zuxnlxme5dfy2svf27ua false]]
INFO[0026] Slow SQL (started: 2024-05-30 12:45:02.402967412 -0400 EDT m=+24.696340363) (total time: 2.008656242s): SELECT * FROM ( SELECT ( SELECT MAX(rkv.id) AS id FROM kine AS rkv), ( SELECT MAX(crkv.prev_revision) AS prev_revision FROM kine AS crkv WHERE crkv.name = 'compact_rev_key'), kv.id AS theid, kv.name, kv.created, kv.deleted, kv.create_revision, kv.prev_revision, kv.lease, kv.value, kv.old_value FROM kine AS kv JOIN ( SELECT MAX(mkv.id) AS id FROM kine AS mkv WHERE mkv.name LIKE ? GROUP BY mkv.name) AS maxkv ON maxkv.id = kv.id WHERE kv.deleted = 0 OR ? ) AS lkv ORDER BY lkv.theid ASC : [[/registry/pods/longhorn-system/csi-provisioner-667796df57-gh85f false]]
I0530 12:45:04.424703 32845 trace.go:236] Trace[492704599]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a8d714c2-8ffa-455f-9ffe-c68c1ec0e4fa,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-wripr5zuxnlxme5dfy2svf27ua,user-agent:k3s-arm64/v1.28.10+k3s1 (linux/arm64) kubernetes/a4c5612,verb:PUT (30-May-2024 12:45:03.348) (total time: 1076ms):
Trace[492704599]: ["GuaranteedUpdate etcd3" audit-id:a8d714c2-8ffa-455f-9ffe-c68c1ec0e4fa,key:/leases/kube-system/apiserver-wripr5zuxnlxme5dfy2svf27ua,type:*coordination.Lease,resource:leases.coordination.k8s.io 1075ms (12:45:03.348)
Trace[492704599]: ---"Txn call completed" 1072ms (12:45:04.424)]
Trace[492704599]: [1.076050552s] [1.076050552s] END
I0530 12:45:04.431722 32845 trace.go:236] Trace[1012493379]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:51fe896c-89e3-4586-af32-5662ee04a1f3,client:127.0.0.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/longhorn-system/pods/csi-provisioner-667796df57-gh85f,user-agent:k3s-arm64/v1.28.10+k3s1 (linux/arm64) kubernetes/a4c5612,verb:GET (30-May-2024 12:45:02.402) (total time: 2029ms):
Trace[1012493379]: ---"About to write a response" 2024ms (12:45:04.426)
Trace[1012493379]: [2.029401495s] [2.029401495s] END
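The "Slow SQL" lines and the apiserver traces around them describe the same stalls: the 3.609s event PATCH trace matches the 3.595s kine INSERT, the 1.076s lease PUT matches the 1.057s kine SELECT, and the 2.029s pod GET matches the 2.008s SELECT, so the latency sits in the sqlite-backed kine datastore rather than in the apiserver itself. kine produces these lines by timing each statement and logging any that run longer than a threshold; a generic sketch of that pattern (the 500ms threshold here is an assumption for illustration, not kine's constant):

package slowsql

import (
	"context"
	"database/sql"
	"log"
	"time"
)

// slowThreshold is the latency above which a statement gets logged.
const slowThreshold = 500 * time.Millisecond

// ExecLogged runs a statement and, if it was slow, logs it in the spirit of
// the "Slow SQL (started: ...) (total time: ...)" lines above.
func ExecLogged(ctx context.Context, db *sql.DB, query string, args ...any) (sql.Result, error) {
	start := time.Now()
	res, err := db.ExecContext(ctx, query, args...)
	if total := time.Since(start); total > slowThreshold {
		log.Printf("Slow SQL (started: %s) (total time: %s): %s", start, total, query)
	}
	return res, err
}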
INFO[0026] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error
I0530 12:45:04.753309 32845 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c\") pod \"116fd16b-b81c-4750-a0be-682ed21c14f6\" (UID: \"116fd16b-b81c-4750-a0be-682ed21c14f6\") "
E0530 12:45:04.753683 32845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c podName:116fd16b-b81c-4750-a0be-682ed21c14f6 nodeName:}" failed. No retries permitted until 2024-05-30 12:45:20.753558937 -0400 EDT m=+43.046931888 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-29339ee5-93ea-4b92-b811-1af27fbf8f8c") pod "116fd16b-b81c-4750-a0be-682ed21c14f6" (UID: "116fd16b-b81c-4750-a0be-682ed21c14f6") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name driver.longhorn.io not found in the list of registered CSI drivers
I0530 12:45:05.399501 32845 scope.go:117] "RemoveContainer" containerID="4e9863e932c90c4817d620511669405a9eccad5874941e6ff2e5e337efa87fc2"
I0530 12:45:06.399402 32845 scope.go:117] "RemoveContainer" containerID="ad4bd79a8f54c4b67e4cc455ed0d026a13e1c416c12fd99385084b5a4e5d8618"
I0530 12:45:07.401076 32845 scope.go:117] "RemoveContainer" containerID="f7900459b1f907e5b6fe4e989e2cb1db3e970b6b9a3b43b741de0479e42bf0fd"
INFO[0030] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error
I0530 12:45:07.975396 32845 trace.go:236] Trace[1044736132]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6d425283-aec9-4c8b-8278-349bdd5a217d,client:127.0.0.1,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events/instant-push-replica-65c76ccd7c-h864k.17d451f9541a44d7,user-agent:k3s-arm64/v1.28.10+k3s1 (linux/arm64) kubernetes/a4c5612,verb:PATCH (30-May-2024 12:45:07.412) (total time: 562ms):
Trace[1044736132]: ["GuaranteedUpdate etcd3" audit-id:6d425283-aec9-4c8b-8278-349bdd5a217d,key:/events/default/instant-push-replica-65c76ccd7c-h864k.17d451f9541a44d7,type:*core.Event,resource:events 563ms (12:45:07.412)
Trace[1044736132]: ---"Txn call completed" 553ms (12:45:07.974)]
Trace[1044736132]: ---"Object stored in database" 554ms (12:45:07.974)
Trace[1044736132]: [562.934829ms] [562.934829ms] END
I0530 12:45:08.084161 32845 trace.go:236] Trace[1311145174]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7d8c23bf-d87d-4b20-9618-7dc355210243,client:127.0.0.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/default/pods/instant-push-replica-65c76ccd7c-h864k/status,user-agent:k3s-arm64/v1.28.10+k3s1 (linux/arm64) kubernetes/a4c5612,verb:PATCH (30-May-2024 12:45:07.421) (total time: 662ms):
Trace[1311145174]: ["GuaranteedUpdate etcd3" audit-id:7d8c23bf-d87d-4b20-9618-7dc355210243,key:/pods/default/instant-push-replica-65c76ccd7c-h864k,type:*core.Pod,resource:pods 662ms (12:45:07.421)
Trace[1311145174]: ---"Txn call completed" 652ms (12:45:08.081)]
Trace[1311145174]: ---"Object stored in database" 654ms (12:45:08.082)
Trace[1311145174]: [662.74621ms] [662.74621ms] END
E0530 12:45:08.266280 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/pod5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df/a3f51b7d23e2ccf627bd46a98eceea1e004c3ec321ce319acf98acea998ae72f\" instead: unknown" containerID="a3f51b7d23e2ccf627bd46a98eceea1e004c3ec321ce319acf98acea998ae72f"
E0530 12:45:08.266807 32845 kuberuntime_manager.go:1261] container &Container{Name:cert-manager-webhook,Image:quay.io/jetstack/cert-manager-webhook:v1.13.2,Command:[],Args:[--v=2 --secure-port=10250 --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-dns-names=cert-manager-webhook --dynamic-serving-dns-names=cert-manager-webhook.$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook.$(POD_NAMESPACE).svc],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xktmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 6080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-649b4d699f-bgkjm_cert-manager(5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/pod5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df/a3f51b7d23e2ccf627bd46a98eceea1e004c3ec321ce319acf98acea998ae72f" instead: unknown
E0530 12:45:08.267126 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/pod5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df/a3f51b7d23e2ccf627bd46a98eceea1e004c3ec321ce319acf98acea998ae72f\\\" instead: unknown\"" pod="cert-manager/cert-manager-webhook-649b4d699f-bgkjm" podUID="5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df"
I0530 12:45:08.467717 32845 scope.go:117] "RemoveContainer" containerID="b479f6bb9691e33cec6b4cd2370af439e0fc35f64a438da6f2240efb544d0e61"
I0530 12:45:08.468167 32845 scope.go:117] "RemoveContainer" containerID="a3f51b7d23e2ccf627bd46a98eceea1e004c3ec321ce319acf98acea998ae72f"
E0530 12:45:08.469327 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cert-manager-webhook pod=cert-manager-webhook-649b4d699f-bgkjm_cert-manager(5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df)\"" pod="cert-manager/cert-manager-webhook-649b4d699f-bgkjm" podUID="5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df"
I0530 12:45:08.505961 32845 scope.go:117] "RemoveContainer" containerID="62780715d01072ddd2258dac7c6772309dd2bc549ac980ded65f1a0f937d4612"
E0530 12:45:08.506983 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"longhorn-csi-plugin\" with CrashLoopBackOff: \"back-off 10s restarting failed container=longhorn-csi-plugin pod=longhorn-csi-plugin-xsc9d_longhorn-system(887b3014-f33a-4176-bc1c-5107e9d2ab8f)\"" pod="longhorn-system/longhorn-csi-plugin-xsc9d" podUID="887b3014-f33a-4176-bc1c-5107e9d2ab8f"
I0530 12:45:08.773710 32845 scope.go:117] "RemoveContainer" containerID="2d6a6c86f2ec446eb0fcd5bef8eb95c534fdfa73f886031b86d321c0563a950d"
INFO[0031] Waiting for API server to become available
INFO[0031] Waiting for API server to become available
E0530 12:45:08.995220 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/podfb26427c-2761-4f44-bc60-3b6c59bd8fa7/0b3fb2b47b862a0ec6489f0faaa175107353040e7c4a6888fca2da8074f30bf8\" instead: unknown" containerID="0b3fb2b47b862a0ec6489f0faaa175107353040e7c4a6888fca2da8074f30bf8"
E0530 12:45:08.995540 32845 kuberuntime_manager.go:1261] container &Container{Name:longhorn-ui,Image:longhornio/longhorn-ui:v1.6.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:LONGHORN_MANAGER_IP,Value:http://longhorn-backend:9500,ValueFrom:nil,},EnvVar{Name:LONGHORN_UI_PORT,Value:8000,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:nginx-cache,ReadOnly:false,MountPath:/var/cache/nginx/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:nginx-config,ReadOnly:false,MountPath:/var/config/nginx/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vwnsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod longhorn-ui-6d89c47858-djwxb_longhorn-system(fb26427c-2761-4f44-bc60-3b6c59bd8fa7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/podfb26427c-2761-4f44-bc60-3b6c59bd8fa7/0b3fb2b47b862a0ec6489f0faaa175107353040e7c4a6888fca2da8074f30bf8" instead: unknown
E0530 12:45:08.995696 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"longhorn-ui\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/podfb26427c-2761-4f44-bc60-3b6c59bd8fa7/0b3fb2b47b862a0ec6489f0faaa175107353040e7c4a6888fca2da8074f30bf8\\\" instead: unknown\"" pod="longhorn-system/longhorn-ui-6d89c47858-djwxb" podUID="fb26427c-2761-4f44-bc60-3b6c59bd8fa7"
E0530 12:45:09.059336 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/pod2e6d4d1b-f46a-4d23-be82-9784b9e34a37/6fc88dd59dd31b918c38f5d01cdb3ebfe924c8da9885ccf2f50fcb61cc102eca\" instead: unknown" containerID="6fc88dd59dd31b918c38f5d01cdb3ebfe924c8da9885ccf2f50fcb61cc102eca"
E0530 12:45:09.059619 32845 kuberuntime_manager.go:1261] container &Container{Name:csi-resizer,Image:longhornio/csi-resizer:v1.9.2,Command:[],Args:[--v=2 --csi-address=$(ADDRESS) --timeout=1m50s --leader-election --leader-election-namespace=$(POD_NAMESPACE) --leader-election-namespace=$(POD_NAMESPACE) --handle-volume-inuse-error=false],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fc5w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-resizer-694f8f5f64-8nbbh_longhorn-system(2e6d4d1b-f46a-4d23-be82-9784b9e34a37): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/pod2e6d4d1b-f46a-4d23-be82-9784b9e34a37/6fc88dd59dd31b918c38f5d01cdb3ebfe924c8da9885ccf2f50fcb61cc102eca" instead: unknown
E0530 12:45:09.059768 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-resizer\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/pod2e6d4d1b-f46a-4d23-be82-9784b9e34a37/6fc88dd59dd31b918c38f5d01cdb3ebfe924c8da9885ccf2f50fcb61cc102eca\\\" instead: unknown\"" pod="longhorn-system/csi-resizer-694f8f5f64-8nbbh" podUID="2e6d4d1b-f46a-4d23-be82-9784b9e34a37"
E0530 12:45:09.187905 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/podb79d7d86-2a71-496a-9002-242328ec6c13/dcf66b703245c1162b2cdffe88925b1535a51ab1d678a058e08d82630b285a20\" instead: unknown" containerID="dcf66b703245c1162b2cdffe88925b1535a51ab1d678a058e08d82630b285a20"
E0530 12:45:09.188316 32845 kuberuntime_manager.go:1261] container &Container{Name:csi-snapshotter,Image:longhornio/csi-snapshotter:v6.3.2,Command:[],Args:[--v=2 --csi-address=$(ADDRESS) --timeout=1m50s --leader-election --leader-election-namespace=$(POD_NAMESPACE)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4l89b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshotter-959b69d4b-k6rtg_longhorn-system(b79d7d86-2a71-496a-9002-242328ec6c13): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/podb79d7d86-2a71-496a-9002-242328ec6c13/dcf66b703245c1162b2cdffe88925b1535a51ab1d678a058e08d82630b285a20" instead: unknown
E0530 12:45:09.188562 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshotter\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/podb79d7d86-2a71-496a-9002-242328ec6c13/dcf66b703245c1162b2cdffe88925b1535a51ab1d678a058e08d82630b285a20\\\" instead: unknown\"" pod="longhorn-system/csi-snapshotter-959b69d4b-k6rtg" podUID="b79d7d86-2a71-496a-9002-242328ec6c13"
E0530 12:45:09.313149 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/pode287e774-829b-4788-b958-8829168f0364/0d9993a677da7a4ce5803524e5679053810ead7f7bc032972259bdfccf85a36d\" instead: unknown" containerID="0d9993a677da7a4ce5803524e5679053810ead7f7bc032972259bdfccf85a36d"
E0530 12:45:09.313640 32845 kuberuntime_manager.go:1261] container &Container{Name:instant-push,Image:registry.zcy.dev/instant-push:2.6.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8087,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:INSTANCE_TYPE,Value:replica,ValueFrom:nil,},EnvVar{Name:MASTER_HOST,Value:instant-push.default.svc.cluster.local,ValueFrom:nil,},EnvVar{Name:AUTHELIA_DOMAIN,Value:https://auth.zcy.dev,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{50 -3} {<nil>} 50m DecimalSI},memory: {{16777216 0} {<nil>} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {<nil>} 50m DecimalSI},memory: {{16777216 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data-volume,ReadOnly:false,MountPath:/data,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9zkvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:nil,SecretRef:&SecretEnvSource{LocalObjectReference:LocalObjectReference{Name:instant-push-secret-g8gcf24c5h,},Optional:nil,},},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod instant-push-replica-65c76ccd7c-h864k_default(e287e774-829b-4788-b958-8829168f0364): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/pode287e774-829b-4788-b958-8829168f0364/0d9993a677da7a4ce5803524e5679053810ead7f7bc032972259bdfccf85a36d" instead: unknown
E0530 12:45:09.313919 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"instant-push\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/pode287e774-829b-4788-b958-8829168f0364/0d9993a677da7a4ce5803524e5679053810ead7f7bc032972259bdfccf85a36d\\\" instead: unknown\"" pod="default/instant-push-replica-65c76ccd7c-h864k" podUID="e287e774-829b-4788-b958-8829168f0364"
E0530 12:45:09.369949 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/podf23e23a4-6c6f-4b2a-9293-60513a318002/6bb7b425cdfc599d1f0958992964758141f034fbed8f93c1d05c481a4ab70e89\" instead: unknown" containerID="6bb7b425cdfc599d1f0958992964758141f034fbed8f93c1d05c481a4ab70e89"
E0530 12:45:09.370354 32845 kuberuntime_manager.go:1261] container &Container{Name:csi-provisioner,Image:longhornio/csi-provisioner:v3.6.2,Command:[],Args:[--v=2 --csi-address=$(ADDRESS) --timeout=1m50s --leader-election --leader-election-namespace=$(POD_NAMESPACE) --default-fstype=ext4],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pzjgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-provisioner-667796df57-gh85f_longhorn-system(f23e23a4-6c6f-4b2a-9293-60513a318002): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/podf23e23a4-6c6f-4b2a-9293-60513a318002/6bb7b425cdfc599d1f0958992964758141f034fbed8f93c1d05c481a4ab70e89" instead: unknown
E0530 12:45:09.370622 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/podf23e23a4-6c6f-4b2a-9293-60513a318002/6bb7b425cdfc599d1f0958992964758141f034fbed8f93c1d05c481a4ab70e89\\\" instead: unknown\"" pod="longhorn-system/csi-provisioner-667796df57-gh85f" podUID="f23e23a4-6c6f-4b2a-9293-60513a318002"
E0530 12:45:09.397521 32845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/besteffort/podb0049f06-952d-492b-8474-c83f1be671a3/2f784d886d9360652269e219a9f5f4294a3de26ff3d16681faaace65412258e8\" instead: unknown" containerID="2f784d886d9360652269e219a9f5f4294a3de26ff3d16681faaace65412258e8"
E0530 12:45:09.397762 32845 kuberuntime_manager.go:1261] container &Container{Name:csi-attacher,Image:longhornio/csi-attacher:v4.4.2,Command:[],Args:[--v=2 --csi-address=$(ADDRESS) --timeout=1m50s --leader-election --leader-election-namespace=$(POD_NAMESPACE)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g4kmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-attacher-5c4bfdcf59-llv6p_longhorn-system(b0049f06-952d-492b-8474-c83f1be671a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/podb0049f06-952d-492b-8474-c83f1be671a3/2f784d886d9360652269e219a9f5f4294a3de26ff3d16681faaace65412258e8" instead: unknown
E0530 12:45:09.397921 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-attacher\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format \\\"slice:prefix:name\\\" for systemd cgroups, got \\\"/kubepods/besteffort/podb0049f06-952d-492b-8474-c83f1be671a3/2f784d886d9360652269e219a9f5f4294a3de26ff3d16681faaace65412258e8\\\" instead: unknown\"" pod="longhorn-system/csi-attacher-5c4bfdcf59-llv6p" podUID="b0049f06-952d-492b-8474-c83f1be671a3"
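Every `StartContainer` failure above is the same runc error: with the systemd cgroup driver, runc expects a three-part `cgroupsPath` of the form `slice:prefix:name`, but the kubelet handed it a cgroupfs-style absolute path (`/kubepods/besteffort/pod<uid>/<container-id>`), which contains no colons at all. A minimal Go sketch of that format check (illustrative only; `parseSystemdCgroupsPath` is a hypothetical helper mirroring the check, not runc's actual source):

```go
package main

import (
	"fmt"
	"strings"
)

// A systemd cgroupsPath must be the three-part form "slice:prefix:name".
// A cgroupfs-style absolute path such as
// "/kubepods/besteffort/pod<uid>/<container-id>" has no colons, so the
// split fails -- producing exactly the message seen in the kubelet logs.
// (Illustrative sketch only; not runc's actual code.)
func parseSystemdCgroupsPath(p string) (slice, prefix, name string, err error) {
	parts := strings.Split(p, ":")
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf(
			"expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got %q instead", p)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	// The path kubelet handed to runc in the longhorn-ui error above:
	_, _, _, err := parseSystemdCgroupsPath(
		"/kubepods/besteffort/podfb26427c-2761-4f44-bc60-3b6c59bd8fa7/0b3fb2b47b862a0ec6489f0faaa175107353040e7c4a6888fca2da8074f30bf8")
	fmt.Println(err) // reproduces the logged error message

	// What a systemd-driver path looks like (hypothetical example values):
	s, pfx, n, _ := parseSystemdCgroupsPath(
		"kubepods-besteffort-podfb26427c_2761_4f44_bc60_3b6c59bd8fa7.slice:cri-containerd:0b3fb2b47b862a0ec6489f0faaa175107353040e7c4a6888fca2da8074f30bf8")
	fmt.Println(s, pfx, n)
}
```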
I0530 12:45:09.520576 32845 scope.go:117] "RemoveContainer" containerID="3097da2a989316543db0b463137d71d0d371b175dd6a9df850f571d277bc8404"
I0530 12:45:09.522375 32845 scope.go:117] "RemoveContainer" containerID="6fc88dd59dd31b918c38f5d01cdb3ebfe924c8da9885ccf2f50fcb61cc102eca"
E0530 12:45:09.524023 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-resizer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=csi-resizer pod=csi-resizer-694f8f5f64-8nbbh_longhorn-system(2e6d4d1b-f46a-4d23-be82-9784b9e34a37)\"" pod="longhorn-system/csi-resizer-694f8f5f64-8nbbh" podUID="2e6d4d1b-f46a-4d23-be82-9784b9e34a37"
I0530 12:45:09.532708 32845 scope.go:117] "RemoveContainer" containerID="0d9993a677da7a4ce5803524e5679053810ead7f7bc032972259bdfccf85a36d"
E0530 12:45:09.535383 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"instant-push\" with CrashLoopBackOff: \"back-off 20s restarting failed container=instant-push pod=instant-push-replica-65c76ccd7c-h864k_default(e287e774-829b-4788-b958-8829168f0364)\"" pod="default/instant-push-replica-65c76ccd7c-h864k" podUID="e287e774-829b-4788-b958-8829168f0364"
I0530 12:45:09.541167 32845 scope.go:117] "RemoveContainer" containerID="a3f51b7d23e2ccf627bd46a98eceea1e004c3ec321ce319acf98acea998ae72f"
E0530 12:45:09.542207 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cert-manager-webhook pod=cert-manager-webhook-649b4d699f-bgkjm_cert-manager(5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df)\"" pod="cert-manager/cert-manager-webhook-649b4d699f-bgkjm" podUID="5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df"
I0530 12:45:09.551788 32845 scope.go:117] "RemoveContainer" containerID="6bb7b425cdfc599d1f0958992964758141f034fbed8f93c1d05c481a4ab70e89"
E0530 12:45:09.553427 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=csi-provisioner pod=csi-provisioner-667796df57-gh85f_longhorn-system(f23e23a4-6c6f-4b2a-9293-60513a318002)\"" pod="longhorn-system/csi-provisioner-667796df57-gh85f" podUID="f23e23a4-6c6f-4b2a-9293-60513a318002"
I0530 12:45:09.563635 32845 scope.go:117] "RemoveContainer" containerID="f7900459b1f907e5b6fe4e989e2cb1db3e970b6b9a3b43b741de0479e42bf0fd"
I0530 12:45:09.567993 32845 scope.go:117] "RemoveContainer" containerID="0b3fb2b47b862a0ec6489f0faaa175107353040e7c4a6888fca2da8074f30bf8"
E0530 12:45:09.569051 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"longhorn-ui\" with CrashLoopBackOff: \"back-off 20s restarting failed container=longhorn-ui pod=longhorn-ui-6d89c47858-djwxb_longhorn-system(fb26427c-2761-4f44-bc60-3b6c59bd8fa7)\"" pod="longhorn-system/longhorn-ui-6d89c47858-djwxb" podUID="fb26427c-2761-4f44-bc60-3b6c59bd8fa7"
I0530 12:45:09.593240 32845 scope.go:117] "RemoveContainer" containerID="62780715d01072ddd2258dac7c6772309dd2bc549ac980ded65f1a0f937d4612"
E0530 12:45:09.594462 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"longhorn-csi-plugin\" with CrashLoopBackOff: \"back-off 10s restarting failed container=longhorn-csi-plugin pod=longhorn-csi-plugin-xsc9d_longhorn-system(887b3014-f33a-4176-bc1c-5107e9d2ab8f)\"" pod="longhorn-system/longhorn-csi-plugin-xsc9d" podUID="887b3014-f33a-4176-bc1c-5107e9d2ab8f"
I0530 12:45:09.600867 32845 scope.go:117] "RemoveContainer" containerID="2f784d886d9360652269e219a9f5f4294a3de26ff3d16681faaace65412258e8"
E0530 12:45:09.601703 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-attacher\" with CrashLoopBackOff: \"back-off 20s restarting failed container=csi-attacher pod=csi-attacher-5c4bfdcf59-llv6p_longhorn-system(b0049f06-952d-492b-8474-c83f1be671a3)\"" pod="longhorn-system/csi-attacher-5c4bfdcf59-llv6p" podUID="b0049f06-952d-492b-8474-c83f1be671a3"
I0530 12:45:09.604750 32845 scope.go:117] "RemoveContainer" containerID="d84172657088c768f99088ae9425ac40a8a711c33b78a6d923a6c28eddc7b971"
I0530 12:45:09.610272 32845 scope.go:117] "RemoveContainer" containerID="dcf66b703245c1162b2cdffe88925b1535a51ab1d678a058e08d82630b285a20"
E0530 12:45:09.611113 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshotter\" with CrashLoopBackOff: \"back-off 20s restarting failed container=csi-snapshotter pod=csi-snapshotter-959b69d4b-k6rtg_longhorn-system(b79d7d86-2a71-496a-9002-242328ec6c13)\"" pod="longhorn-system/csi-snapshotter-959b69d4b-k6rtg" podUID="b79d7d86-2a71-496a-9002-242328ec6c13"
I0530 12:45:09.636705 32845 scope.go:117] "RemoveContainer" containerID="b8082cfa48644d810fd7f3a7292a5d63c52ea4abc69673bbcd36f3838430620e"
I0530 12:45:09.668392 32845 scope.go:117] "RemoveContainer" containerID="ad4bd79a8f54c4b67e4cc455ed0d026a13e1c416c12fd99385084b5a4e5d8618"
I0530 12:45:09.696441 32845 scope.go:117] "RemoveContainer" containerID="4e9863e932c90c4817d620511669405a9eccad5874941e6ff2e5e337efa87fc2"
I0530 12:45:10.647397 32845 scope.go:117] "RemoveContainer" containerID="a3f51b7d23e2ccf627bd46a98eceea1e004c3ec321ce319acf98acea998ae72f"
E0530 12:45:10.648522 32845 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cert-manager-webhook pod=cert-manager-webhook-649b4d699f-bgkjm_cert-manager(5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df)\"" pod="cert-manager/cert-manager-webhook-649b4d699f-bgkjm" podUID="5f9fa0d6-ede5-4a6c-bfbf-686d1d8c73df"
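The usual cause of this pattern is a cgroup driver mismatch: the runtime side (runc, e.g. via containerd's `SystemdCgroup = true` runc option) is using the systemd driver while the kubelet is still generating cgroupfs-style paths, so every container start fails and the affected pods fall into CrashLoopBackOff, as seen above. Aligning the two sides (both systemd or both cgroupfs) typically clears it. For illustration, a rough sketch of how a cgroupfs pod path maps to the systemd `slice:prefix:name` form once both sides agree on the systemd driver (`toSystemdName` is a hypothetical helper loosely modeled on kubelet's `cgroup_manager_linux.go`, not the real implementation):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// toSystemdName loosely mimics how kubelet's systemd cgroup driver turns a
// cgroupfs hierarchy like /kubepods/besteffort/pod<uid> into a systemd slice
// name: dashes inside each path component are escaped to underscores, the
// components are joined with "-", and ".slice" is appended.
// (Hypothetical helper for illustration.)
func toSystemdName(cgroupfsPath string) string {
	parts := strings.Split(strings.Trim(path.Clean(cgroupfsPath), "/"), "/")
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, "-", "_")
	}
	return strings.Join(parts, "-") + ".slice"
}

func main() {
	pod := "/kubepods/besteffort/podfb26427c-2761-4f44-bc60-3b6c59bd8fa7"
	// With both kubelet and the runtime on the systemd driver, runc would
	// instead receive something like "<slice>:cri-containerd:<container-id>".
	fmt.Printf("%s:cri-containerd:%s\n",
		toSystemdName(pod),
		"0b3fb2b47b862a0ec6489f0faaa175107353040e7c4a6888fca2da8074f30bf8")
}
```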