@brandonros
Created November 5, 2023 20:42
root@vultr:~# journalctl -u k3s
Nov 05 20:29:09 vultr systemd[1]: Starting k3s.service - Lightweight Kubernetes...
Nov 05 20:29:09 vultr sh[18362]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Nov 05 20:29:09 vultr systemctl[18363]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Nov 05 20:29:09 vultr k3s[18368]: time="2023-11-05T20:29:09Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Nov 05 20:29:09 vultr k3s[18368]: time="2023-11-05T20:29:09Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/e82313669fe2739df53b3870076163d1fe7785336a68b4771685219e51c9785d"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Starting k3s v1.27.7+k3s1 (b6f23014)"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Database tables and indexes are up to date"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Kine available at unix://kine.sock"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="generated self-signed CA certificate CN=k3s-client-ca@1699216152: notBefore=2023-11-05 20:29:12.214874605 +0000 UTC notAfter=2033-11-02 20:29:12.214874605 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=system:k3s-supervisor,O=system:masters signed by CN=k3s-client-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="generated self-signed CA certificate CN=k3s-server-ca@1699216152: notBefore=2023-11-05 20:29:12.221417244 +0000 UTC notAfter=2033-11-02 20:29:12.221417244 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="generated self-signed CA certificate CN=k3s-request-header-ca@1699216152: notBefore=2023-11-05 20:29:12.222832374 +0000 UTC notAfter=2033-11-02 20:29:12.222832374 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="generated self-signed CA certificate CN=etcd-server-ca@1699216152: notBefore=2023-11-05 20:29:12.224050279 +0000 UTC notAfter=2033-11-02 20:29:12.224050279 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="generated self-signed CA certificate CN=etcd-peer-ca@1699216152: notBefore=2023-11-05 20:29:12.225266992 +0000 UTC notAfter=2033-11-02 20:29:12.225266992 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Saving cluster bootstrap data to datastore"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:12 +0000 UTC"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=warning msg="dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-207.148.30.37:207.148.30.37 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernet>
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s>
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s>
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Waiting for API server to become available"
Nov 05 20:29:12 vultr k3s[18368]: W1105 20:29:12.663643 18368 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0>
Nov 05 20:29:12 vultr k3s[18368]: I1105 20:29:12.664350 18368 server.go:568] external host was not specified, using 207.148.30.37
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind->
Nov 05 20:29:12 vultr k3s[18368]: I1105 20:29:12.665619 18368 server.go:174] Version: v1.27.7+k3s1
Nov 05 20:29:12 vultr k3s[18368]: I1105 20:29:12.665670 18368 server.go:176] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="To join server node to cluster: k3s server -s https://207.148.30.37:6443 -t ${SERVER_NODE_TOKEN}"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="To join agent node to cluster: k3s agent -s https://207.148.30.37:6443 -t ${AGENT_NODE_TOKEN}"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 05 20:29:12 vultr k3s[18368]: time="2023-11-05T20:29:12Z" level=info msg="Run: k3s kubectl"
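
Note: the lines above print the join hints and the token file paths for this server. A minimal sketch of how one might act on them, assuming a second machine that can reach 207.148.30.37:6443 (the token value is read from the file path shown in the log, not invented here):

    # on this server: read the agent join token (path taken from the log above)
    sudo cat /var/lib/rancher/k3s/server/agent-token
    # on a new agent node: join using that token, exactly as the log suggests
    k3s agent -s https://207.148.30.37:6443 -t ${AGENT_NODE_TOKEN}
    # back on the server: kubectl is available through the bundled wrapper
    sudo k3s kubectl get nodes
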
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.117424 18368 shared_informer.go:311] Waiting for caches to sync for node_authorizer
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.130033 18368 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtec>
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.130072 18368 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,Certifi>
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.138899 18368 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.138938 18368 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.140449 18368 instance.go:282] Using reconciler: lease
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.246446 18368 handler.go:232] Adding GroupVersion v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.246843 18368 instance.go:651] API group "internal.apiserver.k8s.io" is not enabled, skipping.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.429355 18368 instance.go:651] API group "resource.k8s.io" is not enabled, skipping.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.445586 18368 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.445660 18368 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.445678 18368 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.446581 18368 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.446623 18368 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.448270 18368 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.449685 18368 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.449722 18368 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.449734 18368 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.452387 18368 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.452433 18368 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.453748 18368 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.453784 18368 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.453794 18368 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.454649 18368 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.454680 18368 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.454748 18368 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.455738 18368 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.458718 18368 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.458758 18368 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.458768 18368 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.459464 18368 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.459489 18368 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.459499 18368 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.460873 18368 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.460904 18368 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.464080 18368 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.464130 18368 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.464142 18368 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.464807 18368 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.464845 18368 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.464854 18368 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.468066 18368 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.468108 18368 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.468118 18368 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.469841 18368 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.471493 18368 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.471516 18368 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.471523 18368 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.476573 18368 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.476624 18368 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.476634 18368 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.477857 18368 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.477887 18368 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.477894 18368 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.478582 18368 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.478614 18368 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: I1105 20:29:13.483027 18368 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
Nov 05 20:29:13 vultr k3s[18368]: W1105 20:29:13.483064 18368 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
Nov 05 20:29:13 vultr k3s[18368]: time="2023-11-05T20:29:13Z" level=info msg="Password verified locally for node vultr"
Nov 05 20:29:13 vultr k3s[18368]: time="2023-11-05T20:29:13Z" level=info msg="certificate CN=vultr signed by CN=k3s-server-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:13 +0000 UTC"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=info msg="certificate CN=system:node:vultr,O=system:nodes signed by CN=k3s-client-ca@1699216152: notBefore=2023-11-05 20:29:12 +0000 UTC notAfter=2024-11-04 20:29:14 +0000 UTC"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=info msg="Module overlay was already loaded"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=info msg="Module nf_conntrack was already loaded"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=info msg="Module br_netfilter was already loaded"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=warning msg="Failed to load kernel module iptable_nat with modprobe"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 131072"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=info msg="Set sysctl 'net/ipv4/conf/all/forwarding' to 1"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.563588 18368 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.563626 18368 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.563741 18368 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.564078 18368 secure_serving.go:213] Serving securely on 127.0.0.1:6444
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.564096 18368 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.565645 18368 gc_controller.go:78] Starting apiserver lease garbage collector
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.565872 18368 customresource_discovery_controller.go:289] Starting DiscoveryController
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.566021 18368 gc_controller.go:78] Starting apiserver lease garbage collector
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.566497 18368 available_controller.go:423] Starting AvailableConditionController
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.566520 18368 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.566550 18368 aggregator.go:150] waiting for initial CRD sync...
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.566596 18368 controller.go:80] Starting OpenAPI V3 AggregationController
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.566630 18368 controller.go:83] Starting OpenAPI AggregationController
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.566692 18368 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.566707 18368 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.567049 18368 apf_controller.go:373] Starting API Priority and Fairness config controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.567073 18368 apiservice_controller.go:97] Starting APIServiceRegistrationController
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.567554 18368 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.567104 18368 system_namespaces_controller.go:67] Starting system namespaces controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.567601 18368 controller.go:121] Starting legacy_token_tracking_controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.567619 18368 shared_informer.go:311] Waiting for caches to sync for configmaps
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.568796 18368 controller.go:85] Starting OpenAPI controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.568856 18368 controller.go:85] Starting OpenAPI V3 controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.568893 18368 naming_controller.go:291] Starting NamingConditionController
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.568923 18368 establishing_controller.go:76] Starting EstablishingController
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.568956 18368 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.568994 18368 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.569018 18368 crd_finalizer.go:266] Starting CRDFinalizer
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.569095 18368 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.569199 18368 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.570284 18368 handler_discovery.go:412] Starting ResourceDiscoveryManager
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.578207 18368 crdregistration_controller.go:111] Starting crd-autoregister controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.578248 18368 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.578294 18368 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.628818 18368 controller.go:624] quota admission added evaluator for: namespaces
Nov 05 20:29:14 vultr k3s[18368]: E1105 20:29:14.639301 18368 controller.go:152] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocate IP 10.43.0.1: cannot allocate resources of type serviceipallocations at>
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.667319 18368 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.667728 18368 shared_informer.go:318] Caches are synced for configmaps
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.669000 18368 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.669047 18368 apf_controller.go:378] Running API Priority and Fairness config worker
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.669056 18368 apf_controller.go:381] Running API Priority and Fairness periodic rebalancing process
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.669155 18368 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.678414 18368 shared_informer.go:318] Caches are synced for crd-autoregister
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.678624 18368 aggregator.go:152] initial CRD sync complete...
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.678637 18368 autoregister_controller.go:141] Starting autoregister controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.678647 18368 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.678662 18368 cache.go:39] Caches are synced for autoregister controller
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.696252 18368 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.718824 18368 shared_informer.go:318] Caches are synced for node_authorizer
Nov 05 20:29:14 vultr k3s[18368]: I1105 20:29:14.994884 18368 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 05 20:29:15 vultr k3s[18368]: time="2023-11-05T20:29:15Z" level=info msg="containerd is now running"
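
Note: with containerd now running on the socket shown earlier (/run/k3s/containerd/containerd.sock), its state can be inspected directly; a minimal sketch, assuming the crictl bundled with k3s is wired to that socket (or a standalone crictl pointed at it):

    # list containers via the k3s-bundled crictl
    sudo k3s crictl ps
    # or point a standalone crictl at the socket path from the log
    sudo crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock ps
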
Nov 05 20:29:15 vultr k3s[18368]: I1105 20:29:15.572122 18368 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
Nov 05 20:29:15 vultr k3s[18368]: time="2023-11-05T20:29:15Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/li>
Nov 05 20:29:15 vultr k3s[18368]: time="2023-11-05T20:29:15Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Nov 05 20:29:15 vultr k3s[18368]: time="2023-11-05T20:29:15Z" level=info msg="Handling backend connection request [vultr]"
Nov 05 20:29:15 vultr k3s[18368]: I1105 20:29:15.581187 18368 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
Nov 05 20:29:15 vultr k3s[18368]: I1105 20:29:15.581220 18368 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
Nov 05 20:29:15 vultr k3s[18368]: time="2023-11-05T20:29:15Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.181763 18368 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.240701 18368 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.355756 18368 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.43.0.1]
Nov 05 20:29:16 vultr k3s[18368]: W1105 20:29:16.363199 18368 lease.go:251] Resetting endpoints for master service "kubernetes" to [207.148.30.37]
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.365648 18368 controller.go:624] quota admission added evaluator for: endpoints
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.372847 18368 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
Nov 05 20:29:16 vultr k3s[18368]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Nov 05 20:29:16 vultr k3s[18368]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Nov 05 20:29:16 vultr k3s[18368]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.582083 18368 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.584035 18368 server.go:410] "Kubelet version" kubeletVersion="v1.27.7+k3s1"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.584281 18368 server.go:412] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.586324 18368 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.591695 18368 server.go:657] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.592081 18368 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.592193 18368 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd Kub>
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.592249 18368 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.592280 18368 container_manager_linux.go:301] "Creating device plugin manager"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.592451 18368 state_mem.go:36] "Initialized new in-memory state store"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.594869 18368 kubelet.go:405] "Attempting to sync node with API server"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.594913 18368 kubelet.go:298] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.594965 18368 kubelet.go:309] "Adding apiserver pod source"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.594994 18368 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.595939 18368 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.7-k3s1.27" apiVersion="v1"
Nov 05 20:29:16 vultr k3s[18368]: W1105 20:29:16.596905 18368 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.597963 18368 server.go:1163] "Started kubelet"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="Kube API server is now running"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="ETCD server is now running"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="k3s is up and running"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="Creating k3s-supervisor event broadcaster"
Nov 05 20:29:16 vultr systemd[1]: Started k3s.service - Lightweight Kubernetes.
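
Note: at this point the log reports the Kube API server and etcd as running ("k3s is up and running") and systemd marks the unit as started. A minimal verification sketch, assuming the kubeconfig written earlier to /etc/rancher/k3s/k3s.yaml:

    # confirm the node registers and the packaged add-ons (coredns, traefik, metrics-server) roll out
    sudo k3s kubectl get nodes -o wide
    sudo k3s kubectl -n kube-system get pods
    # or use plain kubectl against the generated kubeconfig
    sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes
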
Nov 05 20:29:16 vultr k3s[18368]: E1105 20:29:16.600681 18368 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
Nov 05 20:29:16 vultr k3s[18368]: E1105 20:29:16.600720 18368 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.604580 18368 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.606309 18368 server.go:461] "Adding debug handlers to kubelet server"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="Waiting for cloud-controller-manager privileges to become available"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.608695 18368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.608765 18368 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Nov 05 20:29:16 vultr k3s[18368]: W1105 20:29:16.598000 18368 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.614405 18368 volume_manager.go:284] "Starting Kubelet Volume Manager"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.614546 18368 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="Applying CRD helmcharts.helm.cattle.io"
Nov 05 20:29:16 vultr k3s[18368]: E1105 20:29:16.654623 18368 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"vultr\" not found" node="vultr"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.659039 18368 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.659071 18368 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.659100 18368 state_mem.go:36] "Initialized new in-memory state store"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.660906 18368 policy_none.go:49] "None policy: Start"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.664412 18368 memory_manager.go:169] "Starting memorymanager" policy="None"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.664445 18368 state_mem.go:35] "Initializing new in-memory state store"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="Applying CRD helmchartconfigs.helm.cattle.io"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.691214 18368 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.693403 18368 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.693450 18368 status_manager.go:207] "Starting to sync pod status with apiserver"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.693479 18368 kubelet.go:2257] "Starting kubelet main sync loop"
Nov 05 20:29:16 vultr k3s[18368]: E1105 20:29:16.693556 18368 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="Applying CRD addons.k3s.cattle.io"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.719138 18368 kubelet_node_status.go:70] "Attempting to register node" node="vultr"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="Applying CRD etcdsnapshotfiles.k3s.cattle.io"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.735173 18368 handler.go:232] Adding GroupVersion helm.cattle.io v1 to ResourceManager
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.744312 18368 handler.go:232] Adding GroupVersion helm.cattle.io v1 to ResourceManager
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.773654 18368 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.774182 18368 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 05 20:29:16 vultr k3s[18368]: E1105 20:29:16.779623 18368 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"vultr\" not found"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.785221 18368 kubelet_node_status.go:73] "Successfully registered node" node="vultr"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="Annotations and labels have been set successfully on node: vultr"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="Starting flannel with backend vxlan"
Nov 05 20:29:16 vultr k3s[18368]: time="2023-11-05T20:29:16Z" level=info msg="Waiting for CRD etcdsnapshotfiles.k3s.cattle.io to become available"
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.824780 18368 handler.go:232] Adding GroupVersion k3s.cattle.io v1 to ResourceManager
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.828885 18368 handler.go:232] Adding GroupVersion k3s.cattle.io v1 to ResourceManager
Nov 05 20:29:16 vultr k3s[18368]: I1105 20:29:16.836109 18368 handler.go:232] Adding GroupVersion k3s.cattle.io v1 to ResourceManager
Nov 05 20:29:17 vultr k3s[18368]: I1105 20:29:17.034115 18368 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Nov 05 20:29:17 vultr k3s[18368]: I1105 20:29:17.197167 18368 serving.go:355] Generated self-signed cert in-memory
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Done waiting for CRD etcdsnapshotfiles.k3s.cattle.io to become available"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
Nov 05 20:29:17 vultr k3s[18368]: I1105 20:29:17.595700 18368 apiserver.go:52] "Watching apiserver"
Nov 05 20:29:17 vultr k3s[18368]: I1105 20:29:17.615481 18368 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Nov 05 20:29:17 vultr k3s[18368]: I1105 20:29:17.622941 18368 reconciler.go:41] "Reconciler: start to sync state"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-21.2.1+up21.2.0.tgz"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-21.2.1+up21.2.0.tgz"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Failed to get existing traefik HelmChart" error="helmcharts.helm.cattle.io \"traefik\" not found"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Tunnel server egress proxy mode: agent"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 05 20:29:17 vultr k3s[18368]: time="2023-11-05T20:29:17Z" level=info msg="Creating deploy event broadcaster"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Starting /v1, Kind=Node controller"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Creating helm-controller event broadcaster"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Cluster dns configmap has been set successfully"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Starting batch/v1, Kind=Job controller"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Starting /v1, Kind=Secret controller"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Starting /v1, Kind=ConfigMap controller"
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Starting /v1, Kind=ServiceAccount controller"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.306411 18368 controllermanager.go:187] "Starting" version="v1.27.7+k3s1"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.306447 18368 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.310442 18368 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.310519 18368 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.310469 18368 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.310555 18368 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.310480 18368 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.310568 18368 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.311547 18368 secure_serving.go:213] Serving securely on 127.0.0.1:10257
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.312133 18368 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.323150 18368 shared_informer.go:311] Waiting for caches to sync for tokens
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.330829 18368 controller.go:624] quota admission added evaluator for: serviceaccounts
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.333696 18368 controllermanager.go:638] "Started controller" controller="attachdetach"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.333799 18368 attach_detach_controller.go:343] "Starting attach detach controller"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.333824 18368 shared_informer.go:311] Waiting for caches to sync for attach detach
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.343294 18368 controllermanager.go:638] "Started controller" controller="clusterrole-aggregation"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.343438 18368 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.343458 18368 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.368345 18368 controllermanager.go:638] "Started controller" controller="namespace"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.368405 18368 namespace_controller.go:197] "Starting namespace controller"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.368428 18368 shared_informer.go:311] Waiting for caches to sync for namespace
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.380253 18368 controllermanager.go:638] "Started controller" controller="job"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.380374 18368 job_controller.go:202] Starting job controller
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.380392 18368 shared_informer.go:311] Waiting for caches to sync for job
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.390583 18368 controllermanager.go:638] "Started controller" controller="deployment"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.390739 18368 deployment_controller.go:168] "Starting controller" controller="deployment"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.390763 18368 shared_informer.go:311] Waiting for caches to sync for deployment
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.401123 18368 controllermanager.go:638] "Started controller" controller="statefulset"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.401187 18368 stateful_set.go:161] "Starting stateful set controller"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.401204 18368 shared_informer.go:311] Waiting for caches to sync for stateful set
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Updating TLS secret for kube-system/k3s-serving (count: 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-207.148.30.37:207.148.30.37 listener.cattle.io/cn-__1-f16284:::1 listener>
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.410908 18368 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.410937 18368 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.410952 18368 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.412213 18368 controllermanager.go:638] "Started controller" controller="tokencleaner"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.412246 18368 controllermanager.go:603] "Warning: controller is disabled" controller="service"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.412355 18368 tokencleaner.go:112] "Starting token cleaner controller"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.412371 18368 shared_informer.go:311] Waiting for caches to sync for token_cleaner
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.412381 18368 shared_informer.go:318] Caches are synced for token_cleaner
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.422597 18368 controllermanager.go:638] "Started controller" controller="ephemeral-volume"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.422647 18368 controller.go:169] "Starting ephemeral volume controller"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.422668 18368 shared_informer.go:311] Waiting for caches to sync for ephemeral
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.423951 18368 shared_informer.go:318] Caches are synced for tokens
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.434519 18368 controllermanager.go:638] "Started controller" controller="endpointslice"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.434618 18368 endpointslice_controller.go:252] Starting endpoint slice controller
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.434636 18368 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.445590 18368 node_lifecycle_controller.go:431] "Controller will reconcile labels"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.445886 18368 controllermanager.go:638] "Started controller" controller="nodelifecycle"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.445959 18368 node_lifecycle_controller.go:465] "Sending events to api server"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.446003 18368 node_lifecycle_controller.go:476] "Starting node controller"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.446013 18368 shared_informer.go:311] Waiting for caches to sync for taint
Nov 05 20:29:18 vultr k3s[18368]: time="2023-11-05T20:29:18Z" level=info msg="Active TLS secret kube-system/k3s-serving (ver=232) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-207.148.30.37:207.148.30.37 listener.cattle.io/cn-__1-f16284:::1 liste>
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.578451 18368 controllermanager.go:638] "Started controller" controller="serviceaccount"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.578529 18368 serviceaccounts_controller.go:111] "Starting service account controller"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.578566 18368 shared_informer.go:311] Waiting for caches to sync for service account
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.728323 18368 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.728355 18368 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.728385 18368 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.728705 18368 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.728719 18368 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.728737 18368 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.728914 18368 controllermanager.go:638] "Started controller" controller="csrsigning"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.728984 18368 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.728989 18368 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.729008 18368 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.729019 18368 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.729031 18368 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.728995 18368 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.777259 18368 controllermanager.go:638] "Started controller" controller="csrapproving"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.777311 18368 controllermanager.go:603] "Warning: controller is disabled" controller="bootstrapsigner"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.777326 18368 controllermanager.go:603] "Warning: controller is disabled" controller="route"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.777442 18368 certificate_controller.go:112] Starting certificate controller "csrapproving"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.777459 18368 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.847054 18368 event.go:307] "Event occurred" object="vultr" fieldPath="" kind="Node" apiVersion="" type="Normal" reason="NodePasswordValidationComplete" message="Deferred node password secret validation complete"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.944110 18368 controllermanager.go:638] "Started controller" controller="root-ca-cert-publisher"
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.944186 18368 publisher.go:101] Starting root CA certificate configmap publisher
Nov 05 20:29:18 vultr k3s[18368]: I1105 20:29:18.944241 18368 shared_informer.go:311] Waiting for caches to sync for crt configmap
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.078869 18368 controllermanager.go:638] "Started controller" controller="endpointslicemirroring"
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.078956 18368 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.078990 18368 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.228453 18368 controllermanager.go:638] "Started controller" controller="podgc"
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.228522 18368 gc_controller.go:103] Starting GC controller
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.228534 18368 shared_informer.go:311] Waiting for caches to sync for GC
Nov 05 20:29:19 vultr k3s[18368]: time="2023-11-05T20:29:19Z" level=info msg="Labels and annotations have been set successfully on node: vultr"
Nov 05 20:29:19 vultr k3s[18368]: time="2023-11-05T20:29:19Z" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.379195 18368 controllermanager.go:638] "Started controller" controller="cronjob"
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.379345 18368 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.379375 18368 shared_informer.go:311] Waiting for caches to sync for cronjob
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.427124 18368 controllermanager.go:638] "Started controller" controller="csrcleaner"
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.427210 18368 cleaner.go:82] Starting CSR cleaner controller
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.578946 18368 controllermanager.go:638] "Started controller" controller="pv-protection"
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.579020 18368 pv_protection_controller.go:78] "Starting PV protection controller"
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.579038 18368 shared_informer.go:311] Waiting for caches to sync for PV protection
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.727713 18368 controllermanager.go:638] "Started controller" controller="endpoint"
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.727811 18368 endpoints_controller.go:172] Starting endpoint controller
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.727821 18368 shared_informer.go:311] Waiting for caches to sync for endpoint
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.955332 18368 controller.go:624] quota admission added evaluator for: addons.k3s.cattle.io
Nov 05 20:29:19 vultr k3s[18368]: I1105 20:29:19.961085 18368 event.go:307] "Event occurred" object="kube-system/ccm" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.004716 18368 event.go:307] "Event occurred" object="kube-system/ccm" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.024901 18368 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.036866 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.036964 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037005 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="jobs.batch"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037033 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037073 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037114 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037144 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037326 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="helmcharts.helm.cattle.io"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037367 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="addons.k3s.cattle.io"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037419 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="deployments.apps"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037444 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037471 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="endpoints"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037504 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="limitranges"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037547 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037592 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037628 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037686 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037717 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037787 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="helmchartconfigs.helm.cattle.io"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037875 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037917 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="podtemplates"
Nov 05 20:29:20 vultr k3s[18368]: W1105 20:29:20.037934 18368 shared_informer.go:592] resyncPeriod 13h29m38.333473399s is smaller than resyncCheckPeriod 14h33m39.738847063s and the informer has already started. Changing it to 14h33m39.738847063s
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.037993 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.038040 18368 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.038062 18368 controllermanager.go:638] "Started controller" controller="resourcequota"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.038314 18368 resource_quota_controller.go:295] "Starting resource quota controller"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.038354 18368 shared_informer.go:311] Waiting for caches to sync for resource quota
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.038383 18368 resource_quota_monitor.go:304] "QuotaMonitor running"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.081398 18368 controller.go:624] quota admission added evaluator for: deployments.apps
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.099313 18368 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.43.0.10]
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.100362 18368 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.111483 18368 event.go:307] "Event occurred" object="kube-system/local-storage" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.165181 18368 event.go:307] "Event occurred" object="kube-system/local-storage" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.174840 18368 event.go:307] "Event occurred" object="kube-system/aggregated-metrics-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server>
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.177093 18368 controllermanager.go:638] "Started controller" controller="daemonset"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.177222 18368 daemon_controller.go:291] "Starting daemon sets controller"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.177240 18368 shared_informer.go:311] Waiting for caches to sync for daemon sets
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.186208 18368 event.go:307] "Event occurred" object="kube-system/aggregated-metrics-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/a>
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.195557 18368 event.go:307] "Event occurred" object="kube-system/auth-delegator" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-deleg>
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.204983 18368 event.go:307] "Event occurred" object="kube-system/auth-delegator" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegat>
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.214267 18368 event.go:307] "Event occurred" object="kube-system/auth-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.y>
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.221296 18368 event.go:307] "Event occurred" object="kube-system/auth-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yam>
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.412005 18368 event.go:307] "Event occurred" object="kube-system/metrics-apiservice" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metric>
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.423918 18368 event.go:307] "Event occurred" object="kube-system/metrics-apiservice" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics->
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.449820 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:29:20 vultr k3s[18368]: E1105 20:29:20.450211 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.450259 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.811252 18368 event.go:307] "Event occurred" object="kube-system/metrics-server-deployment" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server>
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.829561 18368 event.go:307] "Event occurred" object="kube-system/metrics-server-deployment" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/m>
Nov 05 20:29:20 vultr k3s[18368]: time="2023-11-05T20:29:20Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=vultr --kubeconfig=/var/lib/rancher/k3s/>
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.848660 18368 server.go:226] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.857291 18368 node.go:141] Successfully retrieved node IP: 207.148.30.37
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.857336 18368 server_others.go:110] "Detected node IP" address="207.148.30.37"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.865696 18368 server_others.go:192] "Using iptables Proxier"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.865745 18368 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.865759 18368 server_others.go:200] "Creating dualStackProxier for iptables"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.865786 18368 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.865827 18368 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.866619 18368 server.go:658] "Version info" version="v1.27.7+k3s1"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.866657 18368 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.867393 18368 config.go:188] "Starting service config controller"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.867428 18368 shared_informer.go:311] Waiting for caches to sync for service config
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.867451 18368 config.go:97] "Starting endpoint slice config controller"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.867456 18368 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.867912 18368 config.go:315] "Starting node config controller"
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.867928 18368 shared_informer.go:311] Waiting for caches to sync for node config
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.967790 18368 shared_informer.go:318] Caches are synced for service config
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.967794 18368 shared_informer.go:318] Caches are synced for endpoint slice config
Nov 05 20:29:20 vultr k3s[18368]: I1105 20:29:20.968141 18368 shared_informer.go:318] Caches are synced for node config
Nov 05 20:29:21 vultr k3s[18368]: E1105 20:29:21.179982 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:21 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:21 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:21 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.180018 18368 proxier.go:862] "Sync failed" retryingTime="30s"
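[editor's note] The repeated kube-proxy errors above all fail the same way: the iptables v1.8.9 (nf_tables) binary invoked by kube-proxy cannot add the KUBE-SERVICES jump because CHAIN_ADD on the built-in nat OUTPUT chain returns "No such file or directory", so each sync is retried after 30s. A minimal host-side check, shown here only as an illustrative sketch and not part of the captured session, is to compare the backend the host uses with what the failing calls report:

    iptables --version                        # backend reported by the host binary: legacy or nf_tables
    iptables -t nat -L OUTPUT -n | head       # can the built-in nat OUTPUT chain be listed at all?
    nft list table ip nat 2>/dev/null | head  # what the kernel's nf_tables side actually holds
    lsmod | grep -E 'nf_tables|ip_tables'     # which table backends the kernel has loaded

If the host tooling and the binary kube-proxy is invoking disagree on the backend (legacy vs nf_tables), that mismatch is a plausible place to start; this is an assumption for debugging, not a confirmed root cause from the log.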
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.210300 18368 event.go:307] "Event occurred" object="kube-system/metrics-server-service" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/me>
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.220221 18368 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.43.221.43]
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.221192 18368 event.go:307] "Event occurred" object="kube-system/metrics-server-service" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metr>
Nov 05 20:29:21 vultr k3s[18368]: E1105 20:29:21.226518 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: endpoints "metrics-server" not found
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.226595 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:29:21 vultr k3s[18368]: E1105 20:29:21.411606 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:21 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:21 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:21 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.411639 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:29:21 vultr k3s[18368]: E1105 20:29:21.411934 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:21 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:21 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:21 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.411954 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:29:21 vultr k3s[18368]: W1105 20:29:21.449804 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:29:21 vultr k3s[18368]: E1105 20:29:21.449881 18368 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.449897 18368 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:29:21 vultr k3s[18368]: W1105 20:29:21.449922 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:29:21 vultr k3s[18368]: E1105 20:29:21.450017 18368 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Nov 05 20:29:21 vultr k3s[18368]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.451186 18368 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.476113 18368 serving.go:355] Generated self-signed cert in-memory
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.611309 18368 event.go:307] "Event occurred" object="kube-system/resource-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource->
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.632294 18368 event.go:307] "Event occurred" object="kube-system/resource-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-re>
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.878291 18368 serving.go:355] Generated self-signed cert in-memory
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.011106 18368 event.go:307] "Event occurred" object="kube-system/rolebindings" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.045231 18368 event.go:307] "Event occurred" object="kube-system/rolebindings" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.412525 18368 event.go:307] "Event occurred" object="kube-system/traefik" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/traefik.yaml\""
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.420411 18368 controller.go:624] quota admission added evaluator for: helmcharts.helm.cattle.io
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.428723 18368 event.go:307] "Event occurred" object="kube-system/traefik" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/traefik.yaml\""
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.451014 18368 event.go:307] "Event occurred" object="kube-system/traefik-crd" fieldPath="" kind="HelmChart" apiVersion="helm.cattle.io/v1" type="Normal" reason="ApplyJob" message="Applying HelmChart using Job kube-system/helm-install-traefik-crd"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.451046 18368 event.go:307] "Event occurred" object="kube-system/traefik" fieldPath="" kind="HelmChart" apiVersion="helm.cattle.io/v1" type="Normal" reason="ApplyJob" message="Applying HelmChart using Job kube-system/helm-install-traefik"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.480673 18368 controller.go:624] quota admission added evaluator for: jobs.batch
Nov 05 20:29:22 vultr k3s[18368]: time="2023-11-05T20:29:22Z" level=error msg="error syncing 'kube-system/traefik': handler helm-controller-chart-registration: helmcharts.helm.cattle.io \"traefik\" not found, requeuing"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.495052 18368 event.go:307] "Event occurred" object="kube-system/traefik" fieldPath="" kind="HelmChart" apiVersion="helm.cattle.io/v1" type="Normal" reason="ApplyJob" message="Applying HelmChart using Job kube-system/helm-install-traefik"
Nov 05 20:29:22 vultr k3s[18368]: time="2023-11-05T20:29:22Z" level=error msg="error syncing 'kube-system/traefik-crd': handler helm-controller-chart-registration: helmcharts.helm.cattle.io \"traefik-crd\" not found, requeuing"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.508044 18368 event.go:307] "Event occurred" object="kube-system/traefik-crd" fieldPath="" kind="HelmChart" apiVersion="helm.cattle.io/v1" type="Normal" reason="ApplyJob" message="Applying HelmChart using Job kube-system/helm-install-traefik-crd"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.527870 18368 event.go:307] "Event occurred" object="kube-system/traefik-crd" fieldPath="" kind="HelmChart" apiVersion="helm.cattle.io/v1" type="Normal" reason="ApplyJob" message="Applying HelmChart using Job kube-system/helm-install-traefik-crd"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.539064 18368 event.go:307] "Event occurred" object="kube-system/traefik" fieldPath="" kind="HelmChart" apiVersion="helm.cattle.io/v1" type="Normal" reason="ApplyJob" message="Applying HelmChart using Job kube-system/helm-install-traefik"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.568712 18368 event.go:307] "Event occurred" object="kube-system/traefik" fieldPath="" kind="HelmChart" apiVersion="helm.cattle.io/v1" type="Normal" reason="ApplyJob" message="Applying HelmChart using Job kube-system/helm-install-traefik"
Nov 05 20:29:22 vultr k3s[18368]: time="2023-11-05T20:29:22Z" level=info msg="Tunnel authorizer set Kubelet Port 10250"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.762439 18368 controllermanager.go:167] Version: v1.27.7+k3s1
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.767208 18368 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.767247 18368 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.767288 18368 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.767290 18368 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.767331 18368 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.767302 18368 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.768010 18368 secure_serving.go:213] Serving securely on 127.0.0.1:10258
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.768408 18368 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Nov 05 20:29:22 vultr k3s[18368]: E1105 20:29:22.782505 18368 controllermanager.go:523] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:29:22 vultr k3s[18368]: time="2023-11-05T20:29:22Z" level=info msg="Creating service-controller event broadcaster"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.867559 18368 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.867612 18368 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.867677 18368 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Nov 05 20:29:22 vultr k3s[18368]: time="2023-11-05T20:29:22Z" level=info msg="Starting /v1, Kind=Node controller"
Nov 05 20:29:22 vultr k3s[18368]: time="2023-11-05T20:29:22Z" level=info msg="Starting /v1, Kind=Pod controller"
Nov 05 20:29:22 vultr k3s[18368]: time="2023-11-05T20:29:22Z" level=info msg="Starting apps/v1, Kind=DaemonSet controller"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.942341 18368 controllermanager.go:336] Started "cloud-node-lifecycle"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.942892 18368 controllermanager.go:336] Started "service"
Nov 05 20:29:22 vultr k3s[18368]: W1105 20:29:22.942921 18368 controllermanager.go:313] "route" is disabled
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.943242 18368 controllermanager.go:336] Started "cloud-node"
Nov 05 20:29:22 vultr k3s[18368]: time="2023-11-05T20:29:22Z" level=info msg="Starting discovery.k8s.io/v1, Kind=EndpointSlice controller"
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.944214 18368 node_lifecycle_controller.go:113] Sending events to api server
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.944314 18368 controller.go:229] Starting service controller
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.944337 18368 shared_informer.go:311] Waiting for caches to sync for service
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.944373 18368 node_controller.go:161] Sending events to api server.
Nov 05 20:29:22 vultr k3s[18368]: I1105 20:29:22.944484 18368 node_controller.go:170] Waiting for informer caches to sync
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.045296 18368 shared_informer.go:318] Caches are synced for service
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.045300 18368 node_controller.go:427] Initializing node vultr with cloud provider
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.052566 18368 node_controller.go:496] Successfully initialized node vultr with cloud provider
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.052672 18368 event.go:307] "Event occurred" object="vultr" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
Nov 05 20:29:23 vultr k3s[18368]: time="2023-11-05T20:29:23Z" level=info msg="Updated coredns node hosts entry [207.148.30.37 vultr]"
Nov 05 20:29:23 vultr k3s[18368]: E1105 20:29:23.263801 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:23 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:23 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:23 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.263839 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:29:23 vultr k3s[18368]: E1105 20:29:23.264012 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:23 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:23 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:23 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.264080 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.427148 18368 serving.go:355] Generated self-signed cert in-memory
Nov 05 20:29:23 vultr k3s[18368]: time="2023-11-05T20:29:23Z" level=info msg="Stopped tunnel to 127.0.0.1:6443"
Nov 05 20:29:23 vultr k3s[18368]: time="2023-11-05T20:29:23Z" level=info msg="Connecting to proxy" url="wss://207.148.30.37:6443/v1-k3s/connect"
Nov 05 20:29:23 vultr k3s[18368]: time="2023-11-05T20:29:23Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
Nov 05 20:29:23 vultr k3s[18368]: time="2023-11-05T20:29:23Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Nov 05 20:29:23 vultr k3s[18368]: time="2023-11-05T20:29:23Z" level=info msg="Handling backend connection request [vultr]"
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.925029 18368 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.7+k3s1"
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.925068 18368 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.929388 18368 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.929676 18368 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.929409 18368 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.929840 18368 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.929427 18368 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.929885 18368 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.930752 18368 secure_serving.go:213] Serving securely on 127.0.0.1:10259
Nov 05 20:29:23 vultr k3s[18368]: I1105 20:29:23.930997 18368 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Nov 05 20:29:24 vultr k3s[18368]: I1105 20:29:24.030657 18368 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Nov 05 20:29:24 vultr k3s[18368]: I1105 20:29:24.030722 18368 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Nov 05 20:29:24 vultr k3s[18368]: I1105 20:29:24.030689 18368 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.242011 18368 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.242087 18368 controllermanager.go:638] "Started controller" controller="nodeipam"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.242107 18368 controllermanager.go:603] "Warning: controller is disabled" controller="cloud-node-lifecycle"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.242284 18368 node_ipam_controller.go:162] "Starting ipam controller"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.242307 18368 shared_informer.go:311] Waiting for caches to sync for node
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.252400 18368 controllermanager.go:638] "Started controller" controller="ttl-after-finished"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.252560 18368 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.252583 18368 shared_informer.go:311] Waiting for caches to sync for TTL after finished
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.262843 18368 controllermanager.go:638] "Started controller" controller="persistentvolume-binder"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.262969 18368 pv_controller_base.go:323] "Starting persistent volume controller"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.262992 18368 shared_informer.go:311] Waiting for caches to sync for persistent volume
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.272469 18368 controllermanager.go:638] "Started controller" controller="pvc-protection"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.272539 18368 pvc_protection_controller.go:102] "Starting PVC protection controller"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.272558 18368 shared_informer.go:311] Waiting for caches to sync for PVC protection
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.281361 18368 controllermanager.go:638] "Started controller" controller="persistentvolume-expander"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.281443 18368 expand_controller.go:339] "Starting expand controller"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.281460 18368 shared_informer.go:311] Waiting for caches to sync for expand
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.290936 18368 controllermanager.go:638] "Started controller" controller="replicationcontroller"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.291063 18368 replica_set.go:201] "Starting controller" name="replicationcontroller"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.291085 18368 shared_informer.go:311] Waiting for caches to sync for ReplicationController
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.309045 18368 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.309080 18368 shared_informer.go:311] Waiting for caches to sync for garbage collector
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.309106 18368 graph_builder.go:294] "Running" component="GraphBuilder"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.309187 18368 controllermanager.go:638] "Started controller" controller="garbagecollector"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.321028 18368 controllermanager.go:638] "Started controller" controller="replicaset"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.321158 18368 replica_set.go:201] "Starting controller" name="replicaset"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.321175 18368 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
Nov 05 20:29:30 vultr k3s[18368]: W1105 20:29:30.328073 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.383219 18368 controllermanager.go:638] "Started controller" controller="horizontalpodautoscaling"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.383307 18368 horizontal.go:200] "Starting HPA controller"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.383319 18368 shared_informer.go:311] Waiting for caches to sync for HPA
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.582303 18368 controllermanager.go:638] "Started controller" controller="disruption"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.582337 18368 disruption.go:423] Sending events to api server.
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.582403 18368 disruption.go:434] Starting disruption controller
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.582416 18368 shared_informer.go:311] Waiting for caches to sync for disruption
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.733654 18368 controllermanager.go:638] "Started controller" controller="ttl"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.733766 18368 ttl_controller.go:124] "Starting TTL controller"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.733786 18368 shared_informer.go:311] Waiting for caches to sync for TTL
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.739696 18368 shared_informer.go:311] Waiting for caches to sync for resource quota
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.753027 18368 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"vultr\" does not exist"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.756652 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.756711 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.765933 18368 shared_informer.go:311] Waiting for caches to sync for garbage collector
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.768582 18368 shared_informer.go:318] Caches are synced for namespace
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.772610 18368 shared_informer.go:318] Caches are synced for PVC protection
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.777336 18368 shared_informer.go:318] Caches are synced for daemon sets
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.777513 18368 shared_informer.go:318] Caches are synced for certificate-csrapproving
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.778630 18368 shared_informer.go:318] Caches are synced for service account
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.779090 18368 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.780294 18368 shared_informer.go:318] Caches are synced for cronjob
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.780423 18368 shared_informer.go:318] Caches are synced for job
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.784167 18368 shared_informer.go:318] Caches are synced for HPA
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.791136 18368 shared_informer.go:318] Caches are synced for ReplicationController
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.802279 18368 shared_informer.go:318] Caches are synced for stateful set
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.823046 18368 shared_informer.go:318] Caches are synced for ephemeral
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.828550 18368 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.828679 18368 shared_informer.go:318] Caches are synced for GC
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.828762 18368 shared_informer.go:318] Caches are synced for endpoint
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.828781 18368 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.829035 18368 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.829143 18368 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.834141 18368 shared_informer.go:318] Caches are synced for TTL
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.835111 18368 shared_informer.go:318] Caches are synced for endpoint_slice
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.842376 18368 shared_informer.go:318] Caches are synced for node
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.842438 18368 range_allocator.go:174] "Sending events to api server"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.842468 18368 range_allocator.go:178] "Starting range CIDR allocator"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.842477 18368 shared_informer.go:311] Waiting for caches to sync for cidrallocator
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.842485 18368 shared_informer.go:318] Caches are synced for cidrallocator
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.843515 18368 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.844634 18368 shared_informer.go:318] Caches are synced for crt configmap
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.846253 18368 shared_informer.go:318] Caches are synced for taint
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.846362 18368 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.846463 18368 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="vultr"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.846515 18368 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.846582 18368 taint_manager.go:206] "Starting NoExecuteTaintManager"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.846618 18368 taint_manager.go:211] "Sending events to api server"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.846744 18368 event.go:307] "Event occurred" object="vultr" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node vultr event: Registered Node vultr in Controller"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.848835 18368 range_allocator.go:380] "Set node PodCIDR" node="vultr" podCIDRs=[10.42.0.0/24]
Nov 05 20:29:30 vultr k3s[18368]: time="2023-11-05T20:29:30Z" level=info msg="Flannel found PodCIDR assigned for node vultr"
Nov 05 20:29:30 vultr k3s[18368]: time="2023-11-05T20:29:30Z" level=info msg="The interface enp1s0 with ipv4 address 207.148.30.37 will be used by flannel"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.853096 18368 shared_informer.go:318] Caches are synced for TTL after finished
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.853653 18368 kube.go:145] Waiting 10m0s for node controller to sync
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.853687 18368 kube.go:489] Starting kube subnet manager
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.934747 18368 shared_informer.go:318] Caches are synced for attach detach
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.963290 18368 shared_informer.go:318] Caches are synced for persistent volume
Nov 05 20:29:30 vultr k3s[18368]: time="2023-11-05T20:29:30Z" level=info msg="Starting the netpol controller version v2.0.0-20230925161250-364f994b140b, built on 2023-10-30T20:06:50Z, go1.20.10"
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.972282 18368 network_policy_controller.go:164] Starting network policy controller
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.979551 18368 shared_informer.go:318] Caches are synced for PV protection
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.981772 18368 shared_informer.go:318] Caches are synced for expand
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.983287 18368 shared_informer.go:318] Caches are synced for disruption
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.991779 18368 shared_informer.go:318] Caches are synced for deployment
Nov 05 20:29:30 vultr k3s[18368]: I1105 20:29:30.999817 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.004571 18368 event.go:307] "Event occurred" object="kube-system/helm-install-traefik" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-g8wxf"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.027244 18368 event.go:307] "Event occurred" object="kube-system/helm-install-traefik-crd" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-crd-9jzpf"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.014491 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.025897 18368 shared_informer.go:318] Caches are synced for ReplicaSet
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.031586 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.038100 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.041770 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.044513 18368 shared_informer.go:318] Caches are synced for resource quota
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.044534 18368 shared_informer.go:318] Caches are synced for resource quota
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.049207 18368 topology_manager.go:212] "Topology Admit Handler" podUID=366a3583-f186-4b57-a79e-567519874344 podNamespace="kube-system" podName="helm-install-traefik-g8wxf"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.050768 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.074750 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.076050 18368 network_policy_controller.go:176] Starting network policy controller full sync goroutine
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.111915 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zmtv\" (UniqueName: \"kubernetes.io/projected/366a3583-f186-4b57-a79e-567519874344-kube-api-access-2zmtv\") pod \"helm-install-traefik-g8wxf\" (>
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.111999 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"values\" (UniqueName: \"kubernetes.io/secret/366a3583-f186-4b57-a79e-567519874344-values\") pod \"helm-install-traefik-g8wxf\" (UID: \"366a3583-f186-4b57-a79e-56>
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.112038 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/366a3583-f186-4b57-a79e-567519874344-content\") pod \"helm-install-traefik-g8wxf\" (UID: \"366a3583-f186-4b57-a7>
Nov 05 20:29:31 vultr k3s[18368]: E1105 20:29:31.222431 18368 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 05 20:29:31 vultr k3s[18368]: E1105 20:29:31.222479 18368 projected.go:198] Error preparing data for projected volume kube-api-access-2zmtv for pod kube-system/helm-install-traefik-g8wxf: configmap "kube-root-ca.crt" not found
Nov 05 20:29:31 vultr k3s[18368]: E1105 20:29:31.222702 18368 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/366a3583-f186-4b57-a79e-567519874344-kube-api-access-2zmtv podName:366a3583-f186-4b57-a79e-567519874344 nodeName:}" failed. No retries permitted until 2023-11-05 20:29:31.7226>
Nov 05 20:29:31 vultr k3s[18368]: E1105 20:29:31.245638 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.245684 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.366679 18368 shared_informer.go:318] Caches are synced for garbage collector
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.409224 18368 shared_informer.go:318] Caches are synced for garbage collector
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.409262 18368 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.538872 18368 controller.go:624] quota admission added evaluator for: replicasets.apps
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.542891 18368 event.go:307] "Event occurred" object="kube-system/local-path-provisioner" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-957fdf8bc to 1"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.544390 18368 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-77ccd57875 to 1"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.548230 18368 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-648b5df564 to 1"
Nov 05 20:29:31 vultr k3s[18368]: E1105 20:29:31.592047 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:31 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:31 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:31 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.592090 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.643394 18368 event.go:307] "Event occurred" object="kube-system/coredns-77ccd57875" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-77ccd57875-f8g77"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.643469 18368 event.go:307] "Event occurred" object="kube-system/metrics-server-648b5df564" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-648b5df564-89n86"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.652813 18368 topology_manager.go:212] "Topology Admit Handler" podUID=ce08fd69-900c-41ea-b83e-72568242f947 podNamespace="kube-system" podName="coredns-77ccd57875-f8g77"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.663793 18368 event.go:307] "Event occurred" object="kube-system/local-path-provisioner-957fdf8bc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-957fdf8bc-tbl99"
Nov 05 20:29:31 vultr k3s[18368]: W1105 20:29:31.664425 18368 reflector.go:533] object-"kube-system"/"coredns-custom": failed to list *v1.ConfigMap: configmaps "coredns-custom" is forbidden: User "system:node:vultr" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found >
Nov 05 20:29:31 vultr k3s[18368]: E1105 20:29:31.664616 18368 reflector.go:148] object-"kube-system"/"coredns-custom": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns-custom" is forbidden: User "system:node:vultr" cannot list resource "configmaps" in API group "" in the namespace "kube->
Nov 05 20:29:31 vultr k3s[18368]: W1105 20:29:31.664753 18368 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:vultr" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '>
Nov 05 20:29:31 vultr k3s[18368]: E1105 20:29:31.664857 18368 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:vultr" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no re>
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.687208 18368 topology_manager.go:212] "Topology Admit Handler" podUID=9cbbd4f9-beca-40a4-bc51-0378307364f0 podNamespace="kube-system" podName="local-path-provisioner-957fdf8bc-tbl99"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.689995 18368 topology_manager.go:212] "Topology Admit Handler" podUID=aea9ccbd-3d9f-4cd9-adad-a56a2cb729ec podNamespace="kube-system" podName="metrics-server-648b5df564-89n86"
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.719047 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce08fd69-900c-41ea-b83e-72568242f947-config-volume\") pod \"coredns-77ccd57875-f8g77\" (UID: \"ce08fd69-90>
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.719135 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kj6n\" (UniqueName: \"kubernetes.io/projected/ce08fd69-900c-41ea-b83e-72568242f947-kube-api-access-9kj6n\") pod \"coredns-77ccd57875-f8g77\" (UI>
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.719206 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cbbd4f9-beca-40a4-bc51-0378307364f0-config-volume\") pod \"local-path-provisioner-957fdf8bc-tbl99\" (UID:>
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.719248 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj45z\" (UniqueName: \"kubernetes.io/projected/9cbbd4f9-beca-40a4-bc51-0378307364f0-kube-api-access-vj45z\") pod \"local-path-provisioner-957fdf8>
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.719294 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwn6m\" (UniqueName: \"kubernetes.io/projected/aea9ccbd-3d9f-4cd9-adad-a56a2cb729ec-kube-api-access-jwn6m\") pod \"metrics-server-648b5df564-89n8>
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.719321 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-config-volume\" (UniqueName: \"kubernetes.io/configmap/ce08fd69-900c-41ea-b83e-72568242f947-custom-config-volume\") pod \"coredns-77ccd57875-f8g77\" (UID:>
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.719363 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aea9ccbd-3d9f-4cd9-adad-a56a2cb729ec-tmp-dir\") pod \"metrics-server-648b5df564-89n86\" (UID: \"aea9ccbd-3d9f-4c>
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.854041 18368 kube.go:152] Node controller sync successful
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.854451 18368 vxlan.go:141] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Nov 05 20:29:31 vultr k3s[18368]: E1105 20:29:31.868053 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:31 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:31 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:31 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.868381 18368 proxier.go:862] "Sync failed" retryingTime="30s"
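
The block above is kube-proxy failing to append the jump from the nat table's OUTPUT chain to KUBE-SERVICES. "Ignoring deprecated --wait-interval option" is only a warning; the actual failure is CHAIN_ADD reporting that the OUTPUT base chain does not exist in the nat table as seen by the nf_tables backend (the same failure shows up later for ip6tables). A minimal way to inspect the node's state, assuming shell access on the host (illustrative commands, not part of the captured session; iptables-legacy-save and the exact module names may not exist on every distro/kernel):

    # what the nf_tables backend thinks the ip nat table contains
    iptables-save -t nat | head -n 20
    nft list table ip nat

    # is there a parallel legacy rule set the bundled iptables could be conflicting with?
    iptables-legacy-save -t nat | head -n 20

    # are the nf_tables / NAT kernel modules loaded?
    lsmod | grep -E 'nf_tables|nf_nat|nft_chain_nat'
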
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.887343 18368 kube.go:510] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.0.0/24]
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.899812 18368 iptables.go:290] generated 3 rules
Nov 05 20:29:31 vultr k3s[18368]: time="2023-11-05T20:29:31Z" level=info msg="Wrote flannel subnet file to /run/flannel/subnet.env"
Nov 05 20:29:31 vultr k3s[18368]: time="2023-11-05T20:29:31Z" level=info msg="Running flannel backend."
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.900240 18368 vxlan_network.go:65] watching for new subnet leases
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.902860 18368 iptables.go:290] generated 7 rules
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.926581 18368 iptables.go:283] bootstrap done
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.943864 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:29:31 vultr k3s[18368]: I1105 20:29:31.963939 18368 iptables.go:283] bootstrap done
Nov 05 20:29:32 vultr k3s[18368]: I1105 20:29:32.043866 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:29:32 vultr k3s[18368]: I1105 20:29:32.044028 18368 topology_manager.go:212] "Topology Admit Handler" podUID=0a20c500-d476-409b-85cc-a36c211bbf61 podNamespace="kube-system" podName="helm-install-traefik-crd-9jzpf"
Nov 05 20:29:32 vultr k3s[18368]: I1105 20:29:32.068177 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:29:32 vultr k3s[18368]: I1105 20:29:32.120834 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/0a20c500-d476-409b-85cc-a36c211bbf61-content\") pod \"helm-install-traefik-crd-9jzpf\" (UID: \"0a20c500-d476-409>
Nov 05 20:29:32 vultr k3s[18368]: I1105 20:29:32.120907 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpxg9\" (UniqueName: \"kubernetes.io/projected/0a20c500-d476-409b-85cc-a36c211bbf61-kube-api-access-lpxg9\") pod \"helm-install-traefik-crd-9jzpf>
Nov 05 20:29:32 vultr k3s[18368]: I1105 20:29:32.121023 18368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"values\" (UniqueName: \"kubernetes.io/secret/0a20c500-d476-409b-85cc-a36c211bbf61-values\") pod \"helm-install-traefik-crd-9jzpf\" (UID: \"0a20c500-d476-409b-85c>
Nov 05 20:29:32 vultr k3s[18368]: W1105 20:29:32.246453 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:29:32 vultr k3s[18368]: W1105 20:29:32.246494 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:29:32 vultr k3s[18368]: E1105 20:29:32.246520 18368 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Nov 05 20:29:32 vultr k3s[18368]: I1105 20:29:32.246535 18368 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:29:32 vultr k3s[18368]: E1105 20:29:32.246571 18368 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Nov 05 20:29:32 vultr k3s[18368]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Nov 05 20:29:32 vultr k3s[18368]: I1105 20:29:32.247696 18368 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:29:32 vultr k3s[18368]: E1105 20:29:32.673378 18368 iptables.go:307] Failed to bootstrap IPTables: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:29:32 vultr k3s[18368]: I1105 20:29:32.681773 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:29:32 vultr k3s[18368]: E1105 20:29:32.822137 18368 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Nov 05 20:29:32 vultr k3s[18368]: E1105 20:29:32.822324 18368 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ce08fd69-900c-41ea-b83e-72568242f947-config-volume podName:ce08fd69-900c-41ea-b83e-72568242f947 nodeName:}" failed. No retries permitted until 2023-11-05 20:29:33.322291313 +0>
Nov 05 20:29:33 vultr k3s[18368]: E1105 20:29:33.492156 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
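
Flannel hits the same "exit status 4" from iptables-restore here, so the problem is not specific to kube-proxy. A commonly reported cause is a mismatch between the host's iptables backend and the iptables that k3s bundles (the v1.8.9 nf_tables binary in these messages). The sketch below only checks, and on Debian/Ubuntu optionally pins, the host backend; whether switching to legacy is appropriate depends on what else manages firewall rules on this host, so treat it as a diagnostic starting point rather than a fix:

    # which backend the host's iptables points at (Debian/Ubuntu alternatives system)
    update-alternatives --display iptables
    # host iptables version; the backend (legacy or nf_tables) is shown in parentheses
    iptables --version

    # one frequently suggested mitigation on Debian/Ubuntu, left commented out on purpose:
    # update-alternatives --set iptables /usr/sbin/iptables-legacy
    # update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
    # systemctl restart k3s
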
Nov 05 20:29:36 vultr k3s[18368]: I1105 20:29:36.828220 18368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/local-path-provisioner-957fdf8bc-tbl99" podStartSLOduration=3.387696009 podCreationTimestamp="2023-11-05 20:29:31 +0000 UTC" firstStartedPulling="2023-11-05 20:29:33.0205653>
Nov 05 20:29:36 vultr k3s[18368]: I1105 20:29:36.848540 18368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-77ccd57875-f8g77" podStartSLOduration=3.639811148 podCreationTimestamp="2023-11-05 20:29:31 +0000 UTC" firstStartedPulling="2023-11-05 20:29:33.763870719 +0000 UTC m>
Nov 05 20:29:36 vultr k3s[18368]: I1105 20:29:36.848724 18368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/metrics-server-648b5df564-89n86" podStartSLOduration=2.669381805 podCreationTimestamp="2023-11-05 20:29:31 +0000 UTC" firstStartedPulling="2023-11-05 20:29:33.011841471 +000>
Nov 05 20:29:37 vultr k3s[18368]: E1105 20:29:37.057061 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:37 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:37 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:37 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:37 vultr k3s[18368]: I1105 20:29:37.057099 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:29:37 vultr k3s[18368]: E1105 20:29:37.248212 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:37 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:37 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:37 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:37 vultr k3s[18368]: I1105 20:29:37.248263 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:29:37 vultr k3s[18368]: I1105 20:29:37.320721 18368 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
Nov 05 20:29:37 vultr k3s[18368]: I1105 20:29:37.321700 18368 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
Nov 05 20:29:42 vultr k3s[18368]: I1105 20:29:42.806831 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:29:42 vultr k3s[18368]: I1105 20:29:42.823003 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:29:42 vultr k3s[18368]: I1105 20:29:42.823964 18368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/helm-install-traefik-g8wxf" podStartSLOduration=3.712107131 podCreationTimestamp="2023-11-05 20:29:30 +0000 UTC" firstStartedPulling="2023-11-05 20:29:33.008528629 +0000 UTC>
Nov 05 20:29:42 vultr k3s[18368]: I1105 20:29:42.824307 18368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/helm-install-traefik-crd-9jzpf" podStartSLOduration=3.696264747 podCreationTimestamp="2023-11-05 20:29:30 +0000 UTC" firstStartedPulling="2023-11-05 20:29:32.991343633 +0000>
Nov 05 20:29:43 vultr k3s[18368]: I1105 20:29:43.816740 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:29:43 vultr k3s[18368]: I1105 20:29:43.831199 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:29:53 vultr k3s[18368]: E1105 20:29:53.452139 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:53 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:53 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:53 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:53 vultr k3s[18368]: I1105 20:29:53.452180 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:30:01 vultr k3s[18368]: E1105 20:30:01.055092 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:30:01 vultr k3s[18368]: W1105 20:30:01.377791 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:30:06 vultr k3s[18368]: I1105 20:30:06.849289 18368 scope.go:115] "RemoveContainer" containerID="279ecda3f3a7e04af05cc141cffbe64db22acba3ff46c18142c869e070022cf9"
Nov 05 20:30:07 vultr k3s[18368]: E1105 20:30:07.240091 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:30:07 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:30:07 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:30:07 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:30:07 vultr k3s[18368]: I1105 20:30:07.240137 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:30:08 vultr k3s[18368]: I1105 20:30:08.858607 18368 scope.go:115] "RemoveContainer" containerID="48bdd65f0edd1c1282613fe37b297edd0220e5abd741c8f56a1e8509acef5cc9"
Nov 05 20:30:14 vultr k3s[18368]: E1105 20:30:14.570506 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:30:14 vultr k3s[18368]: I1105 20:30:14.570646 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
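
The metrics.k8s.io noise above and below (503s from the aggregated API, "no endpoints available for service metrics-server", the stale GroupVersion discovery warnings) is all downstream of the metrics-server pod never becoming Ready, which is unlikely to happen while the service NAT rules cannot be programmed. If the cluster is reachable with kubectl, the chain is easy to confirm (illustrative commands, not from the original session):

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl -n kube-system get endpoints metrics-server
    kubectl -n kube-system get pods | grep metrics-server
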
Nov 05 20:30:23 vultr k3s[18368]: E1105 20:30:23.644204 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:30:23 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:30:23 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:30:23 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:30:23 vultr k3s[18368]: I1105 20:30:23.644261 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:30:31 vultr k3s[18368]: E1105 20:30:31.062208 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:30:31 vultr k3s[18368]: W1105 20:30:31.390213 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:30:32 vultr k3s[18368]: W1105 20:30:32.247702 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:30:32 vultr k3s[18368]: W1105 20:30:32.247786 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:30:32 vultr k3s[18368]: E1105 20:30:32.247865 18368 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Nov 05 20:30:32 vultr k3s[18368]: I1105 20:30:32.247921 18368 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:30:32 vultr k3s[18368]: E1105 20:30:32.247870 18368 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Nov 05 20:30:32 vultr k3s[18368]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Nov 05 20:30:32 vultr k3s[18368]: I1105 20:30:32.249940 18368 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:30:33 vultr k3s[18368]: I1105 20:30:33.500100 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:30:34 vultr k3s[18368]: E1105 20:30:34.192302 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:30:37 vultr k3s[18368]: E1105 20:30:37.444205 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:30:37 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:30:37 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:30:37 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:30:37 vultr k3s[18368]: I1105 20:30:37.444258 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:30:37 vultr k3s[18368]: I1105 20:30:37.938541 18368 scope.go:115] "RemoveContainer" containerID="279ecda3f3a7e04af05cc141cffbe64db22acba3ff46c18142c869e070022cf9"
Nov 05 20:30:37 vultr k3s[18368]: I1105 20:30:37.938915 18368 scope.go:115] "RemoveContainer" containerID="c283b301259cf077e79d79bc6cfec5491222899040809f8f12d90427eddf9121"
Nov 05 20:30:37 vultr k3s[18368]: E1105 20:30:37.941712 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syste>
Nov 05 20:30:39 vultr k3s[18368]: I1105 20:30:39.946100 18368 scope.go:115] "RemoveContainer" containerID="48bdd65f0edd1c1282613fe37b297edd0220e5abd741c8f56a1e8509acef5cc9"
Nov 05 20:30:39 vultr k3s[18368]: I1105 20:30:39.946425 18368 scope.go:115] "RemoveContainer" containerID="30c7d69a234775708c8f688404425c09cd3e1a16b971f69dbed7e6c8ac10e82d"
Nov 05 20:30:39 vultr k3s[18368]: E1105 20:30:39.947057 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-ad>
Nov 05 20:30:42 vultr k3s[18368]: I1105 20:30:42.100512 18368 scope.go:115] "RemoveContainer" containerID="30c7d69a234775708c8f688404425c09cd3e1a16b971f69dbed7e6c8ac10e82d"
Nov 05 20:30:42 vultr k3s[18368]: E1105 20:30:42.101048 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-ad>
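
From this point on the log is largely metrics-server and local-path-provisioner cycling through CrashLoopBackOff with a growing back-off (10s, 20s, 40s, 1m20s, 2m40s), interleaved with the same proxier and metrics.k8s.io errors. The crash reason itself is not in this journal; it is in the containers' own logs. Using the pod names that appear in this capture (they will differ on another cluster):

    kubectl -n kube-system logs metrics-server-648b5df564-89n86 --previous
    kubectl -n kube-system logs local-path-provisioner-957fdf8bc-tbl99 --previous
    kubectl -n kube-system describe pod metrics-server-648b5df564-89n86
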
Nov 05 20:30:48 vultr k3s[18368]: I1105 20:30:48.694400 18368 scope.go:115] "RemoveContainer" containerID="c283b301259cf077e79d79bc6cfec5491222899040809f8f12d90427eddf9121"
Nov 05 20:30:53 vultr k3s[18368]: E1105 20:30:53.816041 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:30:53 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:30:53 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:30:53 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:30:53 vultr k3s[18368]: I1105 20:30:53.816094 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:30:57 vultr k3s[18368]: I1105 20:30:57.694544 18368 scope.go:115] "RemoveContainer" containerID="30c7d69a234775708c8f688404425c09cd3e1a16b971f69dbed7e6c8ac10e82d"
Nov 05 20:31:01 vultr k3s[18368]: E1105 20:31:01.069011 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:31:01 vultr k3s[18368]: W1105 20:31:01.403778 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:31:07 vultr k3s[18368]: E1105 20:31:07.640208 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:31:07 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:31:07 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:31:07 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:31:07 vultr k3s[18368]: I1105 20:31:07.640259 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:31:14 vultr k3s[18368]: E1105 20:31:14.571129 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:31:14 vultr k3s[18368]: I1105 20:31:14.571233 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:31:19 vultr k3s[18368]: I1105 20:31:19.043003 18368 scope.go:115] "RemoveContainer" containerID="c283b301259cf077e79d79bc6cfec5491222899040809f8f12d90427eddf9121"
Nov 05 20:31:19 vultr k3s[18368]: I1105 20:31:19.043471 18368 scope.go:115] "RemoveContainer" containerID="5a3d24f28a6cc636bc88f4100618e42f1ab326c5d01749f43526a8b5c3a4b746"
Nov 05 20:31:19 vultr k3s[18368]: E1105 20:31:19.043915 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syste>
Nov 05 20:31:24 vultr k3s[18368]: E1105 20:31:24.000080 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:31:24 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:31:24 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:31:24 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:31:24 vultr k3s[18368]: I1105 20:31:24.000124 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:31:29 vultr k3s[18368]: I1105 20:31:29.071205 18368 scope.go:115] "RemoveContainer" containerID="30c7d69a234775708c8f688404425c09cd3e1a16b971f69dbed7e6c8ac10e82d"
Nov 05 20:31:29 vultr k3s[18368]: I1105 20:31:29.071694 18368 scope.go:115] "RemoveContainer" containerID="9b3413a72f8bb2f0ac4b603656bde414302536b87249d4dc057989baf98f93f7"
Nov 05 20:31:29 vultr k3s[18368]: E1105 20:31:29.072229 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-ad>
Nov 05 20:31:30 vultr k3s[18368]: I1105 20:31:30.694074 18368 scope.go:115] "RemoveContainer" containerID="5a3d24f28a6cc636bc88f4100618e42f1ab326c5d01749f43526a8b5c3a4b746"
Nov 05 20:31:30 vultr k3s[18368]: E1105 20:31:30.694440 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syste>
Nov 05 20:31:31 vultr k3s[18368]: E1105 20:31:31.075551 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:31:31 vultr k3s[18368]: W1105 20:31:31.415159 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:31:32 vultr k3s[18368]: I1105 20:31:32.101232 18368 scope.go:115] "RemoveContainer" containerID="9b3413a72f8bb2f0ac4b603656bde414302536b87249d4dc057989baf98f93f7"
Nov 05 20:31:32 vultr k3s[18368]: E1105 20:31:32.101803 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-ad>
Nov 05 20:31:34 vultr k3s[18368]: I1105 20:31:34.200350 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:31:34 vultr k3s[18368]: E1105 20:31:34.804116 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:31:37 vultr k3s[18368]: E1105 20:31:37.820166 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:31:37 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:31:37 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:31:37 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:31:37 vultr k3s[18368]: I1105 20:31:37.820220 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:31:44 vultr k3s[18368]: I1105 20:31:44.694300 18368 scope.go:115] "RemoveContainer" containerID="9b3413a72f8bb2f0ac4b603656bde414302536b87249d4dc057989baf98f93f7"
Nov 05 20:31:44 vultr k3s[18368]: E1105 20:31:44.694781 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-ad>
Nov 05 20:31:45 vultr k3s[18368]: I1105 20:31:45.694965 18368 scope.go:115] "RemoveContainer" containerID="5a3d24f28a6cc636bc88f4100618e42f1ab326c5d01749f43526a8b5c3a4b746"
Nov 05 20:31:54 vultr k3s[18368]: E1105 20:31:54.172072 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:31:54 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:31:54 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:31:54 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:31:54 vultr k3s[18368]: I1105 20:31:54.172117 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:31:59 vultr k3s[18368]: I1105 20:31:59.694973 18368 scope.go:115] "RemoveContainer" containerID="9b3413a72f8bb2f0ac4b603656bde414302536b87249d4dc057989baf98f93f7"
Nov 05 20:32:01 vultr k3s[18368]: E1105 20:32:01.081499 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:32:01 vultr k3s[18368]: W1105 20:32:01.424841 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:32:08 vultr k3s[18368]: E1105 20:32:08.020182 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:32:08 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:32:08 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:32:08 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:32:08 vultr k3s[18368]: I1105 20:32:08.020231 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:32:14 vultr k3s[18368]: E1105 20:32:14.571138 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:32:14 vultr k3s[18368]: I1105 20:32:14.571231 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:32:16 vultr k3s[18368]: I1105 20:32:16.179610 18368 scope.go:115] "RemoveContainer" containerID="5a3d24f28a6cc636bc88f4100618e42f1ab326c5d01749f43526a8b5c3a4b746"
Nov 05 20:32:16 vultr k3s[18368]: I1105 20:32:16.179996 18368 scope.go:115] "RemoveContainer" containerID="ea1747b94c3089fe41c3be590fdcaa461fe7f5528370bc7940c3e33c0a108504"
Nov 05 20:32:16 vultr k3s[18368]: E1105 20:32:16.180381 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syste>
Nov 05 20:32:24 vultr k3s[18368]: E1105 20:32:24.368217 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:32:24 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:32:24 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:32:24 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:32:24 vultr k3s[18368]: I1105 20:32:24.368268 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:32:30 vultr k3s[18368]: I1105 20:32:30.694170 18368 scope.go:115] "RemoveContainer" containerID="ea1747b94c3089fe41c3be590fdcaa461fe7f5528370bc7940c3e33c0a108504"
Nov 05 20:32:30 vultr k3s[18368]: E1105 20:32:30.694822 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syste>
Nov 05 20:32:31 vultr k3s[18368]: E1105 20:32:31.088369 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:32:31 vultr k3s[18368]: I1105 20:32:31.213939 18368 scope.go:115] "RemoveContainer" containerID="9b3413a72f8bb2f0ac4b603656bde414302536b87249d4dc057989baf98f93f7"
Nov 05 20:32:31 vultr k3s[18368]: I1105 20:32:31.214294 18368 scope.go:115] "RemoveContainer" containerID="a74ee4f636d80df0e5973ab63bfe249f1d6a016a959f8187e794365778e10981"
Nov 05 20:32:31 vultr k3s[18368]: E1105 20:32:31.214799 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-ad>
Nov 05 20:32:31 vultr k3s[18368]: W1105 20:32:31.434709 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:32:32 vultr k3s[18368]: I1105 20:32:32.216910 18368 scope.go:115] "RemoveContainer" containerID="a74ee4f636d80df0e5973ab63bfe249f1d6a016a959f8187e794365778e10981"
Nov 05 20:32:32 vultr k3s[18368]: E1105 20:32:32.217422 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-ad>
Nov 05 20:32:32 vultr k3s[18368]: W1105 20:32:32.249038 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:32:32 vultr k3s[18368]: E1105 20:32:32.249121 18368 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Nov 05 20:32:32 vultr k3s[18368]: I1105 20:32:32.249135 18368 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:32:32 vultr k3s[18368]: W1105 20:32:32.250206 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:32:32 vultr k3s[18368]: E1105 20:32:32.250297 18368 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Nov 05 20:32:32 vultr k3s[18368]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Nov 05 20:32:32 vultr k3s[18368]: I1105 20:32:32.250311 18368 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:32:34 vultr k3s[18368]: I1105 20:32:34.811725 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:32:35 vultr k3s[18368]: E1105 20:32:35.428124 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:32:38 vultr k3s[18368]: E1105 20:32:38.212193 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:32:38 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:32:38 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:32:38 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:32:38 vultr k3s[18368]: I1105 20:32:38.212242 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:32:45 vultr k3s[18368]: I1105 20:32:45.695137 18368 scope.go:115] "RemoveContainer" containerID="ea1747b94c3089fe41c3be590fdcaa461fe7f5528370bc7940c3e33c0a108504"
Nov 05 20:32:45 vultr k3s[18368]: E1105 20:32:45.695581 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syste>
Nov 05 20:32:46 vultr k3s[18368]: I1105 20:32:46.694393 18368 scope.go:115] "RemoveContainer" containerID="a74ee4f636d80df0e5973ab63bfe249f1d6a016a959f8187e794365778e10981"
Nov 05 20:32:46 vultr k3s[18368]: E1105 20:32:46.695027 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-ad>
Nov 05 20:32:54 vultr k3s[18368]: I1105 20:32:54.262649 18368 scope.go:115] "RemoveContainer" containerID="1ee9dedd56c2e527be1520354068b187793d292497bda9b1b1f38b55818fc232"
Nov 05 20:32:54 vultr k3s[18368]: I1105 20:32:54.266949 18368 scope.go:115] "RemoveContainer" containerID="1bfeb7abc9b96e5bf87beaa7fa28fbe17af3be196aee7276a997ee8d30cc9daf"
Nov 05 20:32:54 vultr k3s[18368]: I1105 20:32:54.287071 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:32:54 vultr k3s[18368]: I1105 20:32:54.304932 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:32:54 vultr k3s[18368]: E1105 20:32:54.583891 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:32:54 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:32:54 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:32:54 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:32:54 vultr k3s[18368]: I1105 20:32:54.583937 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:32:55 vultr k3s[18368]: I1105 20:32:55.287778 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:32:55 vultr k3s[18368]: I1105 20:32:55.302905 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:32:58 vultr k3s[18368]: I1105 20:32:58.695353 18368 scope.go:115] "RemoveContainer" containerID="a74ee4f636d80df0e5973ab63bfe249f1d6a016a959f8187e794365778e10981"
Nov 05 20:32:58 vultr k3s[18368]: E1105 20:32:58.696122 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-ad>
Nov 05 20:33:00 vultr k3s[18368]: I1105 20:33:00.694595 18368 scope.go:115] "RemoveContainer" containerID="ea1747b94c3089fe41c3be590fdcaa461fe7f5528370bc7940c3e33c0a108504"
Nov 05 20:33:01 vultr k3s[18368]: E1105 20:33:01.095362 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:33:01 vultr k3s[18368]: W1105 20:33:01.445032 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:33:08 vultr k3s[18368]: E1105 20:33:08.416202 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:33:08 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:33:08 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:33:08 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:33:08 vultr k3s[18368]: I1105 20:33:08.416251 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:33:13 vultr k3s[18368]: I1105 20:33:13.694318 18368 scope.go:115] "RemoveContainer" containerID="a74ee4f636d80df0e5973ab63bfe249f1d6a016a959f8187e794365778e10981"
Nov 05 20:33:14 vultr k3s[18368]: E1105 20:33:14.570867 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:33:14 vultr k3s[18368]: I1105 20:33:14.570982 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:33:24 vultr k3s[18368]: E1105 20:33:24.812115 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:33:24 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:33:24 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:33:24 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:33:24 vultr k3s[18368]: I1105 20:33:24.812167 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:33:31 vultr k3s[18368]: E1105 20:33:31.101701 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:33:31 vultr k3s[18368]: I1105 20:33:31.364095 18368 scope.go:115] "RemoveContainer" containerID="ea1747b94c3089fe41c3be590fdcaa461fe7f5528370bc7940c3e33c0a108504"
Nov 05 20:33:31 vultr k3s[18368]: I1105 20:33:31.364439 18368 scope.go:115] "RemoveContainer" containerID="f8908a29d43f5c95c3449f62f4015290ba1cc531d9f4043199ec727bbf9ee5c6"
Nov 05 20:33:31 vultr k3s[18368]: E1105 20:33:31.364809 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:33:31 vultr k3s[18368]: W1105 20:33:31.456685 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:33:35 vultr k3s[18368]: I1105 20:33:35.435609 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:33:36 vultr k3s[18368]: E1105 20:33:36.064222 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:33:38 vultr k3s[18368]: E1105 20:33:38.604239 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:33:38 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:33:38 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:33:38 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:33:38 vultr k3s[18368]: I1105 20:33:38.604286 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:33:44 vultr k3s[18368]: I1105 20:33:44.398608 18368 scope.go:115] "RemoveContainer" containerID="a74ee4f636d80df0e5973ab63bfe249f1d6a016a959f8187e794365778e10981"
Nov 05 20:33:44 vultr k3s[18368]: I1105 20:33:44.399023 18368 scope.go:115] "RemoveContainer" containerID="f76bc894a647da8febf88b9e65e1170a5d8e63f034d91e247cbf7b7027a37477"
Nov 05 20:33:44 vultr k3s[18368]: E1105 20:33:44.399715 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:33:45 vultr k3s[18368]: I1105 20:33:45.694927 18368 scope.go:115] "RemoveContainer" containerID="f8908a29d43f5c95c3449f62f4015290ba1cc531d9f4043199ec727bbf9ee5c6"
Nov 05 20:33:45 vultr k3s[18368]: E1105 20:33:45.695281 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:33:52 vultr k3s[18368]: I1105 20:33:52.100337 18368 scope.go:115] "RemoveContainer" containerID="f76bc894a647da8febf88b9e65e1170a5d8e63f034d91e247cbf7b7027a37477"
Nov 05 20:33:52 vultr k3s[18368]: E1105 20:33:52.100883 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:33:55 vultr k3s[18368]: E1105 20:33:55.008294 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:33:55 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:33:55 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:33:55 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:33:55 vultr k3s[18368]: I1105 20:33:55.008352 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:33:57 vultr k3s[18368]: I1105 20:33:57.694327 18368 scope.go:115] "RemoveContainer" containerID="f8908a29d43f5c95c3449f62f4015290ba1cc531d9f4043199ec727bbf9ee5c6"
Nov 05 20:33:57 vultr k3s[18368]: E1105 20:33:57.694688 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:34:01 vultr k3s[18368]: E1105 20:34:01.107152 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:34:01 vultr k3s[18368]: W1105 20:34:01.466331 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:34:03 vultr k3s[18368]: I1105 20:34:03.694427 18368 scope.go:115] "RemoveContainer" containerID="f76bc894a647da8febf88b9e65e1170a5d8e63f034d91e247cbf7b7027a37477"
Nov 05 20:34:03 vultr k3s[18368]: E1105 20:34:03.695114 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:34:08 vultr k3s[18368]: E1105 20:34:08.771946 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:34:08 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:34:08 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:34:08 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:34:08 vultr k3s[18368]: I1105 20:34:08.771995 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:34:10 vultr k3s[18368]: I1105 20:34:10.694676 18368 scope.go:115] "RemoveContainer" containerID="f8908a29d43f5c95c3449f62f4015290ba1cc531d9f4043199ec727bbf9ee5c6"
Nov 05 20:34:10 vultr k3s[18368]: E1105 20:34:10.694998 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:34:12 vultr k3s[18368]: time="2023-11-05T20:34:12Z" level=info msg="COMPACT revision 0 has already been compacted"
Nov 05 20:34:14 vultr k3s[18368]: E1105 20:34:14.571188 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:34:14 vultr k3s[18368]: I1105 20:34:14.571260 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:34:14 vultr k3s[18368]: I1105 20:34:14.623988 18368 handler.go:232] Adding GroupVersion helm.cattle.io v1 to ResourceManager
Nov 05 20:34:14 vultr k3s[18368]: I1105 20:34:14.624249 18368 handler.go:232] Adding GroupVersion k3s.cattle.io v1 to ResourceManager
Nov 05 20:34:14 vultr k3s[18368]: E1105 20:34:14.630131 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:34:14 vultr k3s[18368]: I1105 20:34:14.630195 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:34:15 vultr k3s[18368]: W1105 20:34:15.630667 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:34:15 vultr k3s[18368]: W1105 20:34:15.630695 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:34:15 vultr k3s[18368]: E1105 20:34:15.630830 18368 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Nov 05 20:34:15 vultr k3s[18368]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Nov 05 20:34:15 vultr k3s[18368]: I1105 20:34:15.630854 18368 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:34:15 vultr k3s[18368]: E1105 20:34:15.630832 18368 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Nov 05 20:34:15 vultr k3s[18368]: I1105 20:34:15.632913 18368 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:34:17 vultr k3s[18368]: I1105 20:34:17.694869 18368 scope.go:115] "RemoveContainer" containerID="f76bc894a647da8febf88b9e65e1170a5d8e63f034d91e247cbf7b7027a37477"
Nov 05 20:34:17 vultr k3s[18368]: E1105 20:34:17.695357 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:34:22 vultr k3s[18368]: I1105 20:34:22.694540 18368 scope.go:115] "RemoveContainer" containerID="f8908a29d43f5c95c3449f62f4015290ba1cc531d9f4043199ec727bbf9ee5c6"
Nov 05 20:34:22 vultr k3s[18368]: E1105 20:34:22.694957 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:34:25 vultr k3s[18368]: E1105 20:34:25.196289 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:34:25 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:34:25 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:34:25 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:34:25 vultr k3s[18368]: I1105 20:34:25.196327 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:34:31 vultr k3s[18368]: E1105 20:34:31.114864 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:34:31 vultr k3s[18368]: W1105 20:34:31.477337 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:34:32 vultr k3s[18368]: I1105 20:34:32.695237 18368 scope.go:115] "RemoveContainer" containerID="f76bc894a647da8febf88b9e65e1170a5d8e63f034d91e247cbf7b7027a37477"
Nov 05 20:34:32 vultr k3s[18368]: E1105 20:34:32.695878 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:34:35 vultr k3s[18368]: I1105 20:34:35.694462 18368 scope.go:115] "RemoveContainer" containerID="f8908a29d43f5c95c3449f62f4015290ba1cc531d9f4043199ec727bbf9ee5c6"
Nov 05 20:34:35 vultr k3s[18368]: E1105 20:34:35.694881 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:34:36 vultr k3s[18368]: I1105 20:34:36.073788 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:34:36 vultr k3s[18368]: E1105 20:34:36.748352 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:34:38 vultr k3s[18368]: E1105 20:34:38.981025 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:34:38 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:34:38 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:34:38 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:34:38 vultr k3s[18368]: I1105 20:34:38.981147 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:34:44 vultr k3s[18368]: I1105 20:34:44.694700 18368 scope.go:115] "RemoveContainer" containerID="f76bc894a647da8febf88b9e65e1170a5d8e63f034d91e247cbf7b7027a37477"
Nov 05 20:34:44 vultr k3s[18368]: E1105 20:34:44.695318 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:34:50 vultr k3s[18368]: I1105 20:34:50.694652 18368 scope.go:115] "RemoveContainer" containerID="f8908a29d43f5c95c3449f62f4015290ba1cc531d9f4043199ec727bbf9ee5c6"
Nov 05 20:34:50 vultr k3s[18368]: E1105 20:34:50.695037 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:34:55 vultr k3s[18368]: E1105 20:34:55.388198 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:34:55 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:34:55 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:34:55 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:34:55 vultr k3s[18368]: I1105 20:34:55.388247 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:34:56 vultr k3s[18368]: I1105 20:34:56.694185 18368 scope.go:115] "RemoveContainer" containerID="f76bc894a647da8febf88b9e65e1170a5d8e63f034d91e247cbf7b7027a37477"
Nov 05 20:34:56 vultr k3s[18368]: E1105 20:34:56.694787 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:35:01 vultr k3s[18368]: E1105 20:35:01.121047 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:35:01 vultr k3s[18368]: W1105 20:35:01.488998 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:35:03 vultr k3s[18368]: I1105 20:35:03.694294 18368 scope.go:115] "RemoveContainer" containerID="f8908a29d43f5c95c3449f62f4015290ba1cc531d9f4043199ec727bbf9ee5c6"
Nov 05 20:35:07 vultr k3s[18368]: I1105 20:35:07.694055 18368 scope.go:115] "RemoveContainer" containerID="f76bc894a647da8febf88b9e65e1170a5d8e63f034d91e247cbf7b7027a37477"
Nov 05 20:35:09 vultr k3s[18368]: E1105 20:35:09.224073 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:35:09 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:35:09 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:35:09 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:35:09 vultr k3s[18368]: I1105 20:35:09.224119 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:35:14 vultr k3s[18368]: E1105 20:35:14.571168 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:35:14 vultr k3s[18368]: I1105 20:35:14.571259 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:35:15 vultr k3s[18368]: W1105 20:35:15.631816 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:35:15 vultr k3s[18368]: E1105 20:35:15.631918 18368 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Nov 05 20:35:15 vultr k3s[18368]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Nov 05 20:35:15 vultr k3s[18368]: I1105 20:35:15.631938 18368 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:35:15 vultr k3s[18368]: W1105 20:35:15.634057 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:35:15 vultr k3s[18368]: E1105 20:35:15.634138 18368 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Nov 05 20:35:15 vultr k3s[18368]: I1105 20:35:15.634154 18368 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:35:25 vultr k3s[18368]: E1105 20:35:25.584075 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:35:25 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:35:25 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:35:25 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:35:25 vultr k3s[18368]: I1105 20:35:25.584116 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:35:31 vultr k3s[18368]: E1105 20:35:31.127190 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:35:31 vultr k3s[18368]: W1105 20:35:31.499617 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:35:34 vultr k3s[18368]: I1105 20:35:34.634259 18368 scope.go:115] "RemoveContainer" containerID="f8908a29d43f5c95c3449f62f4015290ba1cc531d9f4043199ec727bbf9ee5c6"
Nov 05 20:35:34 vultr k3s[18368]: I1105 20:35:34.634601 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:35:34 vultr k3s[18368]: E1105 20:35:34.634919 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:35:36 vultr k3s[18368]: I1105 20:35:36.755989 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:35:37 vultr k3s[18368]: E1105 20:35:37.408059 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:35:39 vultr k3s[18368]: E1105 20:35:39.420082 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:35:39 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:35:39 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:35:39 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:35:39 vultr k3s[18368]: I1105 20:35:39.420119 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:35:39 vultr k3s[18368]: I1105 20:35:39.650499 18368 scope.go:115] "RemoveContainer" containerID="f76bc894a647da8febf88b9e65e1170a5d8e63f034d91e247cbf7b7027a37477"
Nov 05 20:35:39 vultr k3s[18368]: I1105 20:35:39.650910 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:35:39 vultr k3s[18368]: E1105 20:35:39.651363 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:35:42 vultr k3s[18368]: I1105 20:35:42.100812 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:35:42 vultr k3s[18368]: E1105 20:35:42.101397 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:35:48 vultr k3s[18368]: I1105 20:35:48.694902 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:35:48 vultr k3s[18368]: E1105 20:35:48.695207 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:35:53 vultr k3s[18368]: I1105 20:35:53.694918 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:35:53 vultr k3s[18368]: E1105 20:35:53.695406 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:35:55 vultr k3s[18368]: E1105 20:35:55.768045 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:35:55 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:35:55 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:35:55 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:35:55 vultr k3s[18368]: I1105 20:35:55.768091 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:36:01 vultr k3s[18368]: E1105 20:36:01.132803 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:36:01 vultr k3s[18368]: W1105 20:36:01.509835 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:36:03 vultr k3s[18368]: I1105 20:36:03.695036 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:36:03 vultr k3s[18368]: E1105 20:36:03.695346 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:36:04 vultr k3s[18368]: I1105 20:36:04.704827 18368 scope.go:115] "RemoveContainer" containerID="1ee9dedd56c2e527be1520354068b187793d292497bda9b1b1f38b55818fc232"
Nov 05 20:36:04 vultr k3s[18368]: I1105 20:36:04.705181 18368 scope.go:115] "RemoveContainer" containerID="02ac9efc6e5442542a6b125cc76511b823c38474ee71a8a3a5c264df8c72d151"
Nov 05 20:36:04 vultr k3s[18368]: E1105 20:36:04.705596 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=helm pod=helm-install-traefik-g8wxf_kube-system(366a3583-f186-4b57-a79e-567519874344)\"" pod=">
Nov 05 20:36:04 vultr k3s[18368]: I1105 20:36:04.708412 18368 scope.go:115] "RemoveContainer" containerID="89a8654b25eaf52fe9093566c0b17b71c2c9e798061d129f5be93dfa02d61174"
Nov 05 20:36:04 vultr k3s[18368]: E1105 20:36:04.708794 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=helm pod=helm-install-traefik-crd-9jzpf_kube-system(0a20c500-d476-409b-85cc-a36c211bbf61)\"" p>
Nov 05 20:36:04 vultr k3s[18368]: I1105 20:36:04.709871 18368 scope.go:115] "RemoveContainer" containerID="1bfeb7abc9b96e5bf87beaa7fa28fbe17af3be196aee7276a997ee8d30cc9daf"
Nov 05 20:36:04 vultr k3s[18368]: I1105 20:36:04.719286 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:36:04 vultr k3s[18368]: I1105 20:36:04.731971 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:36:05 vultr k3s[18368]: I1105 20:36:05.728899 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:36:05 vultr k3s[18368]: I1105 20:36:05.739846 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:36:08 vultr k3s[18368]: I1105 20:36:08.694766 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:36:08 vultr k3s[18368]: E1105 20:36:08.695261 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:36:09 vultr k3s[18368]: E1105 20:36:09.604025 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:36:09 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:36:09 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:36:09 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:36:09 vultr k3s[18368]: I1105 20:36:09.604075 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:36:14 vultr k3s[18368]: E1105 20:36:14.571088 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:36:14 vultr k3s[18368]: I1105 20:36:14.571305 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:36:16 vultr k3s[18368]: I1105 20:36:16.694700 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:36:16 vultr k3s[18368]: E1105 20:36:16.695091 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:36:16 vultr k3s[18368]: I1105 20:36:16.695726 18368 scope.go:115] "RemoveContainer" containerID="89a8654b25eaf52fe9093566c0b17b71c2c9e798061d129f5be93dfa02d61174"
Nov 05 20:36:16 vultr k3s[18368]: I1105 20:36:16.712292 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:36:17 vultr k3s[18368]: I1105 20:36:17.773823 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:36:18 vultr k3s[18368]: I1105 20:36:18.782907 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:36:19 vultr k3s[18368]: I1105 20:36:19.695113 18368 scope.go:115] "RemoveContainer" containerID="02ac9efc6e5442542a6b125cc76511b823c38474ee71a8a3a5c264df8c72d151"
Nov 05 20:36:19 vultr k3s[18368]: I1105 20:36:19.710133 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:36:20 vultr k3s[18368]: I1105 20:36:20.695169 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:36:20 vultr k3s[18368]: E1105 20:36:20.695801 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:36:20 vultr k3s[18368]: I1105 20:36:20.783822 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:36:21 vultr k3s[18368]: I1105 20:36:21.793116 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:36:25 vultr k3s[18368]: E1105 20:36:25.980081 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:36:25 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:36:25 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:36:25 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:36:25 vultr k3s[18368]: I1105 20:36:25.980134 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:36:29 vultr k3s[18368]: I1105 20:36:29.694753 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:36:29 vultr k3s[18368]: E1105 20:36:29.695085 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:36:31 vultr k3s[18368]: E1105 20:36:31.140159 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:36:31 vultr k3s[18368]: W1105 20:36:31.520819 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:36:32 vultr k3s[18368]: I1105 20:36:32.694312 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:36:32 vultr k3s[18368]: E1105 20:36:32.694829 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:36:37 vultr k3s[18368]: I1105 20:36:37.415550 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:36:38 vultr k3s[18368]: E1105 20:36:38.044161 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:36:39 vultr k3s[18368]: E1105 20:36:39.768036 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:36:39 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:36:39 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:36:39 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:36:39 vultr k3s[18368]: I1105 20:36:39.768084 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:36:43 vultr k3s[18368]: I1105 20:36:43.694559 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:36:43 vultr k3s[18368]: E1105 20:36:43.694974 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:36:47 vultr k3s[18368]: I1105 20:36:47.695022 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:36:47 vultr k3s[18368]: E1105 20:36:47.695606 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:36:56 vultr k3s[18368]: E1105 20:36:56.156075 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:36:56 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:36:56 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:36:56 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:36:56 vultr k3s[18368]: I1105 20:36:56.156125 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:36:57 vultr k3s[18368]: I1105 20:36:57.695199 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:36:57 vultr k3s[18368]: E1105 20:36:57.695632 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:36:59 vultr k3s[18368]: I1105 20:36:59.694120 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:36:59 vultr k3s[18368]: E1105 20:36:59.694641 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:37:01 vultr k3s[18368]: E1105 20:37:01.147289 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:37:01 vultr k3s[18368]: W1105 20:37:01.532303 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:37:09 vultr k3s[18368]: E1105 20:37:09.976180 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:37:09 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:37:09 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:37:09 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:37:09 vultr k3s[18368]: I1105 20:37:09.976230 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:37:10 vultr k3s[18368]: I1105 20:37:10.694096 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:37:10 vultr k3s[18368]: I1105 20:37:10.694394 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:37:10 vultr k3s[18368]: E1105 20:37:10.694685 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:37:10 vultr k3s[18368]: E1105 20:37:10.694871 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:37:14 vultr k3s[18368]: E1105 20:37:14.570732 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:37:14 vultr k3s[18368]: I1105 20:37:14.570836 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:37:15 vultr k3s[18368]: W1105 20:37:15.633177 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:37:15 vultr k3s[18368]: E1105 20:37:15.633301 18368 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Nov 05 20:37:15 vultr k3s[18368]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Nov 05 20:37:15 vultr k3s[18368]: I1105 20:37:15.633316 18368 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:37:15 vultr k3s[18368]: W1105 20:37:15.634440 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:37:15 vultr k3s[18368]: E1105 20:37:15.634527 18368 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Nov 05 20:37:15 vultr k3s[18368]: I1105 20:37:15.634542 18368 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:37:23 vultr k3s[18368]: I1105 20:37:23.694366 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:37:23 vultr k3s[18368]: E1105 20:37:23.694736 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:37:24 vultr k3s[18368]: I1105 20:37:24.694554 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:37:24 vultr k3s[18368]: E1105 20:37:24.695030 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:37:26 vultr k3s[18368]: E1105 20:37:26.356101 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:37:26 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:37:26 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:37:26 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:37:26 vultr k3s[18368]: I1105 20:37:26.356141 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:37:31 vultr k3s[18368]: E1105 20:37:31.153164 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:37:31 vultr k3s[18368]: W1105 20:37:31.542299 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:37:34 vultr k3s[18368]: I1105 20:37:34.694767 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:37:34 vultr k3s[18368]: E1105 20:37:34.695136 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:37:38 vultr k3s[18368]: I1105 20:37:38.055463 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:37:38 vultr k3s[18368]: E1105 20:37:38.692245 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:37:38 vultr k3s[18368]: I1105 20:37:38.694953 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:37:38 vultr k3s[18368]: E1105 20:37:38.695612 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:37:40 vultr k3s[18368]: E1105 20:37:40.188091 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:37:40 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:37:40 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:37:40 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:37:40 vultr k3s[18368]: I1105 20:37:40.188148 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:37:46 vultr k3s[18368]: I1105 20:37:46.694092 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:37:46 vultr k3s[18368]: E1105 20:37:46.694428 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:37:49 vultr k3s[18368]: I1105 20:37:49.694047 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:37:49 vultr k3s[18368]: E1105 20:37:49.694567 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:37:56 vultr k3s[18368]: E1105 20:37:56.540077 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:37:56 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:37:56 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:37:56 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:37:56 vultr k3s[18368]: I1105 20:37:56.540129 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:37:59 vultr k3s[18368]: I1105 20:37:59.694324 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:37:59 vultr k3s[18368]: E1105 20:37:59.694690 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-sys>
Nov 05 20:38:01 vultr k3s[18368]: E1105 20:38:01.159450 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:38:01 vultr k3s[18368]: W1105 20:38:01.552114 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:38:03 vultr k3s[18368]: I1105 20:38:03.694044 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:38:03 vultr k3s[18368]: E1105 20:38:03.694701 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:38:10 vultr k3s[18368]: E1105 20:38:10.376263 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:38:10 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:38:10 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:38:10 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:38:10 vultr k3s[18368]: I1105 20:38:10.376312 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:38:14 vultr k3s[18368]: E1105 20:38:14.571621 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:38:14 vultr k3s[18368]: I1105 20:38:14.571722 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:38:14 vultr k3s[18368]: I1105 20:38:14.694873 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:38:16 vultr k3s[18368]: I1105 20:38:16.695891 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:38:16 vultr k3s[18368]: E1105 20:38:16.698245 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9->
Nov 05 20:38:26 vultr k3s[18368]: E1105 20:38:26.720079 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:38:26 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:38:26 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:38:26 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:38:26 vultr k3s[18368]: I1105 20:38:26.720134 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:38:30 vultr k3s[18368]: I1105 20:38:30.694638 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:38:31 vultr k3s[18368]: E1105 20:38:31.166923 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:38:31 vultr k3s[18368]: W1105 20:38:31.562684 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:38:38 vultr k3s[18368]: I1105 20:38:38.699353 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:38:39 vultr k3s[18368]: E1105 20:38:39.416338 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:38:40 vultr k3s[18368]: E1105 20:38:40.552170 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:38:40 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:38:40 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:38:40 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:38:40 vultr k3s[18368]: I1105 20:38:40.552216 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:38:45 vultr k3s[18368]: I1105 20:38:45.088426 18368 scope.go:115] "RemoveContainer" containerID="2de1065635ee589dcd490fe3066b7463a2fc8a4aff36c86e20dc0046283f6178"
Nov 05 20:38:45 vultr k3s[18368]: I1105 20:38:45.088791 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:38:45 vultr k3s[18368]: E1105 20:38:45.089149 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:38:56 vultr k3s[18368]: E1105 20:38:56.912120 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:38:56 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:38:56 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:38:56 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:38:56 vultr k3s[18368]: I1105 20:38:56.912160 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:38:58 vultr k3s[18368]: I1105 20:38:58.694216 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:38:58 vultr k3s[18368]: E1105 20:38:58.694636 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:39:01 vultr k3s[18368]: E1105 20:39:01.174191 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:39:01 vultr k3s[18368]: W1105 20:39:01.575539 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:39:02 vultr k3s[18368]: I1105 20:39:02.134386 18368 scope.go:115] "RemoveContainer" containerID="098957f88e397524d4fa6d3f6e255d337856b8353bfac623373b2fa8d6e7d9cf"
Nov 05 20:39:02 vultr k3s[18368]: I1105 20:39:02.134913 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:39:02 vultr k3s[18368]: E1105 20:39:02.135503 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:39:09 vultr k3s[18368]: I1105 20:39:09.694443 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:39:09 vultr k3s[18368]: E1105 20:39:09.694789 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:39:10 vultr k3s[18368]: E1105 20:39:10.732054 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:39:10 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:39:10 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:39:10 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:39:10 vultr k3s[18368]: I1105 20:39:10.732089 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:39:12 vultr k3s[18368]: I1105 20:39:12.100737 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:39:12 vultr k3s[18368]: E1105 20:39:12.101463 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:39:12 vultr k3s[18368]: time="2023-11-05T20:39:12Z" level=info msg="COMPACT revision 0 has already been compacted"
Nov 05 20:39:14 vultr k3s[18368]: E1105 20:39:14.570573 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:39:14 vultr k3s[18368]: I1105 20:39:14.570696 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:39:14 vultr k3s[18368]: I1105 20:39:14.624878 18368 handler.go:232] Adding GroupVersion helm.cattle.io v1 to ResourceManager
Nov 05 20:39:14 vultr k3s[18368]: I1105 20:39:14.625230 18368 handler.go:232] Adding GroupVersion k3s.cattle.io v1 to ResourceManager
Nov 05 20:39:14 vultr k3s[18368]: I1105 20:39:14.625383 18368 handler.go:232] Adding GroupVersion helm.cattle.io v1 to ResourceManager
Nov 05 20:39:14 vultr k3s[18368]: I1105 20:39:14.625430 18368 handler.go:232] Adding GroupVersion k3s.cattle.io v1 to ResourceManager
Nov 05 20:39:14 vultr k3s[18368]: E1105 20:39:14.635955 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:39:14 vultr k3s[18368]: I1105 20:39:14.636050 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:39:15 vultr k3s[18368]: W1105 20:39:15.636382 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:39:15 vultr k3s[18368]: E1105 20:39:15.636441 18368 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Nov 05 20:39:15 vultr k3s[18368]: W1105 20:39:15.636403 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:39:15 vultr k3s[18368]: I1105 20:39:15.636452 18368 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:39:15 vultr k3s[18368]: E1105 20:39:15.636594 18368 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Nov 05 20:39:15 vultr k3s[18368]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Nov 05 20:39:15 vultr k3s[18368]: I1105 20:39:15.637785 18368 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:39:20 vultr k3s[18368]: I1105 20:39:20.694696 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:39:20 vultr k3s[18368]: E1105 20:39:20.695062 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:39:24 vultr k3s[18368]: I1105 20:39:24.694959 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:39:24 vultr k3s[18368]: E1105 20:39:24.695573 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:39:27 vultr k3s[18368]: E1105 20:39:27.116187 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:39:27 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:39:27 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:39:27 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:39:27 vultr k3s[18368]: I1105 20:39:27.116238 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:39:27 vultr k3s[18368]: I1105 20:39:27.190736 18368 scope.go:115] "RemoveContainer" containerID="89a8654b25eaf52fe9093566c0b17b71c2c9e798061d129f5be93dfa02d61174"
Nov 05 20:39:27 vultr k3s[18368]: I1105 20:39:27.191188 18368 scope.go:115] "RemoveContainer" containerID="150143aee1686c1d4a974ecb72882d6a88b71d7f88a050fcb40c102bb8b7151f"
Nov 05 20:39:27 vultr k3s[18368]: E1105 20:39:27.191639 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=helm pod=helm-install-traefik-crd-9jzpf_kube-system(0a20c500-d476-409b-85cc-a36c211bbf61)\"" p>
Nov 05 20:39:27 vultr k3s[18368]: I1105 20:39:27.205081 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:39:28 vultr k3s[18368]: I1105 20:39:28.214788 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:39:31 vultr k3s[18368]: E1105 20:39:31.181078 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:39:31 vultr k3s[18368]: I1105 20:39:31.204645 18368 scope.go:115] "RemoveContainer" containerID="02ac9efc6e5442542a6b125cc76511b823c38474ee71a8a3a5c264df8c72d151"
Nov 05 20:39:31 vultr k3s[18368]: I1105 20:39:31.205082 18368 scope.go:115] "RemoveContainer" containerID="13d669949ba241c4e3c64ec33a82777aa5d74ffa7d84f3da3090fc54a41c07f2"
Nov 05 20:39:31 vultr k3s[18368]: E1105 20:39:31.205503 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=helm pod=helm-install-traefik-g8wxf_kube-system(366a3583-f186-4b57-a79e-567519874344)\"" pod=">
Nov 05 20:39:31 vultr k3s[18368]: I1105 20:39:31.218311 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:39:31 vultr k3s[18368]: W1105 20:39:31.585047 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:39:32 vultr k3s[18368]: I1105 20:39:32.227012 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:39:34 vultr k3s[18368]: I1105 20:39:34.694789 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:39:34 vultr k3s[18368]: E1105 20:39:34.695121 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:39:37 vultr k3s[18368]: I1105 20:39:37.694506 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:39:37 vultr k3s[18368]: E1105 20:39:37.695082 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:39:39 vultr k3s[18368]: I1105 20:39:39.423515 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:39:40 vultr k3s[18368]: E1105 20:39:40.000234 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:39:40 vultr k3s[18368]: E1105 20:39:40.932014 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:39:40 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:39:40 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:39:40 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:39:40 vultr k3s[18368]: I1105 20:39:40.932061 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:39:41 vultr k3s[18368]: I1105 20:39:41.694267 18368 scope.go:115] "RemoveContainer" containerID="150143aee1686c1d4a974ecb72882d6a88b71d7f88a050fcb40c102bb8b7151f"
Nov 05 20:39:41 vultr k3s[18368]: E1105 20:39:41.694647 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=helm pod=helm-install-traefik-crd-9jzpf_kube-system(0a20c500-d476-409b-85cc-a36c211bbf61)\"" p>
Nov 05 20:39:41 vultr k3s[18368]: I1105 20:39:41.708122 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:39:45 vultr k3s[18368]: I1105 20:39:45.694806 18368 scope.go:115] "RemoveContainer" containerID="13d669949ba241c4e3c64ec33a82777aa5d74ffa7d84f3da3090fc54a41c07f2"
Nov 05 20:39:45 vultr k3s[18368]: E1105 20:39:45.695252 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=helm pod=helm-install-traefik-g8wxf_kube-system(366a3583-f186-4b57-a79e-567519874344)\"" pod=">
Nov 05 20:39:45 vultr k3s[18368]: I1105 20:39:45.710519 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:39:49 vultr k3s[18368]: I1105 20:39:49.694523 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:39:49 vultr k3s[18368]: I1105 20:39:49.694700 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:39:49 vultr k3s[18368]: E1105 20:39:49.695092 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:39:49 vultr k3s[18368]: E1105 20:39:49.695117 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:39:53 vultr k3s[18368]: I1105 20:39:53.694438 18368 scope.go:115] "RemoveContainer" containerID="150143aee1686c1d4a974ecb72882d6a88b71d7f88a050fcb40c102bb8b7151f"
Nov 05 20:39:54 vultr k3s[18368]: I1105 20:39:54.278773 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:39:55 vultr k3s[18368]: I1105 20:39:55.288481 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik-crd
Nov 05 20:39:57 vultr k3s[18368]: E1105 20:39:57.284281 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:39:57 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:39:57 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:39:57 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:39:57 vultr k3s[18368]: I1105 20:39:57.284330 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:39:57 vultr k3s[18368]: I1105 20:39:57.694863 18368 scope.go:115] "RemoveContainer" containerID="13d669949ba241c4e3c64ec33a82777aa5d74ffa7d84f3da3090fc54a41c07f2"
Nov 05 20:39:58 vultr k3s[18368]: I1105 20:39:58.289610 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:39:59 vultr k3s[18368]: I1105 20:39:59.299016 18368 job_controller.go:523] enqueueing job kube-system/helm-install-traefik
Nov 05 20:40:00 vultr k3s[18368]: I1105 20:40:00.695281 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:40:00 vultr k3s[18368]: E1105 20:40:00.695878 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:40:01 vultr k3s[18368]: E1105 20:40:01.188480 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:40:01 vultr k3s[18368]: W1105 20:40:01.595663 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:40:03 vultr k3s[18368]: I1105 20:40:03.694523 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:40:03 vultr k3s[18368]: E1105 20:40:03.694986 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:40:11 vultr k3s[18368]: E1105 20:40:11.136065 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:40:11 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:40:11 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:40:11 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:40:11 vultr k3s[18368]: I1105 20:40:11.136111 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:40:13 vultr k3s[18368]: I1105 20:40:13.694568 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:40:13 vultr k3s[18368]: E1105 20:40:13.695095 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:40:14 vultr k3s[18368]: E1105 20:40:14.571360 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:40:14 vultr k3s[18368]: I1105 20:40:14.571517 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:40:15 vultr k3s[18368]: W1105 20:40:15.637143 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:40:15 vultr k3s[18368]: E1105 20:40:15.637210 18368 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Nov 05 20:40:15 vultr k3s[18368]: I1105 20:40:15.637225 18368 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:40:15 vultr k3s[18368]: W1105 20:40:15.638258 18368 handler_proxy.go:100] no RequestInfo found in the context
Nov 05 20:40:15 vultr k3s[18368]: E1105 20:40:15.638351 18368 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Nov 05 20:40:15 vultr k3s[18368]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Nov 05 20:40:15 vultr k3s[18368]: I1105 20:40:15.638376 18368 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 05 20:40:15 vultr k3s[18368]: I1105 20:40:15.694399 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:40:15 vultr k3s[18368]: E1105 20:40:15.694818 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:40:26 vultr k3s[18368]: I1105 20:40:26.694696 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:40:26 vultr k3s[18368]: E1105 20:40:26.695200 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:40:27 vultr k3s[18368]: E1105 20:40:27.456059 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:40:27 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:40:27 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:40:27 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:40:27 vultr k3s[18368]: I1105 20:40:27.456099 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:40:29 vultr k3s[18368]: I1105 20:40:29.694961 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:40:29 vultr k3s[18368]: E1105 20:40:29.695362 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:40:31 vultr k3s[18368]: E1105 20:40:31.195714 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:40:31 vultr k3s[18368]: W1105 20:40:31.607789 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:40:39 vultr k3s[18368]: I1105 20:40:39.695025 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:40:39 vultr k3s[18368]: E1105 20:40:39.695693 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:40:40 vultr k3s[18368]: I1105 20:40:40.008430 18368 iptables.go:421] Some iptables rules are missing; deleting and recreating rules
Nov 05 20:40:40 vultr k3s[18368]: E1105 20:40:40.636491 18368 iptables.go:320] Failed to ensure iptables rules: error setting up rules: failed to apply partial iptables-restore unable to run iptables-restore (, ): exit status 4
Nov 05 20:40:41 vultr k3s[18368]: E1105 20:40:41.327982 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:40:41 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:40:41 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:40:41 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:40:41 vultr k3s[18368]: I1105 20:40:41.328018 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:40:43 vultr k3s[18368]: I1105 20:40:43.693982 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:40:43 vultr k3s[18368]: E1105 20:40:43.694304 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:40:53 vultr k3s[18368]: I1105 20:40:53.694568 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:40:53 vultr k3s[18368]: E1105 20:40:53.695234 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:40:57 vultr k3s[18368]: E1105 20:40:57.640119 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:40:57 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:40:57 vultr k3s[18368]: ip6tables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:40:57 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:40:57 vultr k3s[18368]: I1105 20:40:57.640169 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:40:58 vultr k3s[18368]: I1105 20:40:58.695066 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:40:58 vultr k3s[18368]: E1105 20:40:58.695515 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:41:01 vultr k3s[18368]: E1105 20:41:01.202590 18368 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Nov 05 20:41:01 vultr k3s[18368]: W1105 20:41:01.619825 18368 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
Nov 05 20:41:08 vultr k3s[18368]: I1105 20:41:08.695231 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:41:08 vultr k3s[18368]: E1105 20:41:08.695987 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
Nov 05 20:41:11 vultr k3s[18368]: E1105 20:41:11.508171 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:41:11 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:41:11 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:41:11 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:41:11 vultr k3s[18368]: I1105 20:41:11.508235 18368 proxier.go:862] "Sync failed" retryingTime="30s"
Nov 05 20:41:13 vultr k3s[18368]: I1105 20:41:13.694194 18368 scope.go:115] "RemoveContainer" containerID="f024bdabb8ca93817709c2d1fb2e5fd6e35f7811acd830de8b02766a4c96c5a6"
Nov 05 20:41:13 vultr k3s[18368]: E1105 20:41:13.694672 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-957fdf8bc-tbl99_kube-syst>
Nov 05 20:41:14 vultr k3s[18368]: E1105 20:41:14.571320 18368 handler_proxy.go:144] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Nov 05 20:41:14 vultr k3s[18368]: I1105 20:41:14.571468 18368 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Nov 05 20:41:20 vultr k3s[18368]: I1105 20:41:20.694692 18368 scope.go:115] "RemoveContainer" containerID="c8623f384815d725935fda63647a529dab4f29cf294770409d4796d2bb5c85c2"
Nov 05 20:41:20 vultr k3s[18368]: E1105 20:41:20.695304 18368 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=metrics-server pod=metrics-server-648b5df564-89n86_kube-system(aea9ccbd-3d9f-4cd9-a>
root@vultr:~#
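
The dominant failures in the log above are kube-proxy's repeated "Failed to ensure chain jumps" errors (iptables/ip6tables v1.8.9 nf_tables reporting CHAIN_ADD failed: No such file or directory for chain OUTPUT, exit status 4), the flannel "Failed to ensure iptables rules" retries, and the resulting metrics-server / local-path-provisioner CrashLoopBackOff and stale metrics.k8s.io/v1beta1 discovery. A minimal diagnostic sketch follows; it assumes a Debian/Ubuntu-style host where both a host iptables and the k3s-bundled iptables may be present, and it only inspects state rather than prescribing the fix (the log alone does not confirm the root cause):

# Which iptables backend is the host using (nf_tables vs legacy)?
iptables --version
ip6tables --version

# On Debian/Ubuntu, check whether iptables is switched via alternatives
update-alternatives --display iptables

# Confirm the relevant netfilter kernel modules are loaded
lsmod | grep -E 'nf_tables|ip6?table'

# Count how often kube-proxy hit the failing chain jump in this boot
journalctl -u k3s -b | grep -c 'CHAIN_ADD failed'

If the host and the k3s-bundled binaries disagree on backend (one legacy, one nf_tables), mixed rule sets like the ones logged above are a commonly reported symptom; verifying that first narrows the search before touching any rules.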