journalctl -xe -u k3s -f
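
The flags: -x augments entries with explanatory catalog text, -e jumps to the end of the journal, -u k3s restricts output to the k3s unit, and -f follows new entries as they arrive. The capture below shows a k3s v0.6.1 server (k3s-01) coming up. A couple of related invocations, as a sketch assuming a systemd host installed via the official k3s script (where agent-only nodes typically run a separate k3s-agent unit):

    journalctl -u k3s --since "10 minutes ago" --no-pager   # bounded, non-following view
    journalctl -xe -u k3s-agent -f                          # follow an agent node's unit instead
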
Jun 28 19:23:32 k3s-01 k3s[708]: time="2019-06-28T19:23:32.523508394+01:00" level=info msg="Starting k3s v0.6.1 (7ffe802a)"
Jun 28 19:23:32 k3s-01 k3s[708]: time="2019-06-28T19:23:32.527852038+01:00" level=info msg="Running kube-apiserver --bind-address=127.0.0.1 --api-audiences=unknown --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/token-node-1.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/token-node.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --authorization-mode=Node,RBAC --advertise-address=127.0.0.1 --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --advertise-port=6445 --insecure-port=0 --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --secure-port=6444 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/localhost.key --service-account-issuer=k3s --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --watch-cache=false --allow-privileged=true --requestheader-allowed-names=kubernetes-proxy --requestheader-group-headers=X-Remote-Group --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-username-headers=X-Remote-User --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --tls-cert-file=/var/lib/rancher/k3s/server/tls/localhost.crt"
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.556689 708 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.558675 708 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.558894 708 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.559229 708 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.559507 708 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.559801 708 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: W0628 19:23:32.752549 708 genericapiserver.go:315] Skipping API batch/v2alpha1 because it has no resources.
Jun 28 19:23:32 k3s-01 k3s[708]: W0628 19:23:32.794474 708 genericapiserver.go:315] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.908042 708 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.908777 708 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.909357 708 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.909882 708 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.910367 708 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: E0628 19:23:32.910816 708 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Jun 28 19:23:32 k3s-01 k3s[708]: time="2019-06-28T19:23:32.932144393+01:00" level=info msg="Running kube-scheduler --port=10251 --bind-address=127.0.0.1 --secure-port=0 --kubeconfig=/var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --leader-elect=false"
Jun 28 19:23:32 k3s-01 k3s[708]: time="2019-06-28T19:23:32.934383668+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --root-ca-file=/var/lib/rancher/k3s/server/tls/token-ca.crt --port=10252 --secure-port=0 --cluster-cidr=10.42.0.0/16 --leader-elect=false"
Jun 28 19:23:33 k3s-01 k3s[708]: W0628 19:23:33.035992 708 authorization.go:47] Authorization is disabled
Jun 28 19:23:33 k3s-01 k3s[708]: W0628 19:23:33.036071 708 authentication.go:55] Authentication is disabled
Jun 28 19:23:33 k3s-01 k3s[708]: E0628 19:23:33.762266 708 controller.go:148] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Jun 28 19:23:33 k3s-01 k3s[708]: time="2019-06-28T19:23:33.980573986+01:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.64.0.tgz"
Jun 28 19:23:33 k3s-01 k3s[708]: time="2019-06-28T19:23:33.982502085+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Jun 28 19:23:33 k3s-01 k3s[708]: time="2019-06-28T19:23:33.985085591+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Jun 28 19:23:33 k3s-01 k3s[708]: E0628 19:23:33.988050 708 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Jun 28 19:23:33 k3s-01 k3s[708]: E0628 19:23:33.988206 708 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Jun 28 19:23:33 k3s-01 k3s[708]: E0628 19:23:33.988402 708 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Jun 28 19:23:33 k3s-01 k3s[708]: E0628 19:23:33.988628 708 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Jun 28 19:23:33 k3s-01 k3s[708]: E0628 19:23:33.988764 708 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Jun 28 19:23:33 k3s-01 k3s[708]: E0628 19:23:33.988877 708 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Jun 28 19:23:33 k3s-01 k3s[708]: E0628 19:23:33.989107 708 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Jun 28 19:23:33 k3s-01 k3s[708]: time="2019-06-28T19:23:33.989170865+01:00" level=info msg="Setting up event handlers"
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.008003 708 autoregister_controller.go:193] v1.k3s.cattle.io failed with : apiservices.apiregistration.k8s.io "v1.k3s.cattle.io" already exists
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.016075186+01:00" level=info msg="Listening on :6443"
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.016866 708 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.017162 708 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.018046 708 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.018819 708 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.019581 708 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.019719 708 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.020136 708 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.020206019+01:00" level=info msg="Setting up event handlers"
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.521546859+01:00" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.622034843+01:00" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.623647392+01:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.623749951+01:00" level=info msg="To join node to cluster: k3s agent -s https://10.42.1.51:6443 -t ${NODE_TOKEN}"
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.638595 708 prometheus.go:138] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.638710 708 prometheus.go:150] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.638928 708 prometheus.go:162] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.639609 708 prometheus.go:174] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.639946 708 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.640197 708 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.640516 708 prometheus.go:214] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.640584863+01:00" level=info msg="Setting up event handlers"
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.641669 708 prometheus.go:138] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.641789 708 prometheus.go:150] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.642189 708 prometheus.go:162] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.642869 708 prometheus.go:174] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.643806 708 prometheus.go:189] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.643963 708 prometheus.go:202] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.644471 708 prometheus.go:214] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.644568447+01:00" level=info msg="Setting up event handlers"
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.645402 708 prometheus.go:138] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.645740 708 prometheus.go:150] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.646202 708 prometheus.go:162] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.646685 708 prometheus.go:174] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.647039 708 prometheus.go:189] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.647377 708 prometheus.go:202] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.647956 708 prometheus.go:214] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.648051531+01:00" level=info msg="Setting up event handlers"
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.649274 708 prometheus.go:138] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.649392 708 prometheus.go:150] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.649604 708 prometheus.go:162] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.649751 708 prometheus.go:174] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.649882 708 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.649973 708 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.650185 708 prometheus.go:214] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.650225834+01:00" level=info msg="Setting up event handlers"
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.650617 708 prometheus.go:138] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.650702 708 prometheus.go:150] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.650899 708 prometheus.go:162] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.651041 708 prometheus.go:174] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.651141 708 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.651462 708 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.651623 708 prometheus.go:214] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.651677071+01:00" level=info msg="Setting up event handlers"
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.652076 708 prometheus.go:138] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.652184 708 prometheus.go:150] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.652359 708 prometheus.go:162] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.652516 708 prometheus.go:174] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.652627 708 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.652731 708 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: E0628 19:23:34.652921 708 prometheus.go:214] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Jun 28 19:23:34 k3s-01 k3s[708]: time="2019-06-28T19:23:34.652965433+01:00" level=info msg="Setting up event handlers"
Jun 28 19:23:35 k3s-01 k3s[708]: W0628 19:23:35.162217 708 controllermanager.go:445] Skipping "csrsigning"
Jun 28 19:23:35 k3s-01 k3s[708]: time="2019-06-28T19:23:35.295123216+01:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Jun 28 19:23:35 k3s-01 k3s[708]: time="2019-06-28T19:23:35.295687780+01:00" level=info msg="Run: k3s kubectl"
Jun 28 19:23:35 k3s-01 k3s[708]: time="2019-06-28T19:23:35.296033894+01:00" level=info msg="k3s is up and running"
Jun 28 19:23:35 k3s-01 k3s[708]: time="2019-06-28T19:23:35.781433961+01:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
Jun 28 19:23:36 k3s-01 k3s[708]: time="2019-06-28T19:23:36.112735235+01:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Jun 28 19:23:36 k3s-01 k3s[708]: time="2019-06-28T19:23:36.113728467+01:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Jun 28 19:23:36 k3s-01 k3s[708]: time="2019-06-28T19:23:36.119831513+01:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Jun 28 19:23:36 k3s-01 k3s[708]: time="2019-06-28T19:23:36.288756524+01:00" level=info msg="Starting batch/v1, Kind=Job controller"
Jun 28 19:23:36 k3s-01 k3s[708]: time="2019-06-28T19:23:36.989945944+01:00" level=info msg="Starting /v1, Kind=Node controller"
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.090081036+01:00" level=info msg="Starting /v1, Kind=Service controller"
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.123263693+01:00" level=info msg="module br_netfilter was already loaded"
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.123540009+01:00" level=info msg="module overlay was already loaded"
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.123865547+01:00" level=info msg="module nf_conntrack was already loaded"
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.127209730+01:00" level=info msg="Connecting to wss://localhost:6443/v1-k3s/connect"
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.127304110+01:00" level=info msg="Connecting to proxy" url="wss://localhost:6443/v1-k3s/connect"
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.190252527+01:00" level=info msg="Starting /v1, Kind=Pod controller"
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.203065639+01:00" level=info msg="Handling backend connection request [k3s-01]"
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.216095793+01:00" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.216727232+01:00" level=info msg="Running kubelet --cluster-domain=cluster.local --address=0.0.0.0 --tls-private-key-file=/var/lib/rancher/k3s/agent/token-node.key --cpu-cfs-quota=false --healthz-bind-address=127.0.0.1 --eviction-hard=imagefs.available<5%,nodefs.available<5% --cni-bin-dir=/var/lib/rancher/k3s/data/851e5f8445c14de3de589e307de8893a789cfadd7bdd5d0683cd7629b9f0684b/bin --resolv-conf=/etc/resolv.conf --container-runtime=remote --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --kubeconfig=/var/lib/rancher/k3s/agent/kubeconfig.yaml --root-dir=/var/lib/rancher/k3s/agent/kubelet --cluster-dns=10.43.0.10 --cgroup-driver=cgroupfs --authentication-token-webhook=true --authorization-mode=Webhook --read-only-port=0 --serialize-image-pulls=false --kubelet-cgroups=/systemd/system.slice --tls-cert-file=/var/lib/rancher/k3s/agent/token-node.crt --fail-swap-on=false --seccomp-profile-root=/var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --anonymous-auth=false --hostname-override=k3s-01 --node-labels=node-role.kubernetes.io/master=true --allow-privileged=true --cert-dir=/var/lib/rancher/k3s/agent/kubelet/pki --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.pem --runtime-cgroups=/systemd/system.slice"
Jun 28 19:23:37 k3s-01 k3s[708]: Flag --allow-privileged has been deprecated, will be removed in a future version
Jun 28 19:23:37 k3s-01 k3s[708]: W0628 19:23:37.218385 708 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
Jun 28 19:23:37 k3s-01 k3s[708]: W0628 19:23:37.238284 708 options.go:266] in 1.16, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
Jun 28 19:23:37 k3s-01 k3s[708]: W0628 19:23:37.239101 708 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Jun 28 19:23:37 k3s-01 k3s[708]: time="2019-06-28T19:23:37.290505012+01:00" level=info msg="Starting /v1, Kind=Endpoints controller"
Jun 28 19:23:37 k3s-01 k3s[708]: W0628 19:23:37.338389 708 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
Jun 28 19:23:37 k3s-01 k3s[708]: W0628 19:23:37.338506 708 options.go:266] in 1.16, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
Jun 28 19:23:37 k3s-01 k3s[708]: E0628 19:23:37.361817 708 machine.go:288] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
Jun 28 19:23:37 k3s-01 k3s[708]: E0628 19:23:37.603075 708 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Jun 28 19:23:37 k3s-01 k3s[708]: E0628 19:23:37.604693 708 kubelet.go:1250] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Jun 28 19:23:37 k3s-01 k3s[708]: W0628 19:23:37.638428 708 nvidia.go:66] Error reading "/sys/bus/pci/devices/": open /sys/bus/pci/devices/: no such file or directory
Jun 28 19:23:38 k3s-01 k3s[708]: W0628 19:23:38.034748 708 shared_informer.go:312] resyncPeriod 73476016335487 is smaller than resyncCheckPeriod 77442156096150 and the informer has already started. Changing it to 77442156096150
Jun 28 19:23:38 k3s-01 k3s[708]: E0628 19:23:38.035574 708 resource_quota_controller.go:171] initial monitor sync has error: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs"]
Jun 28 19:23:38 k3s-01 k3s[708]: W0628 19:23:38.425228 708 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r969b551eb1fe4438993189015924e9eb.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r969b551eb1fe4438993189015924e9eb.scope: no such file or directory
Jun 28 19:23:38 k3s-01 k3s[708]: W0628 19:23:38.425439 708 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r969b551eb1fe4438993189015924e9eb.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r969b551eb1fe4438993189015924e9eb.scope: no such file or directory
Jun 28 19:23:38 k3s-01 k3s[708]: W0628 19:23:38.425540 708 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r969b551eb1fe4438993189015924e9eb.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r969b551eb1fe4438993189015924e9eb.scope: no such file or directory
Jun 28 19:23:38 k3s-01 k3s[708]: W0628 19:23:38.425638 708 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r969b551eb1fe4438993189015924e9eb.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r969b551eb1fe4438993189015924e9eb.scope: no such file or directory
Jun 28 19:23:39 k3s-01 k3s[708]: W0628 19:23:39.232225 708 pod_container_deletor.go:75] Container "12cbbc04e33190f3897811d7c07727dc1ff68eecc3b65c278d7a0e1cdf4f40b8" not found in pod's containers
Jun 28 19:23:39 k3s-01 k3s[708]: W0628 19:23:39.232356 708 pod_container_deletor.go:75] Container "8c8229cfc706ec5bd4340ac9c58e7b9eecc6903b68a77ebc9db58de029d27aeb" not found in pod's containers
Jun 28 19:23:39 k3s-01 k3s[708]: E0628 19:23:39.570123 708 remote_runtime.go:132] StopPodSandbox "8c8229cfc706ec5bd4340ac9c58e7b9eecc6903b68a77ebc9db58de029d27aeb" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find sandbox "8c8229cfc706ec5bd4340ac9c58e7b9eecc6903b68a77ebc9db58de029d27aeb": does not exist
Jun 28 19:23:39 k3s-01 k3s[708]: E0628 19:23:39.570586 708 kuberuntime_manager.go:846] Failed to stop sandbox {"containerd" "8c8229cfc706ec5bd4340ac9c58e7b9eecc6903b68a77ebc9db58de029d27aeb"}
Jun 28 19:23:39 k3s-01 k3s[708]: E0628 19:23:39.570866 708 kuberuntime_manager.go:641] killPodWithSyncResult failed: failed to "KillPodSandbox" for "367cf1ba-99bb-11e9-853c-b827eb29634e" with KillPodSandboxError: "rpc error: code = Unknown desc = an error occurred when try to find sandbox \"8c8229cfc706ec5bd4340ac9c58e7b9eecc6903b68a77ebc9db58de029d27aeb\": does not exist"
Jun 28 19:23:39 k3s-01 k3s[708]: E0628 19:23:39.571130 708 pod_workers.go:190] Error syncing pod 367cf1ba-99bb-11e9-853c-b827eb29634e ("helm-install-traefik-n27nt_kube-system(367cf1ba-99bb-11e9-853c-b827eb29634e)"), skipping: failed to "KillPodSandbox" for "367cf1ba-99bb-11e9-853c-b827eb29634e" with KillPodSandboxError: "rpc error: code = Unknown desc = an error occurred when try to find sandbox \"8c8229cfc706ec5bd4340ac9c58e7b9eecc6903b68a77ebc9db58de029d27aeb\": does not exist"
Jun 28 19:23:39 k3s-01 k3s[708]: E0628 19:23:39.571290 708 remote_runtime.go:132] StopPodSandbox "12cbbc04e33190f3897811d7c07727dc1ff68eecc3b65c278d7a0e1cdf4f40b8" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find sandbox "12cbbc04e33190f3897811d7c07727dc1ff68eecc3b65c278d7a0e1cdf4f40b8": does not exist
Jun 28 19:23:39 k3s-01 k3s[708]: E0628 19:23:39.571392 708 kuberuntime_manager.go:846] Failed to stop sandbox {"containerd" "12cbbc04e33190f3897811d7c07727dc1ff68eecc3b65c278d7a0e1cdf4f40b8"}
Jun 28 19:23:39 k3s-01 k3s[708]: E0628 19:23:39.571584 708 kuberuntime_manager.go:641] killPodWithSyncResult failed: failed to "KillPodSandbox" for "36addacb-99bb-11e9-853c-b827eb29634e" with KillPodSandboxError: "rpc error: code = Unknown desc = an error occurred when try to find sandbox \"12cbbc04e33190f3897811d7c07727dc1ff68eecc3b65c278d7a0e1cdf4f40b8\": does not exist"
Jun 28 19:23:39 k3s-01 k3s[708]: E0628 19:23:39.571683 708 pod_workers.go:190] Error syncing pod 36addacb-99bb-11e9-853c-b827eb29634e ("coredns-695688789-nvd6b_kube-system(36addacb-99bb-11e9-853c-b827eb29634e)"), skipping: failed to "KillPodSandbox" for "36addacb-99bb-11e9-853c-b827eb29634e" with KillPodSandboxError: "rpc error: code = Unknown desc = an error occurred when try to find sandbox \"12cbbc04e33190f3897811d7c07727dc1ff68eecc3b65c278d7a0e1cdf4f40b8\": does not exist"
Jun 28 19:23:48 k3s-01 k3s[708]: W0628 19:23:48.077864 708 controllermanager.go:445] Skipping "root-ca-cert-publisher"
Jun 28 19:23:48 k3s-01 k3s[708]: E0628 19:23:48.187926 708 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"]
Jun 28 19:23:48 k3s-01 k3s[708]: W0628 19:23:48.594730 708 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3s-01" does not exist
Jun 28 19:23:48 k3s-01 k3s[708]: W0628 19:23:48.594951 708 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3s-02" does not exist
Jun 28 19:23:48 k3s-01 k3s[708]: W0628 19:23:48.595017 708 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3s-03" does not exist
Jun 28 19:23:48 k3s-01 k3s[708]: W0628 19:23:48.595104 708 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3s-04" does not exist
Jun 28 19:23:48 k3s-01 k3s[708]: W0628 19:23:48.684551 708 node_lifecycle_controller.go:833] Missing timestamp for Node k3s-01. Assuming now as a timestamp.
Jun 28 19:23:48 k3s-01 k3s[708]: W0628 19:23:48.685623 708 node_lifecycle_controller.go:833] Missing timestamp for Node k3s-02. Assuming now as a timestamp.
Jun 28 19:23:48 k3s-01 k3s[708]: W0628 19:23:48.687021 708 node_lifecycle_controller.go:833] Missing timestamp for Node k3s-03. Assuming now as a timestamp.
Jun 28 19:23:48 k3s-01 k3s[708]: W0628 19:23:48.687260 708 node_lifecycle_controller.go:833] Missing timestamp for Node k3s-04. Assuming now as a timestamp.
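
Once the log reaches "k3s is up and running" (19:23:35), the server has already written its kubeconfig to /etc/rancher/k3s/k3s.yaml, so the cluster can be checked with the embedded kubectl. A minimal sketch, assuming the default paths shown in the log above:

    sudo k3s kubectl get nodes                  # kubectl bundled inside the k3s binary
    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml # or point an external kubectl at the same file
    kubectl get nodes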
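
The 19:23:34 lines also print everything needed to add workers: the node-token path and the join command. A hedged sketch of joining a new node (the token value is read from the server; K3S_URL and K3S_TOKEN are the environment-variable equivalents of -s and -t):

    # on the server
    sudo cat /var/lib/rancher/k3s/server/node-token
    # on the new node, using the URL the log printed
    k3s agent -s https://10.42.1.51:6443 -t ${NODE_TOKEN}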
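
Most of the E-level volume here is the repeated prometheus.go "duplicate metrics collector registration attempted" / "... is not a valid metric name" output from the workqueue metrics shim; it appears cosmetic in this capture, since startup proceeds to "k3s is up and running" regardless. One way to mute it while following the unit (assumption: a plain grep filter suits your workflow):

    journalctl -u k3s -f | grep -v prometheus.go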