k3s-server fails to start with --disable-agent

Errors while installing k3s with --disable-agent.

I'm using my rgl/k3s-vagrant environment to try this out. I've used the following command to start the vagrant environment:

vagrant up --provider=libvirt s1 # see https://github.com/rgl/k3s-vagrant/blob/master/provision-k3s-server.sh
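
For reference, the install step inside that provisioning script boils down to roughly the following (paraphrased from the trace below; the exact script is in the repo linked above):

    # Paraphrased from the provisioning trace below, not the literal provision-k3s-server.sh.
    curl -sfL https://raw.githubusercontent.com/rancher/k3s/v0.7.0-rc8/install.sh |
      INSTALL_K3S_VERSION='v0.7.0-rc8' \
      K3S_CLUSTER_SECRET='7e982a7bbac5f385ecbb988f800787bc9bb617552813a63c4469521c53d83b6e' \
      sh -s -- \
        server \
        --disable-agent \
        --node-ip 10.11.0.101 \
        --cluster-cidr 10.12.0.0/16 \
        --service-cidr 10.13.0.0/16 \
        --cluster-dns 10.13.0.10 \
        --cluster-domain cluster.local \
        --flannel-iface eth1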

Provisioning will eventually install k3s and then error out:

    s1: + k3s_version=v0.7.0-rc8
    s1: + shift
    s1: + k3s_cluster_secret=7e982a7bbac5f385ecbb988f800787bc9bb617552813a63c4469521c53d83b6e
    s1: + shift
    s1: + ip_address=10.11.0.101
    s1: + shift
    s1: + cat
    s1: + curl -sfL https://raw.githubusercontent.com/rancher/k3s/v0.7.0-rc8/install.sh
    s1: + INSTALL_K3S_VERSION=v0.7.0-rc8
    s1: + K3S_CLUSTER_SECRET=7e982a7bbac5f385ecbb988f800787bc9bb617552813a63c4469521c53d83b6e
    s1: + sh -s -- server --disable-agent --node-ip 10.11.0.101 --cluster-cidr 10.12.0.0/16 --service-cidr 10.13.0.0/16 --cluster-dns 10.13.0.10 --cluster-domain cluster.local --flannel-iface eth1
    s1: [INFO]  Using v0.7.0-rc8 as release
    s1: [INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.7.0-rc8/sha256sum-amd64.txt
    s1: [INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.7.0-rc8/k3s
    s1: [INFO]  Verifying binary download
    s1: [INFO]  Installing k3s to /usr/local/bin/k3s
    s1: [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
    s1: [INFO]  Creating /usr/local/bin/crictl symlink to k3s
    s1: [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
    s1: [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
    s1: [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
    s1: [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
    s1: [INFO]  systemd: Enabling k3s unit
    s1: Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
    s1: [INFO]  systemd: Starting k3s
    s1: + systemctl cat k3s
    s1: # /etc/systemd/system/k3s.service
    s1: [Unit]
    s1: Description=Lightweight Kubernetes
    s1: Documentation=https://k3s.io
    s1: After=network-online.target
    s1: 
    s1: [Service]
    s1: Type=notify
    s1: EnvironmentFile=/etc/systemd/system/k3s.service.env
    s1: ExecStartPre=-/sbin/modprobe br_netfilter
    s1: ExecStartPre=-/sbin/modprobe overlay
    s1: ExecStart=/usr/local/bin/k3s \
    s1:     server \
    s1: 	'--disable-agent' \
    s1: 	'--node-ip' \
    s1: 	'10.11.0.101' \
    s1: 	'--cluster-cidr' \
    s1: 	'10.12.0.0/16' \
    s1: 	'--service-cidr' \
    s1: 	'10.13.0.0/16' \
    s1: 	'--cluster-dns' \
    s1: 	'10.13.0.10' \
    s1: 	'--cluster-domain' \
    s1: 	'cluster.local' \
    s1: 	'--flannel-iface' \
    s1: 	'eth1' \
    s1: 
    s1: KillMode=process
    s1: Delegate=yes
    s1: LimitNOFILE=infinity
    s1: LimitNPROC=infinity
    s1: LimitCORE=infinity
    s1: TasksMax=infinity
    s1: TimeoutStartSec=0
    s1: Restart=always
    s1: 
    s1: [Install]
    s1: WantedBy=multi-user.target
    s1: + /bin/bash -c 'node_name=$(hostname); echo "waiting for node $node_name to be ready..."; while [ -z "$(kubectl get nodes $node_name | grep -E "$node_name\s+Ready\s+")" ]; do sleep 3; done; echo "node ready!"'
    s1: waiting for node s1 to be ready...
    s1: Error from server (NotFound): nodes "s1" not found
    s1: Error from server (NotFound): nodes "s1" not found
    s1: Error from server (NotFound): nodes "s1" not found
    s1: Error from server (NotFound): nodes "s1" not found

The problem manifests itself as kubectl get nodes never returning the s1 node: the node is never registered, so the provisioning wait loop above spins forever and the scheduler keeps logging "no nodes available to schedule pods".
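
Running the same check by hand on s1 gives the same result (a paraphrase of the provisioning wait loop; this exact session was not captured):

    # On s1, using the kubeconfig written by the installer.
    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    kubectl get nodes              # "No resources found." -- no node ever registers
    kubectl get node "$(hostname)" # Error from server (NotFound): nodes "s1" not found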

These are the k3s logs:

root@s1:~# journalctl -u k3s --no-pager
-- Logs begin at Wed 2019-07-17 22:13:53 WEST, end at Wed 2019-07-17 22:19:53 WEST. --
Jul 17 22:16:50 s1 systemd[1]: Starting Lightweight Kubernetes...
Jul 17 22:16:50 s1 k3s[1799]: time="2019-07-17T22:16:50+01:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/d4c618b8ac6283b57e1508fc0576ca3fa03418ca7c30703a2263a3f3ac3a9d97"
Jul 17 22:16:52 s1 k3s[1799]: time="2019-07-17T22:16:52.034670496+01:00" level=info msg="Starting k3s v0.7.0-rc8 (13845df0)"
Jul 17 22:16:52 s1 k3s[1799]: time="2019-07-17T22:16:52.760497581+01:00" level=info msg="Running kube-apiserver --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --api-audiences=unknown --requestheader-username-headers=X-Remote-User --allow-privileged=true --authorization-mode=Node,RBAC --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key --service-account-issuer=k3s --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --enable-admission-plugins=NodeRestriction --advertise-port=6443 --secure-port=6444 --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --advertise-address=10.11.0.101 --insecure-port=0 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-group-headers=X-Remote-Group --bind-address=127.0.0.1 --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --service-cluster-ip-range=10.13.0.0/16 --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --requestheader-allowed-names=system:auth-proxy"
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.390259    1799 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.390645    1799 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.390703    1799 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.390745    1799 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.390777    1799 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.390803    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: W0717 22:16:53.550692    1799 genericapiserver.go:315] Skipping API batch/v2alpha1 because it has no resources.
Jul 17 22:16:53 s1 k3s[1799]: W0717 22:16:53.560765    1799 genericapiserver.go:315] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.592861    1799 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.593307    1799 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.593534    1799 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.593695    1799 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.593839    1799 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.593987    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Jul 17 22:16:53 s1 k3s[1799]: time="2019-07-17T22:16:53.603994736+01:00" level=info msg="Running kube-scheduler --leader-elect=false --port=10251 --bind-address=127.0.0.1 --secure-port=0 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig"
Jul 17 22:16:53 s1 k3s[1799]: time="2019-07-17T22:16:53.610558620+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --port=10252 --secure-port=0 --use-service-account-credentials=true --bind-address=127.0.0.1 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --cluster-cidr=10.12.0.0/16 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --leader-elect=false"
Jul 17 22:16:53 s1 k3s[1799]: W0717 22:16:53.617358    1799 authorization.go:47] Authorization is disabled
Jul 17 22:16:53 s1 k3s[1799]: W0717 22:16:53.617382    1799 authentication.go:55] Authentication is disabled
Jul 17 22:16:53 s1 k3s[1799]: time="2019-07-17T22:16:53.709932513+01:00" level=info msg="Creating CRD listenerconfigs.k3s.cattle.io"
Jul 17 22:16:53 s1 k3s[1799]: time="2019-07-17T22:16:53.733805122+01:00" level=info msg="Creating CRD addons.k3s.cattle.io"
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.734200    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.734412    1799 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.734601    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.734774    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.734967    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.735180    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.735339    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.735507    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.735683    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.735877    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Jul 17 22:16:53 s1 k3s[1799]: time="2019-07-17T22:16:53.737794486+01:00" level=info msg="Creating CRD helmcharts.helm.cattle.io"
Jul 17 22:16:53 s1 k3s[1799]: time="2019-07-17T22:16:53.752807730+01:00" level=info msg="Waiting for CRD listenerconfigs.k3s.cattle.io to become available"
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.847818    1799 controller.go:147] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.13.0.1": cannot allocate resources of type serviceipallocations at this time
Jul 17 22:16:53 s1 k3s[1799]: E0717 22:16:53.848382    1799 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/10.11.0.101, ResourceVersion: 0, AdditionalErrorMsg:
Jul 17 22:16:54 s1 k3s[1799]: time="2019-07-17T22:16:54.259832471+01:00" level=info msg="Done waiting for CRD listenerconfigs.k3s.cattle.io to become available"
Jul 17 22:16:54 s1 k3s[1799]: time="2019-07-17T22:16:54.260170429+01:00" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
Jul 17 22:16:54 s1 k3s[1799]: E0717 22:16:54.736136    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Jul 17 22:16:54 s1 k3s[1799]: E0717 22:16:54.743370    1799 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Jul 17 22:16:54 s1 k3s[1799]: E0717 22:16:54.753603    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Jul 17 22:16:54 s1 k3s[1799]: time="2019-07-17T22:16:54.764253717+01:00" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
Jul 17 22:16:54 s1 k3s[1799]: time="2019-07-17T22:16:54.764638597+01:00" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
Jul 17 22:16:54 s1 k3s[1799]: E0717 22:16:54.765078    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Jul 17 22:16:54 s1 k3s[1799]: E0717 22:16:54.775135    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Jul 17 22:16:54 s1 k3s[1799]: E0717 22:16:54.777828    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Jul 17 22:16:54 s1 k3s[1799]: E0717 22:16:54.785313    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Jul 17 22:16:54 s1 k3s[1799]: E0717 22:16:54.794372    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Jul 17 22:16:54 s1 k3s[1799]: E0717 22:16:54.794665    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Jul 17 22:16:54 s1 k3s[1799]: E0717 22:16:54.802627    1799 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Jul 17 22:16:55 s1 k3s[1799]: time="2019-07-17T22:16:55.267658485+01:00" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
Jul 17 22:16:55 s1 k3s[1799]: time="2019-07-17T22:16:55.279811965+01:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.64.0.tgz"
Jul 17 22:16:55 s1 k3s[1799]: time="2019-07-17T22:16:55.280266398+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Jul 17 22:16:55 s1 k3s[1799]: time="2019-07-17T22:16:55.280437843+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Jul 17 22:16:55 s1 k3s[1799]: time="2019-07-17T22:16:55.280565322+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.280802    1799 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.280841    1799 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.280873    1799 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.280945    1799 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.281009    1799 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.281028    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.281162    1799 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: time="2019-07-17T22:16:55.287839047+01:00" level=info msg="Listening on :6443"
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.298150    1799 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.298185    1799 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.298208    1799 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.298230    1799 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.298248    1799 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.299836    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.299926    1799 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: time="2019-07-17T22:16:55.802774437+01:00" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Jul 17 22:16:55 s1 k3s[1799]: time="2019-07-17T22:16:55.904910025+01:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Jul 17 22:16:55 s1 k3s[1799]: time="2019-07-17T22:16:55.904945792+01:00" level=info msg="To join node to cluster: k3s agent -s https://192.168.121.143:6443 -t ${NODE_TOKEN}"
Jul 17 22:16:55 s1 k3s[1799]: time="2019-07-17T22:16:55.908703401+01:00" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.942980    1799 prometheus.go:138] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.943277    1799 prometheus.go:150] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.943476    1799 prometheus.go:162] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.943653    1799 prometheus.go:174] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.943851    1799 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.944028    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.944222    1799 prometheus.go:214] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.944769    1799 prometheus.go:138] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.944985    1799 prometheus.go:150] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.945171    1799 prometheus.go:162] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.945362    1799 prometheus.go:174] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.945541    1799 prometheus.go:189] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.945721    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.945926    1799 prometheus.go:214] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.946181    1799 prometheus.go:138] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.946357    1799 prometheus.go:150] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.946555    1799 prometheus.go:162] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.946749    1799 prometheus.go:174] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.946934    1799 prometheus.go:189] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.947107    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.947292    1799 prometheus.go:214] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.947584    1799 prometheus.go:138] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.947814    1799 prometheus.go:150] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.948011    1799 prometheus.go:162] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.948182    1799 prometheus.go:174] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.948354    1799 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.948541    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.948784    1799 prometheus.go:214] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.949057    1799 prometheus.go:138] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.949242    1799 prometheus.go:150] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.949479    1799 prometheus.go:162] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.949711    1799 prometheus.go:174] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.950021    1799 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.950287    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.950579    1799 prometheus.go:214] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.950928    1799 prometheus.go:138] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.951168    1799 prometheus.go:150] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.951418    1799 prometheus.go:162] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.951660    1799 prometheus.go:174] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.951842    1799 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.951993    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Jul 17 22:16:55 s1 k3s[1799]: E0717 22:16:55.952168    1799 prometheus.go:214] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Jul 17 22:16:56 s1 k3s[1799]: time="2019-07-17T22:16:56.163320005+01:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Jul 17 22:16:56 s1 k3s[1799]: time="2019-07-17T22:16:56.163369216+01:00" level=info msg="Run: k3s kubectl"
Jul 17 22:16:56 s1 k3s[1799]: time="2019-07-17T22:16:56.163381636+01:00" level=info msg="k3s is up and running"
Jul 17 22:16:56 s1 systemd[1]: Started Lightweight Kubernetes.
Jul 17 22:16:56 s1 k3s[1799]: time="2019-07-17T22:16:56.980609761+01:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
Jul 17 22:16:57 s1 k3s[1799]: W0717 22:16:57.084856    1799 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.11.0.101]
Jul 17 22:16:57 s1 k3s[1799]: time="2019-07-17T22:16:57.489604944+01:00" level=info msg="Starting batch/v1, Kind=Job controller"
Jul 17 22:16:58 s1 k3s[1799]: time="2019-07-17T22:16:58.195619784+01:00" level=info msg="Starting /v1, Kind=Service controller"
Jul 17 22:16:58 s1 k3s[1799]: time="2019-07-17T22:16:58.295927538+01:00" level=info msg="Starting /v1, Kind=Pod controller"
Jul 17 22:16:58 s1 k3s[1799]: time="2019-07-17T22:16:58.396134020+01:00" level=info msg="Starting /v1, Kind=Endpoints controller"
Jul 17 22:16:58 s1 k3s[1799]: time="2019-07-17T22:16:58.496321945+01:00" level=info msg="Starting /v1, Kind=Node controller"
Jul 17 22:17:08 s1 k3s[1799]: W0717 22:17:08.681017    1799 shared_informer.go:312] resyncPeriod 64820815310104 is smaller than resyncCheckPeriod 78963946359852 and the informer has already started. Changing it to 78963946359852
Jul 17 22:17:08 s1 k3s[1799]: E0717 22:17:08.683455    1799 resource_quota_controller.go:171] initial monitor sync has error: [couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts"]
Jul 17 22:17:08 s1 k3s[1799]: E0717 22:17:08.724468    1799 prometheus.go:138] failed to register depth metric certificate: duplicate metrics collector registration attempted
Jul 17 22:17:08 s1 k3s[1799]: E0717 22:17:08.724519    1799 prometheus.go:150] failed to register adds metric certificate: duplicate metrics collector registration attempted
Jul 17 22:17:08 s1 k3s[1799]: E0717 22:17:08.724576    1799 prometheus.go:162] failed to register latency metric certificate: duplicate metrics collector registration attempted
Jul 17 22:17:08 s1 k3s[1799]: E0717 22:17:08.724631    1799 prometheus.go:174] failed to register work_duration metric certificate: duplicate metrics collector registration attempted
Jul 17 22:17:08 s1 k3s[1799]: E0717 22:17:08.724664    1799 prometheus.go:189] failed to register unfinished_work_seconds metric certificate: duplicate metrics collector registration attempted
Jul 17 22:17:08 s1 k3s[1799]: E0717 22:17:08.724692    1799 prometheus.go:202] failed to register longest_running_processor_microseconds metric certificate: duplicate metrics collector registration attempted
Jul 17 22:17:08 s1 k3s[1799]: E0717 22:17:08.724746    1799 prometheus.go:214] failed to register retries metric certificate: duplicate metrics collector registration attempted
Jul 17 22:17:08 s1 k3s[1799]: W0717 22:17:08.724789    1799 controllermanager.go:445] Skipping "root-ca-cert-publisher"
Jul 17 22:17:08 s1 k3s[1799]: W0717 22:17:08.903557    1799 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 17 22:17:12 s1 k3s[1799]: E0717 22:17:12.004615    1799 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"]
Jul 17 22:17:12 s1 k3s[1799]: E0717 22:17:12.195658    1799 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
Jul 17 22:17:12 s1 k3s[1799]: E0717 22:17:12.239054    1799 scheduler.go:481] error selecting node for pod: no nodes available to schedule pods
Jul 17 22:17:12 s1 k3s[1799]: E0717 22:17:12.254574    1799 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
Jul 17 22:17:12 s1 k3s[1799]: E0717 22:17:12.259620    1799 scheduler.go:481] error selecting node for pod: no nodes available to schedule pods
Jul 17 22:18:23 s1 k3s[1799]: E0717 22:18:23.618554    1799 scheduler.go:481] error selecting node for pod: no nodes available to schedule pods
Jul 17 22:18:23 s1 k3s[1799]: E0717 22:18:23.619903    1799 scheduler.go:481] error selecting node for pod: no nodes available to schedule pods
Jul 17 22:19:53 s1 k3s[1799]: E0717 22:19:53.619699    1799 scheduler.go:481] error selecting node for pod: no nodes available to schedule pods
Jul 17 22:19:53 s1 k3s[1799]: E0717 22:19:53.625798    1799 scheduler.go:481] error selecting node for pod: no nodes available to schedule pods

The expected k3s sockets (6443, 6444, 10251, 10252) seem to be open:

root@s1:~# ss -n --tcp --listening --processes
State             Recv-Q            Send-Q                       Local Address:Port                        Peer Address:Port                                                                                
LISTEN            0                 64                                 0.0.0.0:43153                            0.0.0.0:*                                                                                   
LISTEN            0                 128                                0.0.0.0:22                               0.0.0.0:*               users:(("sshd",pid=493,fd=3))                                       
LISTEN            0                 128                                0.0.0.0:57195                            0.0.0.0:*               users:(("rpc.statd",pid=1063,fd=9))                                 
LISTEN            0                 128                              127.0.0.1:6444                             0.0.0.0:*               users:(("k3s-server",pid=1799,fd=5))                                
LISTEN            0                 128                                0.0.0.0:111                              0.0.0.0:*               users:(("rpcbind",pid=222,fd=4),("systemd",pid=1,fd=66))            
LISTEN            0                 128                                   [::]:22                                  [::]:*               users:(("sshd",pid=493,fd=4))                                       
LISTEN            0                 128                                   [::]:40673                               [::]:*               users:(("rpc.statd",pid=1063,fd=11))                                
LISTEN            0                 128                                      *:6443                                   *:*               users:(("k3s-server",pid=1799,fd=13))                               
LISTEN            0                 128                                      *:10251                                  *:*               users:(("k3s-server",pid=1799,fd=18))                               
LISTEN            0                 64                                    [::]:42859                               [::]:*                                                                                   
LISTEN            0                 128                                      *:10252                                  *:*               users:(("k3s-server",pid=1799,fd=37))                               
LISTEN            0                 128                                   [::]:111                                 [::]:*               users:(("rpcbind",pid=222,fd=6),("systemd",pid=1,fd=68))       
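
To confirm that the apiserver behind :6443 is actually answering (and not just listening), something like the following should work; it was not captured in this session:

    # Not captured above; uses the kubeconfig the installer wrote.
    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    kubectl get --raw /healthz                # expect "ok" if the apiserver is healthy
    kubectl --namespace kube-system get pods  # pods stay Pending: there is no node to schedule them on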

And systemctl shows the k3s service as active and running:

root@s1:~# systemctl status k3s
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-07-17 22:16:56 WEST; 24min ago
     Docs: https://k3s.io
  Process: 1795 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
  Process: 1797 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 1799 (k3s-server)
    Tasks: 13
   Memory: 204.5M
   CGroup: /system.slice/k3s.service
           └─1799 /usr/local/bin/k3s server --disable-agent --node-ip 10.11.0.101 --cluster-cidr 10.12.0.0/16 --service-cidr 10.13.0.0/16 --cluster-dns 10.13.0.10 --cluster-domain cluster.local --flannel-

Jul 17 22:34:53 s1 k3s[1799]: E0717 22:34:53.628487    1799 scheduler.go:481] error selecting node for pod: no nodes available to schedule pods
Jul 17 22:34:53 s1 k3s[1799]: E0717 22:34:53.629564    1799 scheduler.go:481] error selecting node for pod: no nodes available to schedule pods
Jul 17 22:36:23 s1 k3s[1799]: E0717 22:36:23.629381    1799 scheduler.go:481] error selecting node for pod: no nodes available to schedule pods
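
Note that k3s itself reports "k3s is up and running" and systemd considers the unit active; what never happens is node registration. With --disable-agent there is presumably no local kubelet to register the server as a node, so a readiness wait like the one in the provisioning script can never succeed. If the wait were to be kept while the agent is disabled, one option (a sketch only, not the actual provision-k3s-server.sh) would be to wait for the apiserver instead of a Ready node:

    # Hypothetical replacement for the node-readiness wait when --disable-agent is used:
    # wait for the apiserver health endpoint instead of a local node that never appears.
    echo 'waiting for the k3s apiserver...'
    while ! kubectl get --raw /healthz >/dev/null 2>&1; do
      sleep 3
    done
    echo 'k3s apiserver ready!'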
