@alexellis · Last active November 1, 2019 21:40
k3s-error-log.txt
-- Logs begin at Thu 2019-04-11 16:28:37 UTC, end at Fri 2019-11-01 21:36:28 UTC. --
Nov 01 21:16:15 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:16:15 ubuntu k3s[1976]: time="2019-11-01T21:16:15Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/d4f98b2da1abb7c68677600aa20f0b207b7bd6614825307f0c6684cb84a6abed"
Nov 01 21:16:19 ubuntu k3s[1976]: time="2019-11-01T21:16:19.609558579Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:16:19 ubuntu k3s[1976]: time="2019-11-01T21:16:19.643640499Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:16:19 ubuntu k3s[1976]: time="2019-11-01T21:16:19.646841467Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:16:20 ubuntu k3s[1976]: time="2019-11-01T21:16:20.522263214Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:16:20 ubuntu k3s[1976]: I1101 21:16:20.523696 1976 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:16:20 ubuntu k3s[1976]: I1101 21:16:20.524338 1976 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:16:21 ubuntu k3s[1976]: I1101 21:16:21.746787 1976 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:16:21 ubuntu k3s[1976]: I1101 21:16:21.746858 1976 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749140 1976 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749243 1976 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749313 1976 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749374 1976 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749450 1976 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749509 1976 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749566 1976 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749620 1976 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749743 1976 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749842 1976 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749909 1976 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: E1101 21:16:21.749972 1976 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:21 ubuntu k3s[1976]: I1101 21:16:21.750038 1976 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:16:21 ubuntu k3s[1976]: I1101 21:16:21.750065 1976 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:16:21 ubuntu k3s[1976]: I1101 21:16:21.785716 1976 master.go:233] Using reconciler: lease
Nov 01 21:16:22 ubuntu k3s[1976]: W1101 21:16:22.507799 1976 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:16:22 ubuntu k3s[1976]: W1101 21:16:22.529090 1976 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:22 ubuntu k3s[1976]: W1101 21:16:22.540509 1976 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:22 ubuntu k3s[1976]: W1101 21:16:22.543160 1976 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:22 ubuntu k3s[1976]: W1101 21:16:22.550587 1976 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744072 1976 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744192 1976 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744265 1976 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744325 1976 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744392 1976 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744450 1976 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744501 1976 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744553 1976 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744705 1976 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744811 1976 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744876 1976 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: E1101 21:16:24.744932 1976 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:24 ubuntu k3s[1976]: I1101 21:16:24.744990 1976 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:16:24 ubuntu k3s[1976]: I1101 21:16:24.745022 1976 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:16:29 ubuntu k3s[1976]: time="2019-11-01T21:16:29.882448693Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:16:29 ubuntu k3s[1976]: time="2019-11-01T21:16:29.883441387Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.885293 1976 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.886678 1976 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.886820 1976 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.886838 1976 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.886917 1976 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.886934 1976 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.887867 1976 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.887895 1976 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.887963 1976 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.903259 1976 controller.go:83] Starting OpenAPI controller
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.903356 1976 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.903405 1976 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.903445 1976 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.903485 1976 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.903586 1976 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.903606 1976 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.915908 1976 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.916453 1976 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.929652 1976 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.931173 1976 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:16:29 ubuntu k3s[1976]: W1101 21:16:29.933261 1976 authorization.go:47] Authorization is disabled
Nov 01 21:16:29 ubuntu k3s[1976]: W1101 21:16:29.933652 1976 authentication.go:55] Authentication is disabled
Nov 01 21:16:29 ubuntu k3s[1976]: I1101 21:16:29.933874 1976 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:16:30 ubuntu k3s[1976]: I1101 21:16:30.007496 1976 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.015323 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.016768 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.017426 1976 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.017973 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.039400 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.045206 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.045422 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.045627 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.045791 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.049010 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:16:30 ubuntu k3s[1976]: time="2019-11-01T21:16:30.051694429Z" level=info msg="Creating CRD listenerconfigs.k3s.cattle.io"
Nov 01 21:16:30 ubuntu k3s[1976]: time="2019-11-01T21:16:30.064877521Z" level=info msg="Creating CRD addons.k3s.cattle.io"
Nov 01 21:16:30 ubuntu k3s[1976]: time="2019-11-01T21:16:30.074219966Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.077593 1976 controller.go:147] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time
Nov 01 21:16:30 ubuntu k3s[1976]: E1101 21:16:30.085072 1976 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.0.37, ResourceVersion: 0, AdditionalErrorMsg:
Nov 01 21:16:30 ubuntu k3s[1976]: I1101 21:16:30.090139 1976 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:16:30 ubuntu k3s[1976]: I1101 21:16:30.090268 1976 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:16:30 ubuntu k3s[1976]: I1101 21:16:30.090313 1976 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:16:30 ubuntu k3s[1976]: time="2019-11-01T21:16:30.095452515Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
Nov 01 21:16:30 ubuntu k3s[1976]: time="2019-11-01T21:16:30.606646932Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
Nov 01 21:16:30 ubuntu k3s[1976]: time="2019-11-01T21:16:30.607119020Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
Nov 01 21:16:30 ubuntu k3s[1976]: I1101 21:16:30.879936 1976 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:16:30 ubuntu k3s[1976]: I1101 21:16:30.880041 1976 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:16:30 ubuntu k3s[1976]: I1101 21:16:30.880118 1976 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:16:30 ubuntu k3s[1976]: I1101 21:16:30.900118 1976 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
Nov 01 21:16:30 ubuntu k3s[1976]: I1101 21:16:30.911944 1976 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
Nov 01 21:16:30 ubuntu k3s[1976]: I1101 21:16:30.912009 1976 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.019809 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.023139 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.040254 1976 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.052122 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.055756 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.057643 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.064513 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.067400 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.073536 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.075974 1976 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.113945147Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.146350494Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.147254744Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.147613722Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.148052755Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.148549 1976 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.148735 1976 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.148837 1976 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.148925 1976 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.148986 1976 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.149042 1976 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.149134 1976 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.187670253Z" level=error msg="Update cert unable to convert string to cert: Unable to split cert into two parts"
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.188057564Z" level=info msg="Listening on :6443"
Nov 01 21:16:31 ubuntu k3s[1976]: I1101 21:16:31.224254 1976 controller.go:606] quota admission added evaluator for: listenerconfigs.k3s.cattle.io
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.245450 1976 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.245666 1976 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.247029 1976 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.248260 1976 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.249679 1976 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.250524 1976 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.251426 1976 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.754251072Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:16:31 ubuntu k3s[1976]: I1101 21:16:31.776540 1976 controller.go:606] quota admission added evaluator for: serviceaccounts
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.854432028Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.854651600Z" level=error msg="Update cert unable to convert string to cert: Unable to split cert into two parts"
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.856107938Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:16:31 ubuntu k3s[1976]: time="2019-11-01T21:16:31.856693358Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.856823 1976 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.857314 1976 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.857445 1976 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.857568 1976 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.857640 1976 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.857698 1976 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.857811 1976 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.858232 1976 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.858562 1976 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.858705 1976 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.858815 1976 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.858874 1976 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.858922 1976 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.859029 1976 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.859249 1976 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.859308 1976 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.859404 1976 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.859513 1976 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.859577 1976 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.859628 1976 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.859722 1976 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.860106 1976 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.860168 1976 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.860296 1976 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.860433 1976 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.860492 1976 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.860542 1976 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.860693 1976 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.860935 1976 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.860989 1976 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.861122 1976 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.861238 1976 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.861293 1976 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.861339 1976 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.861435 1976 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.861673 1976 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.861723 1976 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.861827 1976 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.861928 1976 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.861981 1976 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.862036 1976 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: E1101 21:16:31.862124 1976 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:16:31 ubuntu k3s[1976]: I1101 21:16:31.922475 1976 controller.go:606] quota admission added evaluator for: deployments.extensions
Nov 01 21:16:32 ubuntu k3s[1976]: I1101 21:16:32.021854 1976 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:16:32 ubuntu k3s[1976]: I1101 21:16:32.212266 1976 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io
Nov 01 21:16:32 ubuntu k3s[1976]: time="2019-11-01T21:16:32.329367105Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:16:32 ubuntu k3s[1976]: time="2019-11-01T21:16:32.329449494Z" level=info msg="Run: k3s kubectl"
Nov 01 21:16:32 ubuntu k3s[1976]: time="2019-11-01T21:16:32.329473493Z" level=info msg="k3s is up and running"
Nov 01 21:16:32 ubuntu k3s[1976]: time="2019-11-01T21:16:32.329816694Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:16:32 ubuntu k3s[1976]: time="2019-11-01T21:16:32.329887693Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:16:32 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:16:32 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:16:32 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:16:37 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:16:37 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1.
Nov 01 21:16:37 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
Nov 01 21:16:37 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:16:38 ubuntu k3s[2012]: time="2019-11-01T21:16:38.043617068Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:16:38 ubuntu k3s[2012]: time="2019-11-01T21:16:38.055258805Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:16:38 ubuntu k3s[2012]: time="2019-11-01T21:16:38.056711069Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:16:38 ubuntu k3s[2012]: time="2019-11-01T21:16:38.098097811Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:16:38 ubuntu k3s[2012]: I1101 21:16:38.099452 2012 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:16:38 ubuntu k3s[2012]: I1101 21:16:38.099989 2012 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:16:38 ubuntu k3s[2012]: I1101 21:16:38.115306 2012 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:16:38 ubuntu k3s[2012]: I1101 21:16:38.115377 2012 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.117667 2012 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.117776 2012 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.117856 2012 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.117936 2012 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.118016 2012 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.118072 2012 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.118118 2012 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.118164 2012 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.118321 2012 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.118432 2012 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.118491 2012 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: E1101 21:16:38.118539 2012 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:38 ubuntu k3s[2012]: I1101 21:16:38.118593 2012 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:16:38 ubuntu k3s[2012]: I1101 21:16:38.118620 2012 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:16:38 ubuntu k3s[2012]: I1101 21:16:38.158527 2012 master.go:233] Using reconciler: lease
Nov 01 21:16:38 ubuntu k3s[2012]: W1101 21:16:38.885134 2012 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:16:38 ubuntu k3s[2012]: W1101 21:16:38.907991 2012 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:38 ubuntu k3s[2012]: W1101 21:16:38.921114 2012 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:38 ubuntu k3s[2012]: W1101 21:16:38.923981 2012 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:38 ubuntu k3s[2012]: W1101 21:16:38.931393 2012 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.140764 2012 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.140884 2012 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.140953 2012 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.141013 2012 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.141078 2012 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.141139 2012 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.141192 2012 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.141242 2012 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.141362 2012 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.141476 2012 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.141539 2012 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: E1101 21:16:41.141592 2012 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:41 ubuntu k3s[2012]: I1101 21:16:41.141652 2012 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:16:41 ubuntu k3s[2012]: I1101 21:16:41.141676 2012 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:16:46 ubuntu k3s[2012]: time="2019-11-01T21:16:46.317170098Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:16:46 ubuntu k3s[2012]: time="2019-11-01T21:16:46.319009617Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.320007 2012 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.321558 2012 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.321625 2012 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.321723 2012 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.321743 2012 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.321785 2012 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.321863 2012 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.321897 2012 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.322918 2012 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.323483 2012 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.323881 2012 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.324431 2012 controller.go:83] Starting OpenAPI controller
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.324921 2012 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.325305 2012 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.325660 2012 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.325998 2012 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.341051 2012 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.342257 2012 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.356806 2012 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.356935 2012 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:16:46 ubuntu k3s[2012]: W1101 21:16:46.359498 2012 authorization.go:47] Authorization is disabled
Nov 01 21:16:46 ubuntu k3s[2012]: W1101 21:16:46.359557 2012 authentication.go:55] Authentication is disabled
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.359584 2012 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.444730 2012 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.470258 2012 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.470846 2012 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.471480 2012 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.472129 2012 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.472761 2012 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.473292 2012 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.473895 2012 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.474620 2012 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.475315 2012 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.475898 2012 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
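
The burst of "forbidden" errors above comes from the kube-scheduler informers listing resources before the apiserver has finished reconciling the default RBAC roles; on a normal start they typically stop once the control plane is up. As a sanity check afterwards, the bootstrap binding can be inspected with the bundled kubectl (a sketch, assuming the kubeconfig written to /etc/rancher/k3s/k3s.yaml later in this log is in place):

  # confirm the bootstrap ClusterRoleBinding for the scheduler exists
  sudo k3s kubectl get clusterrolebinding system:kube-scheduler
  # show which subject it binds (expected: User system:kube-scheduler)
  sudo k3s kubectl get clusterrolebinding system:kube-scheduler -o yaml
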
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.521862 2012 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:16:46 ubuntu k3s[2012]: W1101 21:16:46.522101 2012 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.0.37]
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.522919 2012 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.524535 2012 controller.go:606] quota admission added evaluator for: endpoints
Nov 01 21:16:46 ubuntu k3s[2012]: time="2019-11-01T21:16:46.530731885Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:16:46 ubuntu k3s[2012]: time="2019-11-01T21:16:46.532105242Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:16:46 ubuntu k3s[2012]: I1101 21:16:46.532742 2012 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:16:46 ubuntu k3s[2012]: time="2019-11-01T21:16:46.533588987Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:16:46 ubuntu k3s[2012]: time="2019-11-01T21:16:46.534443478Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.535373 2012 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.536078 2012 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.536580 2012 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.537175 2012 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.537648 2012 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.538093 2012 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.538638 2012 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.554007 2012 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:16:46 ubuntu k3s[2012]: time="2019-11-01T21:16:46.557727918Z" level=info msg="Listening on :6443"
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.558130 2012 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.558208 2012 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.558325 2012 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.558433 2012 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.558493 2012 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.558546 2012 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:46 ubuntu k3s[2012]: E1101 21:16:46.558659 2012 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: time="2019-11-01T21:16:47.060331479Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:16:47 ubuntu k3s[2012]: time="2019-11-01T21:16:47.160475984Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.160793 2012 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.160865 2012 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.160999 2012 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.161135 2012 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.161206 2012 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.161256 2012 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.161358 2012 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.161787 2012 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.161848 2012 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.161944 2012 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.162054 2012 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.162130 2012 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.162179 2012 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.162279 2012 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.162489 2012 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.162541 2012 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.162647 2012 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.162770 2012 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.162841 2012 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.162895 2012 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.163053 2012 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.163439 2012 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.163507 2012 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.163620 2012 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.163737 2012 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.163804 2012 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.163862 2012 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.163961 2012 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.164196 2012 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.164260 2012 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.164364 2012 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.164464 2012 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.164523 2012 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.164576 2012 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.164706 2012 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.164921 2012 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.165028 2012 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.165144 2012 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.165257 2012 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.165319 2012 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.165374 2012 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: E1101 21:16:47.165472 2012 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:16:47 ubuntu k3s[2012]: I1101 21:16:47.185235 2012 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:16:47 ubuntu k3s[2012]: time="2019-11-01T21:16:47.258598083Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:16:47 ubuntu k3s[2012]: time="2019-11-01T21:16:47.259095912Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
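
The two lines above give everything needed to join an agent: the token path on the server and the join command itself. A minimal sketch, assuming a second machine that has k3s installed and can reach https://192.168.0.37:6443:

  # on the server: read the node token
  sudo cat /var/lib/rancher/k3s/server/node-token
  # on the agent: join, substituting the token printed above
  sudo k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}
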
Nov 01 21:16:47 ubuntu k3s[2012]: I1101 21:16:47.317996 2012 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:16:47 ubuntu k3s[2012]: I1101 21:16:47.318086 2012 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:16:47 ubuntu k3s[2012]: I1101 21:16:47.318124 2012 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:16:47 ubuntu k3s[2012]: I1101 21:16:47.385376 2012 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:16:48 ubuntu k3s[2012]: time="2019-11-01T21:16:48.171157042Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
Nov 01 21:16:48 ubuntu k3s[2012]: I1101 21:16:48.184693 2012 controller.go:606] quota admission added evaluator for: serviceaccounts
Nov 01 21:16:48 ubuntu k3s[2012]: time="2019-11-01T21:16:48.686442726Z" level=info msg="Starting batch/v1, Kind=Job controller"
Nov 01 21:16:48 ubuntu k3s[2012]: time="2019-11-01T21:16:48.900566900Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:16:48 ubuntu k3s[2012]: time="2019-11-01T21:16:48.900692028Z" level=info msg="Run: k3s kubectl"
Nov 01 21:16:48 ubuntu k3s[2012]: time="2019-11-01T21:16:48.900720065Z" level=info msg="k3s is up and running"
Nov 01 21:16:48 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:16:48 ubuntu k3s[2012]: time="2019-11-01T21:16:48.903190782Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:16:48 ubuntu k3s[2012]: time="2019-11-01T21:16:48.903294707Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:16:48 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:16:48 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
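
The fatal message above is what sends the service into the restart loop that follows: the kernel was booted without the memory cgroup enabled. A minimal sketch of the fix the message itself suggests, assuming a Raspberry Pi style boot configuration (the file is /boot/cmdline.txt on Raspbian; on Ubuntu Raspberry Pi images it is usually /boot/firmware/cmdline.txt):

  # check whether the memory cgroup is enabled (last column 0 means disabled)
  grep memory /proc/cgroups
  # append the flags to the end of the single kernel command line, then reboot
  sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
  sudo reboot
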
Nov 01 21:16:53 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:16:53 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 2.
Nov 01 21:16:53 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
Nov 01 21:16:53 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:16:54 ubuntu k3s[2040]: time="2019-11-01T21:16:54.548974598Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:16:54 ubuntu k3s[2040]: time="2019-11-01T21:16:54.559691845Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:16:54 ubuntu k3s[2040]: time="2019-11-01T21:16:54.561406773Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:16:54 ubuntu k3s[2040]: time="2019-11-01T21:16:54.602563115Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:16:54 ubuntu k3s[2040]: I1101 21:16:54.603867 2040 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:16:54 ubuntu k3s[2040]: I1101 21:16:54.604437 2040 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:16:54 ubuntu k3s[2040]: I1101 21:16:54.620251 2040 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:16:54 ubuntu k3s[2040]: I1101 21:16:54.620324 2040 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.622746 2040 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.622860 2040 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.622941 2040 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.623008 2040 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.623081 2040 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.623142 2040 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.623193 2040 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.623248 2040 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.623457 2040 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.623700 2040 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.623769 2040 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: E1101 21:16:54.623844 2040 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:54 ubuntu k3s[2040]: I1101 21:16:54.623922 2040 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:16:54 ubuntu k3s[2040]: I1101 21:16:54.623950 2040 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:16:54 ubuntu k3s[2040]: I1101 21:16:54.670594 2040 master.go:233] Using reconciler: lease
Nov 01 21:16:55 ubuntu k3s[2040]: W1101 21:16:55.429678 2040 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:16:55 ubuntu k3s[2040]: W1101 21:16:55.452468 2040 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:55 ubuntu k3s[2040]: W1101 21:16:55.465578 2040 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:55 ubuntu k3s[2040]: W1101 21:16:55.468397 2040 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:55 ubuntu k3s[2040]: W1101 21:16:55.475708 2040 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.696725 2040 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.696843 2040 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.696915 2040 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.696974 2040 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.697051 2040 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.697108 2040 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.697158 2040 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.697208 2040 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.697368 2040 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.697471 2040 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.697527 2040 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: E1101 21:16:57.697586 2040 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:16:57 ubuntu k3s[2040]: I1101 21:16:57.697646 2040 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:16:57 ubuntu k3s[2040]: I1101 21:16:57.697672 2040 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:02 ubuntu k3s[2040]: time="2019-11-01T21:17:02.799553762Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:17:02 ubuntu k3s[2040]: time="2019-11-01T21:17:02.800671085Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.800358 2040 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.821018 2040 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.821083 2040 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.821884 2040 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.823141 2040 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.823206 2040 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.823277 2040 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.825780 2040 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.825876 2040 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.831765 2040 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.831851 2040 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.832193 2040 controller.go:83] Starting OpenAPI controller
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.832415 2040 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.832834 2040 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.833125 2040 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.833512 2040 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.888292 2040 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.888410 2040 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:17:02 ubuntu k3s[2040]: W1101 21:17:02.891017 2040 authorization.go:47] Authorization is disabled
Nov 01 21:17:02 ubuntu k3s[2040]: W1101 21:17:02.891078 2040 authentication.go:55] Authentication is disabled
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.891105 2040 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.896059 2040 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:17:02 ubuntu k3s[2040]: I1101 21:17:02.897179 2040 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:17:02 ubuntu k3s[2040]: E1101 21:17:02.992852 2040 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:17:02 ubuntu k3s[2040]: E1101 21:17:02.994286 2040 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:17:02 ubuntu k3s[2040]: E1101 21:17:02.995329 2040 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:17:02 ubuntu k3s[2040]: E1101 21:17:02.996303 2040 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:17:02 ubuntu k3s[2040]: E1101 21:17:02.997591 2040 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:17:02 ubuntu k3s[2040]: E1101 21:17:02.998658 2040 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:17:03 ubuntu k3s[2040]: I1101 21:17:03.021387 2040 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:17:03 ubuntu k3s[2040]: I1101 21:17:03.028540 2040 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:17:03 ubuntu k3s[2040]: I1101 21:17:03.030442 2040 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:17:03 ubuntu k3s[2040]: I1101 21:17:03.035765 2040 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.045185540Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.046268400Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.046927838Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.047485648Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.048198 2040 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.048305 2040 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.048455 2040 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.048596 2040 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.048900 2040 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.048963 2040 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.049077 2040 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.074326 2040 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.076249574Z" level=info msg="Listening on :6443"
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.077125 2040 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.077604 2040 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.077774 2040 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.077886 2040 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.077954 2040 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.078009 2040 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.078115 2040 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.579685552Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.679835864Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.680146 2040 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.680220 2040 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.680343 2040 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.680455 2040 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.680525 2040 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.680575 2040 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.680721 2040 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.681153 2040 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.681185999Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.681239 2040 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.681252110Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.681342 2040 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.681449 2040 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.681512 2040 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.681562 2040 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.681670 2040 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.681903 2040 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.681961 2040 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.682070 2040 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.682187 2040 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.682256 2040 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.682309 2040 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.682429 2040 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.682820 2040 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.682884 2040 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683031 2040 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683150 2040 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683205 2040 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683261 2040 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683364 2040 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683608 2040 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683665 2040 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683765 2040 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683864 2040 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683916 2040 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.683961 2040 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.684052 2040 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.684275 2040 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.684351 2040 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.684456 2040 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.684565 2040 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.684658 2040 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.684719 2040 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: E1101 21:17:03.684815 2040 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:17:03 ubuntu k3s[2040]: I1101 21:17:03.698997 2040 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:17:03 ubuntu k3s[2040]: I1101 21:17:03.784248 2040 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:17:03 ubuntu k3s[2040]: I1101 21:17:03.784312 2040 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:17:03 ubuntu k3s[2040]: I1101 21:17:03.839359 2040 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.850697901Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.851389765Z" level=info msg="Run: k3s kubectl"
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.851884353Z" level=info msg="k3s is up and running"
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.852755363Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:17:03 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:17:03 ubuntu k3s[2040]: time="2019-11-01T21:17:03.854354551Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:17:03 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:17:03 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:17:08 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:17:08 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 3.
Nov 01 21:17:08 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
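Note: the exit and restart loop above is driven by the fatal "failed to find memory cgroup" message at 21:17:03; k3s aborts because the memory cgroup controller is not enabled on the kernel command line, and systemd keeps rescheduling the unit every 5 seconds. A minimal sketch of the fix the log itself suggests, assuming a Raspberry Pi class board (on Raspberry Pi OS the boot arguments live in /boot/cmdline.txt; on Ubuntu Pi images the file usually sits under /boot/firmware/ -- check your image before editing):

  # Append the cgroup flags named in the log to the single kernel cmdline line, then reboot
  sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
  sudo reboot

  # After the reboot, the unit should stay up; verify with:
  sudo systemctl status k3s
  sudo journalctl -u k3s -f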
Nov 01 21:17:08 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:17:09 ubuntu k3s[2076]: time="2019-11-01T21:17:09.549836602Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:17:09 ubuntu k3s[2076]: time="2019-11-01T21:17:09.560802071Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:17:09 ubuntu k3s[2076]: time="2019-11-01T21:17:09.562155280Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:17:09 ubuntu k3s[2076]: time="2019-11-01T21:17:09.603965619Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:17:09 ubuntu k3s[2076]: I1101 21:17:09.605307 2076 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:17:09 ubuntu k3s[2076]: I1101 21:17:09.605873 2076 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:17:09 ubuntu k3s[2076]: I1101 21:17:09.620890 2076 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:09 ubuntu k3s[2076]: I1101 21:17:09.620964 2076 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.623392 2076 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.623490 2076 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.623556 2076 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.623617 2076 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.623684 2076 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.623739 2076 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.623791 2076 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.623842 2076 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.624044 2076 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.624202 2076 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.624268 2076 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: E1101 21:17:09.624326 2076 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:09 ubuntu k3s[2076]: I1101 21:17:09.624387 2076 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:09 ubuntu k3s[2076]: I1101 21:17:09.624413 2076 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:09 ubuntu k3s[2076]: I1101 21:17:09.667423 2076 master.go:233] Using reconciler: lease
Nov 01 21:17:10 ubuntu k3s[2076]: W1101 21:17:10.415823 2076 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:17:10 ubuntu k3s[2076]: W1101 21:17:10.438933 2076 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:10 ubuntu k3s[2076]: W1101 21:17:10.451923 2076 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:10 ubuntu k3s[2076]: W1101 21:17:10.454764 2076 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:10 ubuntu k3s[2076]: W1101 21:17:10.462265 2076 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664002 2076 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664125 2076 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664201 2076 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664265 2076 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664342 2076 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664401 2076 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664458 2076 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664509 2076 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664608 2076 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664740 2076 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664805 2076 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: E1101 21:17:12.664858 2076 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:12 ubuntu k3s[2076]: I1101 21:17:12.664916 2076 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:12 ubuntu k3s[2076]: I1101 21:17:12.664940 2076 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:17 ubuntu k3s[2076]: time="2019-11-01T21:17:17.728293207Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:17:17 ubuntu k3s[2076]: time="2019-11-01T21:17:17.729826396Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.731114 2076 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.731251 2076 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.732250 2076 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.732305 2076 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.733516 2076 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.733572 2076 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.733654 2076 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.733673 2076 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.734814 2076 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.735780 2076 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.735837 2076 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.748659 2076 controller.go:83] Starting OpenAPI controller
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.748745 2076 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.748788 2076 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.748827 2076 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.748899 2076 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.765508 2076 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.766667 2076 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.768116 2076 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.768220 2076 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:17:17 ubuntu k3s[2076]: W1101 21:17:17.774603 2076 authorization.go:47] Authorization is disabled
Nov 01 21:17:17 ubuntu k3s[2076]: W1101 21:17:17.775207 2076 authentication.go:55] Authentication is disabled
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.775573 2076 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.843895 2076 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.844911 2076 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.845598 2076 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.846119 2076 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.846538 2076 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.847060 2076 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.847678 2076 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.848167 2076 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.848608 2076 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.855028 2076 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.855902 2076 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:17:17 ubuntu k3s[2076]: time="2019-11-01T21:17:17.897215128Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:17:17 ubuntu k3s[2076]: time="2019-11-01T21:17:17.898280322Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:17:17 ubuntu k3s[2076]: time="2019-11-01T21:17:17.898764410Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:17:17 ubuntu k3s[2076]: time="2019-11-01T21:17:17.899111647Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.899606 2076 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.899685 2076 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.899805 2076 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.899972 2076 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.900036 2076 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.900094 2076 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.900199 2076 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: time="2019-11-01T21:17:17.928813289Z" level=info msg="Listening on :6443"
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.929125 2076 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.929186 2076 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.929297 2076 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.929378 2076 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.929435 2076 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.929491 2076 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.929579 2076 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.935506 2076 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.936311 2076 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:17:17 ubuntu k3s[2076]: I1101 21:17:17.937453 2076 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:17:17 ubuntu k3s[2076]: E1101 21:17:17.945191 2076 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:17:18 ubuntu k3s[2076]: time="2019-11-01T21:17:18.431461280Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:17:18 ubuntu k3s[2076]: time="2019-11-01T21:17:18.531631453Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:17:18 ubuntu k3s[2076]: time="2019-11-01T21:17:18.532984347Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:17:18 ubuntu k3s[2076]: time="2019-11-01T21:17:18.533069476Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
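Note: the "Node token is available at ..." and "To join node to cluster ..." lines above give everything needed to attach a worker. A minimal sketch, assuming the same k3s v0.9.1 binary is already installed on the worker and https://192.168.0.37:6443 is reachable from it:

  # On this server: print the join token
  sudo cat /var/lib/rancher/k3s/server/node-token

  # On the worker: join using that token (command as printed in the log line above)
  sudo k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}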
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.531925 2076 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.533226 2076 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.533381 2076 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.533492 2076 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.533553 2076 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.533603 2076 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.533726 2076 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.534108 2076 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.534165 2076 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.534294 2076 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.534387 2076 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.534443 2076 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.534489 2076 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.534584 2076 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.534787 2076 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.534834 2076 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.534932 2076 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.535021 2076 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.535077 2076 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.535129 2076 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.535225 2076 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.535601 2076 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.535667 2076 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.535785 2076 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.536800 2076 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.536903 2076 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.536974 2076 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.537094 2076 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: I1101 21:17:18.537128 2076 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.537346 2076 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.537417 2076 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.537558 2076 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.537672 2076 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.537742 2076 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.537793 2076 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.537905 2076 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.538183 2076 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.538238 2076 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.538373 2076 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.538468 2076 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.538527 2076 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.538574 2076 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: E1101 21:17:18.538675 2076 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:17:18 ubuntu k3s[2076]: time="2019-11-01T21:17:18.694451230Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:17:18 ubuntu k3s[2076]: time="2019-11-01T21:17:18.694527895Z" level=info msg="Run: k3s kubectl"
Nov 01 21:17:18 ubuntu k3s[2076]: time="2019-11-01T21:17:18.694552154Z" level=info msg="k3s is up and running"
Nov 01 21:17:18 ubuntu k3s[2076]: time="2019-11-01T21:17:18.694916151Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:17:18 ubuntu k3s[2076]: time="2019-11-01T21:17:18.694993058Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:17:18 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:17:18 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:17:18 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:17:23 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:17:23 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 4.
Nov 01 21:17:23 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
Nov 01 21:17:23 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:17:24 ubuntu k3s[2105]: time="2019-11-01T21:17:24.550016410Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:17:24 ubuntu k3s[2105]: time="2019-11-01T21:17:24.561055841Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:17:24 ubuntu k3s[2105]: time="2019-11-01T21:17:24.562537642Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:17:24 ubuntu k3s[2105]: time="2019-11-01T21:17:24.603673639Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:17:24 ubuntu k3s[2105]: I1101 21:17:24.605031 2105 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:17:24 ubuntu k3s[2105]: I1101 21:17:24.605589 2105 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:17:24 ubuntu k3s[2105]: I1101 21:17:24.620809 2105 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:24 ubuntu k3s[2105]: I1101 21:17:24.620882 2105 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.623139 2105 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.623235 2105 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.623307 2105 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.623375 2105 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.623447 2105 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.623498 2105 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.623550 2105 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.623603 2105 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.623847 2105 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.623950 2105 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.624016 2105 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: E1101 21:17:24.624072 2105 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:24 ubuntu k3s[2105]: I1101 21:17:24.624135 2105 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:24 ubuntu k3s[2105]: I1101 21:17:24.624159 2105 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:24 ubuntu k3s[2105]: I1101 21:17:24.665683 2105 master.go:233] Using reconciler: lease
Nov 01 21:17:25 ubuntu k3s[2105]: W1101 21:17:25.393904 2105 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:17:25 ubuntu k3s[2105]: W1101 21:17:25.416989 2105 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:25 ubuntu k3s[2105]: W1101 21:17:25.430105 2105 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:25 ubuntu k3s[2105]: W1101 21:17:25.433055 2105 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:25 ubuntu k3s[2105]: W1101 21:17:25.440535 2105 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.649621 2105 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.649744 2105 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.649816 2105 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.649875 2105 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.649941 2105 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.650001 2105 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.650051 2105 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.650101 2105 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.650216 2105 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.650327 2105 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.650389 2105 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: E1101 21:17:27.650444 2105 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:27 ubuntu k3s[2105]: I1101 21:17:27.650503 2105 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:27 ubuntu k3s[2105]: I1101 21:17:27.650526 2105 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:32 ubuntu k3s[2105]: time="2019-11-01T21:17:32.722587234Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:17:32 ubuntu k3s[2105]: time="2019-11-01T21:17:32.724462809Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.725470 2105 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.725883 2105 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.726031 2105 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.726054 2105 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.726175 2105 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.726256 2105 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.726274 2105 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.727614 2105 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.727678 2105 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.730456 2105 controller.go:83] Starting OpenAPI controller
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.730556 2105 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.730620 2105 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.730690 2105 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.730733 2105 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.730804 2105 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.730825 2105 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.747158 2105 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.747272 2105 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:17:32 ubuntu k3s[2105]: W1101 21:17:32.751477 2105 authorization.go:47] Authorization is disabled
Nov 01 21:17:32 ubuntu k3s[2105]: W1101 21:17:32.751602 2105 authentication.go:55] Authentication is disabled
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.751632 2105 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.769809 2105 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.771148 2105 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.831046 2105 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.838361 2105 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.839696 2105 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.842074 2105 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.842276 2105 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.845728 2105 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.845984 2105 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.846553 2105 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.856933 2105 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.857113 2105 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.877239 2105 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.878242 2105 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.879051 2105 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
Nov 01 21:17:32 ubuntu k3s[2105]: I1101 21:17:32.926276 2105 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.928134 2105 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:17:32 ubuntu k3s[2105]: time="2019-11-01T21:17:32.941021384Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:17:32 ubuntu k3s[2105]: time="2019-11-01T21:17:32.942064171Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:17:32 ubuntu k3s[2105]: time="2019-11-01T21:17:32.942668128Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:17:32 ubuntu k3s[2105]: time="2019-11-01T21:17:32.943156290Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.943762 2105 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.943848 2105 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.943960 2105 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.944066 2105 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.944125 2105 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.944180 2105 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.944303 2105 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: time="2019-11-01T21:17:32.965879371Z" level=info msg="Listening on :6443"
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.966265 2105 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.966347 2105 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.966473 2105 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.966608 2105 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.966679 2105 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.966781 2105 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:32 ubuntu k3s[2105]: E1101 21:17:32.966894 2105 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: time="2019-11-01T21:17:33.468666343Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:17:33 ubuntu k3s[2105]: time="2019-11-01T21:17:33.568796153Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.569073 2105 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.569138 2105 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.569238 2105 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.569338 2105 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.569395 2105 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.569444 2105 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.569533 2105 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.569991 2105 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.570059 2105 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.570161 2105 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.570265 2105 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.570321 2105 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.570387 2105 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.570498 2105 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.570759 2105 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.570834 2105 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.570943 2105 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.571060 2105 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.571126 2105 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.571183 2105 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.571301 2105 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: I1101 21:17:33.571414 2105 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.571669 2105 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.571737 2105 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.571841 2105 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.571941 2105 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.572004 2105 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.572060 2105 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.572153 2105 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.572363 2105 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.572422 2105 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.572511 2105 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.572605 2105 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.572715 2105 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.572769 2105 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.572858 2105 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.573096 2105 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.573152 2105 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.573243 2105 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.573343 2105 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.573400 2105 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.573459 2105 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: E1101 21:17:33.573542 2105 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:17:33 ubuntu k3s[2105]: time="2019-11-01T21:17:33.577734902Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:17:33 ubuntu k3s[2105]: time="2019-11-01T21:17:33.578439265Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:17:33 ubuntu k3s[2105]: I1101 21:17:33.719849 2105 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:17:33 ubuntu k3s[2105]: I1101 21:17:33.719923 2105 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:17:33 ubuntu k3s[2105]: I1101 21:17:33.719961 2105 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:17:33 ubuntu k3s[2105]: I1101 21:17:33.753998 2105 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:17:33 ubuntu k3s[2105]: time="2019-11-01T21:17:33.838494892Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:17:33 ubuntu k3s[2105]: time="2019-11-01T21:17:33.838591558Z" level=info msg="Run: k3s kubectl"
Nov 01 21:17:33 ubuntu k3s[2105]: time="2019-11-01T21:17:33.838620631Z" level=info msg="k3s is up and running"
Nov 01 21:17:33 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:17:33 ubuntu k3s[2105]: time="2019-11-01T21:17:33.841672066Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:17:33 ubuntu k3s[2105]: time="2019-11-01T21:17:33.841796564Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:17:33 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:17:33 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
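
The fatal message a few lines up is the real reason the unit exits: the kernel was booted without the memory cgroup controller, and k3s refuses to run without it. A sketch of the fix the log itself suggests, assuming a Raspberry Pi-style boot configuration (on Ubuntu Pi images the file is often /boot/firmware/cmdline.txt; adjust the path for your image):

# cmdline.txt is a single line; append the two flags k3s asks for
sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt

# The new kernel command line only takes effect after a reboot
sudo reboot
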
Nov 01 21:17:38 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:17:38 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 5.
Nov 01 21:17:38 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
Nov 01 21:17:38 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:17:39 ubuntu k3s[2127]: time="2019-11-01T21:17:39.543632321Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:17:39 ubuntu k3s[2127]: time="2019-11-01T21:17:39.554961270Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:17:39 ubuntu k3s[2127]: time="2019-11-01T21:17:39.556261424Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:17:39 ubuntu k3s[2127]: time="2019-11-01T21:17:39.597668273Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:17:39 ubuntu k3s[2127]: I1101 21:17:39.598946 2127 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:17:39 ubuntu k3s[2127]: I1101 21:17:39.599480 2127 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:17:39 ubuntu k3s[2127]: I1101 21:17:39.614502 2127 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:39 ubuntu k3s[2127]: I1101 21:17:39.614572 2127 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617026 2127 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617123 2127 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617204 2127 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617265 2127 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617338 2127 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617419 2127 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617474 2127 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617529 2127 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617682 2127 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617757 2127 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617821 2127 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: E1101 21:17:39.617876 2127 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:39 ubuntu k3s[2127]: I1101 21:17:39.617934 2127 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:39 ubuntu k3s[2127]: I1101 21:17:39.617960 2127 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:39 ubuntu k3s[2127]: I1101 21:17:39.660439 2127 master.go:233] Using reconciler: lease
Nov 01 21:17:40 ubuntu k3s[2127]: W1101 21:17:40.394098 2127 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:17:40 ubuntu k3s[2127]: W1101 21:17:40.417031 2127 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:40 ubuntu k3s[2127]: W1101 21:17:40.430373 2127 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:40 ubuntu k3s[2127]: W1101 21:17:40.433228 2127 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:40 ubuntu k3s[2127]: W1101 21:17:40.440661 2127 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.652663 2127 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.652833 2127 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.652909 2127 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.652970 2127 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.653035 2127 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.653096 2127 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.653148 2127 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.653199 2127 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.653320 2127 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.653424 2127 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.653495 2127 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: E1101 21:17:42.653548 2127 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:42 ubuntu k3s[2127]: I1101 21:17:42.653615 2127 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:42 ubuntu k3s[2127]: I1101 21:17:42.653643 2127 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:47 ubuntu k3s[2127]: time="2019-11-01T21:17:47.705543912Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:17:47 ubuntu k3s[2127]: time="2019-11-01T21:17:47.707370914Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.708478 2127 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.708668 2127 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.708906 2127 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.709086 2127 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.709120 2127 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.709257 2127 controller.go:83] Starting OpenAPI controller
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.709311 2127 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.709363 2127 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.709413 2127 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.709451 2127 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.709509 2127 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.709529 2127 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.710641 2127 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.710691 2127 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.710747 2127 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.710765 2127 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.743887 2127 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.753148 2127 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.773431 2127 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.774272 2127 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:17:47 ubuntu k3s[2127]: W1101 21:17:47.777046 2127 authorization.go:47] Authorization is disabled
Nov 01 21:17:47 ubuntu k3s[2127]: W1101 21:17:47.777696 2127 authentication.go:55] Authentication is disabled
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.778072 2127 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.844670 2127 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.845141 2127 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.845895 2127 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.861777 2127 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.862564 2127 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.863277 2127 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.863929 2127 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.865667 2127 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.866695 2127 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.876295 2127 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
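
The burst of "forbidden" errors above is typical of the scheduler starting before the apiserver has finished reconciling its default RBAC roles; it normally stops once the bootstrap bindings exist. If it kept recurring, the default binding could be inspected directly (a sketch, usable only once the cluster stays up):

# Default RBAC objects that grant kube-scheduler its list/watch access
sudo k3s kubectl get clusterrolebinding system:kube-scheduler -o yaml
sudo k3s kubectl get clusterrole system:kube-scheduler -o yaml
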
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.898841 2127 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.909275 2127 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.909810 2127 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:17:47 ubuntu k3s[2127]: time="2019-11-01T21:17:47.910888422Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:17:47 ubuntu k3s[2127]: time="2019-11-01T21:17:47.912161040Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:17:47 ubuntu k3s[2127]: time="2019-11-01T21:17:47.912885867Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:17:47 ubuntu k3s[2127]: time="2019-11-01T21:17:47.913465065Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.914675 2127 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.914850 2127 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.914920 2127 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.915052 2127 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.915550 2127 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.915643 2127 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: I1101 21:17:47.915678 2127 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.915700 2127 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.915810 2127 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: time="2019-11-01T21:17:47.943475634Z" level=info msg="Listening on :6443"
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.943795 2127 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.943868 2127 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.944036 2127 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.944234 2127 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.944301 2127 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.944353 2127 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:47 ubuntu k3s[2127]: E1101 21:17:47.944496 2127 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: time="2019-11-01T21:17:48.454203047Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:17:48 ubuntu k3s[2127]: time="2019-11-01T21:17:48.554312975Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.554599 2127 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.554672 2127 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.554819 2127 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.554928 2127 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.554988 2127 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.555037 2127 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.555177 2127 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: time="2019-11-01T21:17:48.555560463Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:17:48 ubuntu k3s[2127]: time="2019-11-01T21:17:48.555630555Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.555597 2127 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.555815 2127 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.556036 2127 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.556240 2127 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.556336 2127 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.556403 2127 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.556513 2127 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.556840 2127 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.556921 2127 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.557065 2127 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.557192 2127 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.557262 2127 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.557325 2127 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.557463 2127 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.557886 2127 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.557954 2127 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.558077 2127 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.558191 2127 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.558247 2127 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.558297 2127 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.558401 2127 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.558634 2127 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.558683 2127 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.558818 2127 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.558931 2127 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.558988 2127 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.559041 2127 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.559190 2127 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.559419 2127 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.559475 2127 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.559586 2127 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.559706 2127 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.559767 2127 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.559815 2127 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: E1101 21:17:48.559926 2127 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:17:48 ubuntu k3s[2127]: I1101 21:17:48.572166 2127 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:17:48 ubuntu k3s[2127]: I1101 21:17:48.702722 2127 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:17:48 ubuntu k3s[2127]: I1101 21:17:48.702782 2127 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:17:48 ubuntu k3s[2127]: time="2019-11-01T21:17:48.729101829Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:17:48 ubuntu k3s[2127]: time="2019-11-01T21:17:48.729193273Z" level=info msg="Run: k3s kubectl"
Nov 01 21:17:48 ubuntu k3s[2127]: time="2019-11-01T21:17:48.729220143Z" level=info msg="k3s is up and running"
Nov 01 21:17:48 ubuntu k3s[2127]: time="2019-11-01T21:17:48.729579547Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:17:48 ubuntu k3s[2127]: time="2019-11-01T21:17:48.729646528Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:17:48 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:17:48 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:17:48 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:17:53 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:17:53 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 6.
Nov 01 21:17:53 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
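
The unit keeps cycling through the same failure every five seconds because the Restart= policy re-launches it while the cgroup problem is still unfixed. After adding the boot flags and rebooting, the memory controller can be verified before letting k3s try again (a sketch):

# The 'enabled' column (4th) should read 1 for the memory controller
grep memory /proc/cgroups

# On cgroup v1 hosts the hierarchy is also mounted here
ls /sys/fs/cgroup/memory 2>/dev/null | head

# Then watch the service come up cleanly
sudo systemctl restart k3s
sudo journalctl -u k3s -f
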
Nov 01 21:17:53 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:17:54 ubuntu k3s[2166]: time="2019-11-01T21:17:54.563447177Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:17:54 ubuntu k3s[2166]: time="2019-11-01T21:17:54.574663091Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:17:54 ubuntu k3s[2166]: time="2019-11-01T21:17:54.576549166Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:17:54 ubuntu k3s[2166]: time="2019-11-01T21:17:54.618842676Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:17:54 ubuntu k3s[2166]: I1101 21:17:54.620212 2166 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:17:54 ubuntu k3s[2166]: I1101 21:17:54.620899 2166 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:17:54 ubuntu k3s[2166]: I1101 21:17:54.636534 2166 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:54 ubuntu k3s[2166]: I1101 21:17:54.636608 2166 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.639143 2166 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.639277 2166 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.639370 2166 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.639437 2166 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.639508 2166 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.639568 2166 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.639620 2166 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.639672 2166 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.639943 2166 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.640153 2166 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.640224 2166 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: E1101 21:17:54.640279 2166 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:54 ubuntu k3s[2166]: I1101 21:17:54.640342 2166 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:54 ubuntu k3s[2166]: I1101 21:17:54.640368 2166 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:17:54 ubuntu k3s[2166]: I1101 21:17:54.688155 2166 master.go:233] Using reconciler: lease
Nov 01 21:17:55 ubuntu k3s[2166]: W1101 21:17:55.577099 2166 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:17:55 ubuntu k3s[2166]: W1101 21:17:55.610657 2166 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:55 ubuntu k3s[2166]: W1101 21:17:55.629902 2166 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:55 ubuntu k3s[2166]: W1101 21:17:55.634522 2166 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:55 ubuntu k3s[2166]: W1101 21:17:55.645844 2166 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996063 2166 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996181 2166 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996250 2166 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996309 2166 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996378 2166 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996434 2166 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996486 2166 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996547 2166 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996705 2166 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996807 2166 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996869 2166 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: E1101 21:17:57.996922 2166 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:17:57 ubuntu k3s[2166]: I1101 21:17:57.996980 2166 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:17:57 ubuntu k3s[2166]: I1101 21:17:57.997005 2166 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.127438164Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.129772716Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.130188 2166 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.134423 2166 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.135988 2166 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.136545 2166 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.137094 2166 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.137878 2166 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.138351 2166 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.138836 2166 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.139259 2166 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.142716 2166 controller.go:83] Starting OpenAPI controller
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.143361 2166 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.143836 2166 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.144224 2166 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.144613 2166 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.145191 2166 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.145604 2166 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.160532 2166 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.160707 2166 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.167636 2166 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.169092 2166 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:18:03 ubuntu k3s[2166]: W1101 21:18:03.171069 2166 authorization.go:47] Authorization is disabled
Nov 01 21:18:03 ubuntu k3s[2166]: W1101 21:18:03.171170 2166 authentication.go:55] Authentication is disabled
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.171200 2166 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.245109 2166 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.277554 2166 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.278054 2166 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.278365 2166 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.278765 2166 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.278973 2166 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.279154 2166 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.279288 2166 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.280805 2166 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.295223 2166 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.322462 2166 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.328805854Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.329772271Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.330637504Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.331442403Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.332085 2166 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.332169 2166 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.332316 2166 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.332519 2166 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.332589 2166 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.332704 2166 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.332829 2166 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.337244 2166 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.339207 2166 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.339846 2166 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:18:03 ubuntu k3s[2166]: I1101 21:18:03.350007 2166 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.355741491Z" level=info msg="Listening on :6443"
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.356077 2166 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.356149 2166 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.356270 2166 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.356450 2166 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.356516 2166 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.356571 2166 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.356743 2166 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.859289021Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.961375510Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.961564953Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:18:03 ubuntu k3s[2166]: time="2019-11-01T21:18:03.962375945Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.963282 2166 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.963422 2166 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.963938 2166 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.964375 2166 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.964503 2166 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.964732 2166 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.965016 2166 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.965909 2166 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.966024 2166 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.966442 2166 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.967051 2166 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.967170 2166 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.967350 2166 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.967701 2166 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.968226 2166 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.968353 2166 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.968875 2166 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.969323 2166 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.969455 2166 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.969643 2166 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.970087 2166 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.970917 2166 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.971057 2166 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.971581 2166 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.971966 2166 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.972094 2166 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.972284 2166 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.972566 2166 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.973174 2166 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.973814 2166 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.976926 2166 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.979141 2166 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.979298 2166 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.979660 2166 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.980437 2166 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.981849 2166 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.982021 2166 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.984284 2166 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.986617 2166 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.986838 2166 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.987191 2166 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:03 ubuntu k3s[2166]: E1101 21:18:03.987729 2166 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:18:04 ubuntu k3s[2166]: I1101 21:18:04.093729 2166 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:18:04 ubuntu k3s[2166]: I1101 21:18:04.124325 2166 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:18:04 ubuntu k3s[2166]: I1101 21:18:04.124390 2166 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:18:04 ubuntu k3s[2166]: I1101 21:18:04.124428 2166 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:18:04 ubuntu k3s[2166]: I1101 21:18:04.153970 2166 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:18:04 ubuntu k3s[2166]: time="2019-11-01T21:18:04.161384121Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:18:04 ubuntu k3s[2166]: time="2019-11-01T21:18:04.161460361Z" level=info msg="Run: k3s kubectl"
Nov 01 21:18:04 ubuntu k3s[2166]: time="2019-11-01T21:18:04.161484102Z" level=info msg="k3s is up and running"
Nov 01 21:18:04 ubuntu k3s[2166]: time="2019-11-01T21:18:04.161874561Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:18:04 ubuntu k3s[2166]: time="2019-11-01T21:18:04.161960912Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:18:04 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:18:04 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:18:04 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:18:09 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:18:09 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 7.
Nov 01 21:18:09 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
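
Note: the fatal "failed to find memory cgroup" message above is why the k3s main process exits with status=1 and systemd keeps restarting the unit (the restart counter is already at 7); the log below is simply the next start attempt repeating the same sequence. A minimal sketch of the remediation the message itself suggests, assuming the kernel command line lives at /boot/cmdline.txt as on a Raspberry Pi (this path is an assumption; on some distributions, e.g. Ubuntu images for the Pi, the equivalent file usually sits under /boot/firmware/ instead):

    # append the cgroup flags to the single-line kernel command line, then reboot
    sudo sed -i 's/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
    sudo reboot
    # after rebooting, the memory controller should show as enabled
    cat /proc/cgroups

Once the memory cgroup is enabled, restarting the service (systemctl restart k3s) should get past this fatal error instead of looping.
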
Nov 01 21:18:09 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:18:09 ubuntu k3s[2292]: time="2019-11-01T21:18:09.793034915Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:18:09 ubuntu k3s[2292]: time="2019-11-01T21:18:09.804428920Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:18:09 ubuntu k3s[2292]: time="2019-11-01T21:18:09.805813259Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:18:09 ubuntu k3s[2292]: time="2019-11-01T21:18:09.847413056Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:18:09 ubuntu k3s[2292]: I1101 21:18:09.848895 2292 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:18:09 ubuntu k3s[2292]: I1101 21:18:09.849444 2292 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:18:09 ubuntu k3s[2292]: I1101 21:18:09.864254 2292 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:09 ubuntu k3s[2292]: I1101 21:18:09.864329 2292 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.866670 2292 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.866770 2292 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.866835 2292 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.866896 2292 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.866963 2292 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.867021 2292 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.867072 2292 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.867121 2292 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.867225 2292 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.867307 2292 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.867371 2292 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: E1101 21:18:09.867429 2292 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:09 ubuntu k3s[2292]: I1101 21:18:09.867489 2292 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:09 ubuntu k3s[2292]: I1101 21:18:09.867515 2292 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:09 ubuntu k3s[2292]: I1101 21:18:09.909907 2292 master.go:233] Using reconciler: lease
Nov 01 21:18:10 ubuntu k3s[2292]: W1101 21:18:10.635569 2292 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:18:10 ubuntu k3s[2292]: W1101 21:18:10.658483 2292 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:10 ubuntu k3s[2292]: W1101 21:18:10.671742 2292 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:10 ubuntu k3s[2292]: W1101 21:18:10.674595 2292 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:10 ubuntu k3s[2292]: W1101 21:18:10.682135 2292 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879090 2292 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879211 2292 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879283 2292 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879352 2292 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879418 2292 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879477 2292 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879527 2292 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879577 2292 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879684 2292 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879784 2292 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879847 2292 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: E1101 21:18:12.879903 2292 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:12 ubuntu k3s[2292]: I1101 21:18:12.880009 2292 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:12 ubuntu k3s[2292]: I1101 21:18:12.880036 2292 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:17 ubuntu k3s[2292]: time="2019-11-01T21:18:17.951849634Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:18:17 ubuntu k3s[2292]: time="2019-11-01T21:18:17.954066872Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.954851 2292 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.955207 2292 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.958151 2292 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.958291 2292 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.958313 2292 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.959363 2292 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.959413 2292 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.959458 2292 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.959474 2292 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.960098 2292 controller.go:83] Starting OpenAPI controller
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.960170 2292 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.960224 2292 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.960266 2292 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.960305 2292 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.972593 2292 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.972687 2292 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.986185 2292 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:18:17 ubuntu k3s[2292]: I1101 21:18:17.986332 2292 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:18:18 ubuntu k3s[2292]: W1101 21:18:18.000720 2292 authorization.go:47] Authorization is disabled
Nov 01 21:18:18 ubuntu k3s[2292]: W1101 21:18:18.000773 2292 authentication.go:55] Authentication is disabled
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.000802 2292 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.007005 2292 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.008275 2292 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.064591 2292 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.065871 2292 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.076994 2292 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.077206 2292 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.077398 2292 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.077552 2292 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.077731 2292 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.077851 2292 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.077965 2292 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.082179 2292 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.099017 2292 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.101018 2292 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.101218 2292 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.159775 2292 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.164499 2292 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.177818957Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.178762523Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.179305406Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.179782217Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.180403 2292 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.180491 2292 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.180850 2292 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.180984 2292 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.181042 2292 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.181096 2292 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.181212 2292 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.209473015Z" level=info msg="Listening on :6443"
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.209848 2292 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.209928 2292 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.210080 2292 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.210183 2292 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.210244 2292 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.210297 2292 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.210435 2292 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.711938822Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.811228 2292 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.812285500Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.813858 2292 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.814072 2292 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.814319 2292 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.814620 2292 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.814766 2292 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.814886 2292 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.815023308Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.815139418Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.815098 2292 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.816603 2292 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.816820 2292 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.817060 2292 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.817254 2292 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.817390 2292 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.817504 2292 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.817716 2292 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.818183 2292 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.818310 2292 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.818576 2292 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.818777 2292 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.818914 2292 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.819031 2292 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.819245 2292 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.819877 2292 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.819999 2292 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.820229 2292 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.820418 2292 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.820550 2292 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.820728 2292 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.820940 2292 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.821385 2292 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.821496 2292 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.821660 2292 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.821786 2292 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.821866 2292 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.821930 2292 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.822050 2292 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.822283 2292 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.822337 2292 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.822460 2292 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.822553 2292 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.822611 2292 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.822660 2292 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: E1101 21:18:18.822752 2292 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.948791 2292 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.948879 2292 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.948916 2292 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:18:18 ubuntu k3s[2292]: I1101 21:18:18.971091 2292 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.990056735Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.990135420Z" level=info msg="Run: k3s kubectl"
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.990160345Z" level=info msg="k3s is up and running"
Nov 01 21:18:18 ubuntu systemd[1]: Started Lightweight Kubernetes.
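[editor's sketch, not part of the log] The "Wrote kubeconfig" and "Run: k3s kubectl" lines above suggest two ways of talking to the API server once it is up; a minimal sketch, assuming a root/sudo shell (the `get nodes` query is only an example):

    # use the kubectl bundled with k3s
    sudo k3s kubectl get nodes
    # or point a standalone kubectl at the generated kubeconfig
    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    kubectl get nodes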
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.992051495Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:18:18 ubuntu k3s[2292]: time="2019-11-01T21:18:18.992149012Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:18:19 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:18:19 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
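[editor's sketch, not part of the log] Given the exit status above, the failing unit can be inspected with standard systemd tooling; a sketch, assuming the unit is named k3s.service as shown in these lines (the grep pattern is only an illustration):

    systemctl status k3s.service
    journalctl -u k3s.service -b --no-pager | grep -E 'level=(error|fatal)'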
Nov 01 21:18:24 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:18:24 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 8.
Nov 01 21:18:24 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
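[editor's sketch, not part of the log] The fatal message above names the fix itself: the kernel must be booted with the memory cgroup enabled. A minimal sketch of applying it, assuming /boot/cmdline.txt is the boot command line on this image (other distros keep the kernel cmdline elsewhere); the flags must be appended to the single existing line, and a reboot is required before k3s can start:

    # append the cgroup flags to the end of the existing single-line cmdline
    sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
    sudo reboot
    # after the reboot, confirm the memory cgroup is now enabled
    grep memory /proc/cgroups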
Nov 01 21:18:24 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:18:24 ubuntu k3s[2332]: time="2019-11-01T21:18:24.798736232Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:18:24 ubuntu k3s[2332]: time="2019-11-01T21:18:24.810506363Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:18:24 ubuntu k3s[2332]: time="2019-11-01T21:18:24.811958072Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:18:24 ubuntu k3s[2332]: time="2019-11-01T21:18:24.853161171Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:18:24 ubuntu k3s[2332]: I1101 21:18:24.854503 2332 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:18:24 ubuntu k3s[2332]: I1101 21:18:24.855047 2332 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:18:24 ubuntu k3s[2332]: I1101 21:18:24.870181 2332 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:24 ubuntu k3s[2332]: I1101 21:18:24.870248 2332 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.872761 2332 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.872858 2332 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.872927 2332 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.872986 2332 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.873063 2332 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.873125 2332 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.873176 2332 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.873228 2332 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.873443 2332 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.873599 2332 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.873664 2332 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: E1101 21:18:24.873720 2332 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:24 ubuntu k3s[2332]: I1101 21:18:24.873799 2332 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:24 ubuntu k3s[2332]: I1101 21:18:24.873826 2332 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:24 ubuntu k3s[2332]: I1101 21:18:24.917638 2332 master.go:233] Using reconciler: lease
Nov 01 21:18:25 ubuntu k3s[2332]: W1101 21:18:25.681275 2332 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:18:25 ubuntu k3s[2332]: W1101 21:18:25.704499 2332 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:25 ubuntu k3s[2332]: W1101 21:18:25.717563 2332 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:25 ubuntu k3s[2332]: W1101 21:18:25.720411 2332 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:25 ubuntu k3s[2332]: W1101 21:18:25.727921 2332 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.961578 2332 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.961695 2332 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.961768 2332 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.961828 2332 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.961904 2332 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.961962 2332 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.962020 2332 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.962072 2332 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.962176 2332 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.962275 2332 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.962340 2332 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: E1101 21:18:27.962396 2332 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:27 ubuntu k3s[2332]: I1101 21:18:27.962457 2332 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:27 ubuntu k3s[2332]: I1101 21:18:27.962480 2332 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.075404612Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.076461603Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.078332 2332 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.079889 2332 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.080105 2332 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.080270 2332 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.080376 2332 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.080397 2332 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.080723 2332 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.081111 2332 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.081145 2332 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.102892 2332 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.102952 2332 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.103025 2332 controller.go:83] Starting OpenAPI controller
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.103070 2332 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.103128 2332 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.103205 2332 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.103252 2332 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.107815 2332 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.109233 2332 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.126848 2332 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.127014 2332 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:18:33 ubuntu k3s[2332]: W1101 21:18:33.131149 2332 authorization.go:47] Authorization is disabled
Nov 01 21:18:33 ubuntu k3s[2332]: W1101 21:18:33.131218 2332 authentication.go:55] Authentication is disabled
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.131247 2332 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.214234 2332 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.241587 2332 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.241775 2332 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.241917 2332 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.242041 2332 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.242197 2332 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.250960 2332 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.258093 2332 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.258328 2332 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.258485 2332 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.258613 2332 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.280523 2332 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.282741 2332 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.294814 2332 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.282741 2332 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.296803230Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.299489501Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.300971303Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.302876470Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.304558 2332 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.305396 2332 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.306215 2332 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.307367 2332 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.308146 2332 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.309231 2332 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.309511 2332 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.327677667Z" level=info msg="Listening on :6443"
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.328058 2332 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.328138 2332 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.328285 2332 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.328502 2332 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.328580 2332 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.328670 2332 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.328811 2332 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.831236969Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.931361839Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.931715 2332 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.932403 2332 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.932680 2332 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.932812 2332 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.932873 2332 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.932887954Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:18:33 ubuntu k3s[2332]: time="2019-11-01T21:18:33.932951935Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.932926 2332 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.933644 2332 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.934045 2332 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.934158 2332 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.934267 2332 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.934374 2332 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.934434 2332 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.934485 2332 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.934597 2332 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.934826 2332 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.934880 2332 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.935024 2332 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.935144 2332 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.935208 2332 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.935258 2332 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.935355 2332 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.935728 2332 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.935807 2332 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.935940 2332 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.936063 2332 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.936122 2332 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.936171 2332 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.936281 2332 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.936537 2332 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.936607 2332 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.936774 2332 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.936897 2332 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.936961 2332 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.937009 2332 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.937126 2332 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.937378 2332 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: I1101 21:18:33.937417 2332 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.937439 2332 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.937555 2332 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.937667 2332 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.937724 2332 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.937775 2332 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:33 ubuntu k3s[2332]: E1101 21:18:33.937890 2332 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:18:34 ubuntu k3s[2332]: I1101 21:18:34.072528 2332 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:18:34 ubuntu k3s[2332]: I1101 21:18:34.072593 2332 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:18:34 ubuntu k3s[2332]: I1101 21:18:34.072689 2332 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:18:34 ubuntu k3s[2332]: I1101 21:18:34.093790 2332 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:18:34 ubuntu k3s[2332]: time="2019-11-01T21:18:34.100734753Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:18:34 ubuntu k3s[2332]: time="2019-11-01T21:18:34.100825364Z" level=info msg="Run: k3s kubectl"
Nov 01 21:18:34 ubuntu k3s[2332]: time="2019-11-01T21:18:34.100851049Z" level=info msg="k3s is up and running"
Nov 01 21:18:34 ubuntu systemd[1]: Started Lightweight Kubernetes.
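[editor's note] Had startup stayed up past this point, the kubeconfig written above at /etc/rancher/k3s/k3s.yaml could be used to verify the cluster. A minimal sketch using the commands the log itself suggests (either the bundled kubectl or a standalone one pointed at the generated kubeconfig):

    # Use the bundled client, per the "Run: k3s kubectl" line above
    sudo k3s kubectl get nodes

    # Or point a standalone kubectl at the generated kubeconfig
    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    kubectl get nodes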
Nov 01 21:18:34 ubuntu k3s[2332]: time="2019-11-01T21:18:34.104155722Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:18:34 ubuntu k3s[2332]: time="2019-11-01T21:18:34.104250776Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:18:34 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:18:34 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:18:39 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:18:39 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 9.
Nov 01 21:18:39 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
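[editor's note] The fatal message above names its own fix: the kernel was booted without the memory cgroup controller, so k3s exits and systemd keeps restarting it (restart counter 9 here). A minimal sketch of applying the suggested change on a Raspberry Pi, assuming the /boot/cmdline.txt path quoted in the error (other distros keep the kernel cmdline elsewhere, e.g. in the bootloader config):

    # Back up the single-line kernel cmdline, then append the flags the error asks for
    sudo cp /boot/cmdline.txt /boot/cmdline.txt.bak
    sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt

    # Reboot so the kernel picks up the new cmdline, then re-check the unit
    sudo reboot
    # ...after reboot:
    sudo systemctl status k3s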
Nov 01 21:18:39 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:18:39 ubuntu k3s[2422]: time="2019-11-01T21:18:39.795447350Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:18:39 ubuntu k3s[2422]: time="2019-11-01T21:18:39.807027261Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:18:39 ubuntu k3s[2422]: time="2019-11-01T21:18:39.808365008Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:18:39 ubuntu k3s[2422]: time="2019-11-01T21:18:39.849780681Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:18:39 ubuntu k3s[2422]: I1101 21:18:39.851083 2422 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:18:39 ubuntu k3s[2422]: I1101 21:18:39.851621 2422 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:18:39 ubuntu k3s[2422]: I1101 21:18:39.866502 2422 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:39 ubuntu k3s[2422]: I1101 21:18:39.866570 2422 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.868932 2422 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869029 2422 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869093 2422 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869199 2422 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869284 2422 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869345 2422 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869433 2422 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869501 2422 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869583 2422 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869666 2422 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869750 2422 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: E1101 21:18:39.869817 2422 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:39 ubuntu k3s[2422]: I1101 21:18:39.869894 2422 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:39 ubuntu k3s[2422]: I1101 21:18:39.869930 2422 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:39 ubuntu k3s[2422]: I1101 21:18:39.912129 2422 master.go:233] Using reconciler: lease
Nov 01 21:18:40 ubuntu k3s[2422]: W1101 21:18:40.604106 2422 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:18:40 ubuntu k3s[2422]: W1101 21:18:40.627097 2422 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:40 ubuntu k3s[2422]: W1101 21:18:40.640135 2422 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:40 ubuntu k3s[2422]: W1101 21:18:40.642987 2422 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:40 ubuntu k3s[2422]: W1101 21:18:40.650548 2422 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.846307 2422 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.846429 2422 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.846512 2422 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.846571 2422 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.846636 2422 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.846694 2422 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.846745 2422 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.846796 2422 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.846910 2422 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.847009 2422 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.847073 2422 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: E1101 21:18:42.847128 2422 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:42 ubuntu k3s[2422]: I1101 21:18:42.847190 2422 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:42 ubuntu k3s[2422]: I1101 21:18:42.847215 2422 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:47 ubuntu k3s[2422]: time="2019-11-01T21:18:47.877280351Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:18:47 ubuntu k3s[2422]: time="2019-11-01T21:18:47.878973057Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.880149 2422 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.881822 2422 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.882622 2422 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.882728 2422 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.882749 2422 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.882850 2422 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.882869 2422 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.882972 2422 controller.go:83] Starting OpenAPI controller
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.883016 2422 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.883057 2422 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.883097 2422 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.883136 2422 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.883201 2422 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.883230 2422 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.892878 2422 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.892941 2422 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.900005 2422 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.901366 2422 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.917548 2422 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.917695 2422 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:18:47 ubuntu k3s[2422]: W1101 21:18:47.922570 2422 authorization.go:47] Authorization is disabled
Nov 01 21:18:47 ubuntu k3s[2422]: W1101 21:18:47.923026 2422 authentication.go:55] Authentication is disabled
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.923320 2422 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:18:47 ubuntu k3s[2422]: I1101 21:18:47.991229 2422 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:18:48 ubuntu k3s[2422]: I1101 21:18:48.084810 2422 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:18:48 ubuntu k3s[2422]: I1101 21:18:48.086104 2422 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.092019 2422 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.102478887Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:18:48 ubuntu k3s[2422]: I1101 21:18:48.103334 2422 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.103687635Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.104257000Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.104835050Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.105490 2422 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.105586 2422 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.105749 2422 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.105967 2422 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.106036 2422 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.106087 2422 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.106200 2422 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.127692062Z" level=info msg="Listening on :6443"
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.128205 2422 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.128280 2422 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.128420 2422 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.128529 2422 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.128596 2422 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.128686 2422 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.128825 2422 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.630574688Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.730841413Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.731790 2422 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.731975 2422 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.732313 2422 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.732743 2422 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.732928 2422 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.733175 2422 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.733427 2422 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.734287 2422 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.734438 2422 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.734674 2422 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.734940 2422 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.735093 2422 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.735236 2422 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.735556 2422 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.736131 2422 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.736271 2422 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.736502 2422 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.736827 2422 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.736988 2422 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.737133 2422 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.737348 2422 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.738177 2422 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.738320 2422 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.738554 2422 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.738813 2422 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.738969 2422 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.739100 2422 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.739306 2422 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.739830 2422 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.739966 2422 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.740200 2422 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.740406 2422 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.740552 2422 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.740737 2422 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.740974 2422 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.741414 2422 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.741527 2422 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.741742 2422 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.741923 2422 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.742843 2422 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.743005 2422 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:18:48 ubuntu k3s[2422]: E1101 21:18:48.743227 2422 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
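
The recurring "is not a valid metric name" errors above are non-fatal: Prometheus requires metric names to match [a-zA-Z_:][a-zA-Z0-9_:]*, and the auto-generated workqueue names such as "/v1, Kind=Service_depth" contain slashes, spaces, commas and equals signs, so these deprecated per-queue metrics simply fail to register while k3s continues starting up. A minimal Python sketch of that check (the regex is Prometheus' documented metric-name pattern; the script is illustrative only, not part of k3s):

    import re

    # Prometheus metric-name pattern; names that do not match are rejected
    # at registration time, which is exactly what the errors above report.
    VALID_METRIC_NAME = re.compile(r"^[a-zA-Z_:][a-zA-Z0-9_:]*$")

    for name in ("/v1, Kind=Service_depth", "workqueue_depth"):
        verdict = "valid" if VALID_METRIC_NAME.match(name) else "not a valid metric name"
        print(f"{name!r}: {verdict}")
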
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.744000570Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.744127068Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:18:48 ubuntu k3s[2422]: I1101 21:18:48.874801 2422 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:18:48 ubuntu k3s[2422]: I1101 21:18:48.874877 2422 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:18:48 ubuntu k3s[2422]: I1101 21:18:48.879407 2422 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:18:48 ubuntu k3s[2422]: I1101 21:18:48.892305 2422 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.913181455Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.913270695Z" level=info msg="Run: k3s kubectl"
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.913296899Z" level=info msg="k3s is up and running"
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.913896115Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:18:48 ubuntu k3s[2422]: time="2019-11-01T21:18:48.913980726Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:18:48 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:18:48 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:18:48 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:18:53 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:18:53 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 10.
Nov 01 21:18:53 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
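
This is the crash loop itself: each attempt ends with the fatal "failed to find memory cgroup" message, systemd restarts the unit after RestartSec=5s, and the restart counter keeps climbing. The log's own suggestion is the fix: add cgroup_memory=1 cgroup_enable=memory to the kernel command line (/boot/cmdline.txt on a Raspberry Pi, kept on a single line) and reboot. A minimal Python sketch, assuming /proc/cgroups is readable, that confirms whether the memory controller is enabled before retrying (illustrative only, not the check k3s itself runs):

    from pathlib import Path

    def memory_cgroup_enabled(path="/proc/cgroups"):
        # /proc/cgroups columns: #subsys_name  hierarchy  num_cgroups  enabled
        for line in Path(path).read_text().splitlines():
            if line.startswith("memory"):
                return line.split()[-1] == "1"
        return False  # memory controller not listed at all

    if __name__ == "__main__":
        if memory_cgroup_enabled():
            print("memory cgroup is enabled")
        else:
            print("memory cgroup is disabled; append 'cgroup_memory=1 cgroup_enable=memory'")
            print("to the kernel cmdline (/boot/cmdline.txt on a Raspberry Pi) and reboot")
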
Nov 01 21:18:53 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:18:54 ubuntu k3s[2444]: time="2019-11-01T21:18:54.550196499Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:18:54 ubuntu k3s[2444]: time="2019-11-01T21:18:54.561752393Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:18:54 ubuntu k3s[2444]: time="2019-11-01T21:18:54.563210361Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:18:54 ubuntu k3s[2444]: time="2019-11-01T21:18:54.605038643Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:18:54 ubuntu k3s[2444]: I1101 21:18:54.606323 2444 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:18:54 ubuntu k3s[2444]: I1101 21:18:54.606867 2444 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:18:54 ubuntu k3s[2444]: I1101 21:18:54.621681 2444 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:54 ubuntu k3s[2444]: I1101 21:18:54.621750 2444 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624113 2444 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624210 2444 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624277 2444 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624337 2444 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624434 2444 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624497 2444 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624550 2444 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624673 2444 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624776 2444 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624863 2444 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624941 2444 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: E1101 21:18:54.624999 2444 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:54 ubuntu k3s[2444]: I1101 21:18:54.625059 2444 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:54 ubuntu k3s[2444]: I1101 21:18:54.625086 2444 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:18:54 ubuntu k3s[2444]: I1101 21:18:54.667020 2444 master.go:233] Using reconciler: lease
Nov 01 21:18:55 ubuntu k3s[2444]: W1101 21:18:55.406482 2444 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:18:55 ubuntu k3s[2444]: W1101 21:18:55.429595 2444 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:55 ubuntu k3s[2444]: W1101 21:18:55.442724 2444 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:55 ubuntu k3s[2444]: W1101 21:18:55.445691 2444 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:55 ubuntu k3s[2444]: W1101 21:18:55.453194 2444 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653163 2444 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653283 2444 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653350 2444 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653426 2444 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653494 2444 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653552 2444 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653604 2444 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653654 2444 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653766 2444 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653864 2444 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653927 2444 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: E1101 21:18:57.653981 2444 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:18:57 ubuntu k3s[2444]: I1101 21:18:57.654040 2444 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:18:57 ubuntu k3s[2444]: I1101 21:18:57.654064 2444 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:02 ubuntu k3s[2444]: time="2019-11-01T21:19:02.703293488Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:19:02 ubuntu k3s[2444]: time="2019-11-01T21:19:02.705732244Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.706187 2444 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.706356 2444 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.706605 2444 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.706640 2444 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.706735 2444 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.706753 2444 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.707999 2444 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.708104 2444 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.708123 2444 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.732697 2444 controller.go:83] Starting OpenAPI controller
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.732804 2444 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.732849 2444 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.732894 2444 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.732935 2444 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.732984 2444 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.733004 2444 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.739022 2444 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.739174 2444 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.746984 2444 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.748161 2444 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:19:02 ubuntu k3s[2444]: W1101 21:19:02.750454 2444 authorization.go:47] Authorization is disabled
Nov 01 21:19:02 ubuntu k3s[2444]: W1101 21:19:02.750521 2444 authentication.go:55] Authentication is disabled
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.750552 2444 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.800052 2444 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.800944 2444 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.801186 2444 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.801230 2444 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.814237 2444 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.822332 2444 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.825190 2444 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.825397 2444 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.825510 2444 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.825633 2444 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.825752 2444 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.834568 2444 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.846122 2444 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:19:02 ubuntu k3s[2444]: time="2019-11-01T21:19:02.916257404Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:19:02 ubuntu k3s[2444]: time="2019-11-01T21:19:02.917388931Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:19:02 ubuntu k3s[2444]: time="2019-11-01T21:19:02.917999888Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:19:02 ubuntu k3s[2444]: time="2019-11-01T21:19:02.918548032Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.919209 2444 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.919333 2444 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.919537 2444 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.919680 2444 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.919739 2444 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.919791 2444 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.919908 2444 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: I1101 21:19:02.921446 2444 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.928461 2444 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:19:02 ubuntu k3s[2444]: time="2019-11-01T21:19:02.940497626Z" level=info msg="Listening on :6443"
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.940913 2444 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.940992 2444 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.941121 2444 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.941250 2444 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.941318 2444 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.941369 2444 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:02 ubuntu k3s[2444]: E1101 21:19:02.941523 2444 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: time="2019-11-01T21:19:03.443325939Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:19:03 ubuntu k3s[2444]: time="2019-11-01T21:19:03.543523871Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.544503 2444 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.544706 2444 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: time="2019-11-01T21:19:03.544834045Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:19:03 ubuntu k3s[2444]: time="2019-11-01T21:19:03.544901803Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.544993 2444 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.545102 2444 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.545164 2444 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.545214 2444 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.545324 2444 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.545783 2444 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.545849 2444 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546035 2444 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546150 2444 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546208 2444 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546255 2444 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546347 2444 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546556 2444 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546605 2444 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546708 2444 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546801 2444 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546861 2444 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.546912 2444 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.547013 2444 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.547390 2444 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.547452 2444 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.547578 2444 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.548037 2444 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.548161 2444 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.548222 2444 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.548332 2444 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.548569 2444 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.548665 2444 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.548796 2444 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.548897 2444 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.548951 2444 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.548997 2444 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.549098 2444 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: I1101 21:19:03.548616 2444 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.549320 2444 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.549373 2444 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.549554 2444 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.549673 2444 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.549730 2444 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.549777 2444 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: E1101 21:19:03.549881 2444 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:19:03 ubuntu k3s[2444]: I1101 21:19:03.700399 2444 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:19:03 ubuntu k3s[2444]: I1101 21:19:03.700467 2444 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:19:03 ubuntu k3s[2444]: time="2019-11-01T21:19:03.710617394Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:19:03 ubuntu k3s[2444]: time="2019-11-01T21:19:03.710695986Z" level=info msg="Run: k3s kubectl"
Nov 01 21:19:03 ubuntu k3s[2444]: time="2019-11-01T21:19:03.710720911Z" level=info msg="k3s is up and running"
Nov 01 21:19:03 ubuntu k3s[2444]: time="2019-11-01T21:19:03.711071093Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:19:03 ubuntu k3s[2444]: time="2019-11-01T21:19:03.711140482Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:19:03 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:19:03 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:19:03 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:19:08 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:19:08 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 11.
Nov 01 21:19:08 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
Nov 01 21:19:08 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:19:09 ubuntu k3s[2464]: time="2019-11-01T21:19:09.547956417Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:19:09 ubuntu k3s[2464]: time="2019-11-01T21:19:09.559865493Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:19:09 ubuntu k3s[2464]: time="2019-11-01T21:19:09.561268443Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:19:09 ubuntu k3s[2464]: time="2019-11-01T21:19:09.602980338Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:19:09 ubuntu k3s[2464]: I1101 21:19:09.604275 2464 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:19:09 ubuntu k3s[2464]: I1101 21:19:09.604865 2464 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:19:09 ubuntu k3s[2464]: I1101 21:19:09.619722 2464 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:09 ubuntu k3s[2464]: I1101 21:19:09.619796 2464 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.622285 2464 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.622393 2464 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.622461 2464 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.622544 2464 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.622614 2464 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.622673 2464 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.622727 2464 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.622779 2464 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.622980 2464 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.623144 2464 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.623211 2464 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: E1101 21:19:09.623264 2464 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:09 ubuntu k3s[2464]: I1101 21:19:09.623343 2464 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:09 ubuntu k3s[2464]: I1101 21:19:09.623385 2464 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:09 ubuntu k3s[2464]: I1101 21:19:09.666990 2464 master.go:233] Using reconciler: lease
Nov 01 21:19:10 ubuntu k3s[2464]: W1101 21:19:10.471455 2464 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:19:10 ubuntu k3s[2464]: W1101 21:19:10.494256 2464 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:10 ubuntu k3s[2464]: W1101 21:19:10.507707 2464 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:10 ubuntu k3s[2464]: W1101 21:19:10.510520 2464 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:10 ubuntu k3s[2464]: W1101 21:19:10.518024 2464 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.716251 2464 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.716372 2464 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.716443 2464 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.716501 2464 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.716565 2464 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.716647 2464 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.716707 2464 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.716758 2464 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.716869 2464 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.716974 2464 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.717037 2464 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: E1101 21:19:12.717091 2464 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:12 ubuntu k3s[2464]: I1101 21:19:12.717149 2464 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:12 ubuntu k3s[2464]: I1101 21:19:12.717174 2464 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:17 ubuntu k3s[2464]: time="2019-11-01T21:19:17.762810383Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:19:17 ubuntu k3s[2464]: time="2019-11-01T21:19:17.764763921Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.765724 2464 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.765921 2464 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.765946 2464 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.766884 2464 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.766936 2464 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.767063 2464 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.768541 2464 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.768607 2464 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.769824 2464 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.769928 2464 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.769948 2464 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.783525 2464 controller.go:83] Starting OpenAPI controller
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.783628 2464 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.783671 2464 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.783720 2464 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.783764 2464 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.796500 2464 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.797739 2464 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.801301 2464 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.801410 2464 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:19:17 ubuntu k3s[2464]: W1101 21:19:17.803507 2464 authorization.go:47] Authorization is disabled
Nov 01 21:19:17 ubuntu k3s[2464]: W1101 21:19:17.803562 2464 authentication.go:55] Authentication is disabled
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.803588 2464 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.849487 2464 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.858360 2464 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.862462 2464 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.863071 2464 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.863691 2464 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.870276 2464 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.871765 2464 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.888300 2464 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.888859 2464 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.889025 2464 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.889200 2464 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.889342 2464 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:19:17 ubuntu k3s[2464]: time="2019-11-01T21:19:17.962612260Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:19:17 ubuntu k3s[2464]: time="2019-11-01T21:19:17.963519307Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:19:17 ubuntu k3s[2464]: time="2019-11-01T21:19:17.964198968Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:19:17 ubuntu k3s[2464]: time="2019-11-01T21:19:17.965218347Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.966342 2464 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.966893 2464 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.967640 2464 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.968266 2464 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.968361 2464 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.968421 2464 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.968725 2464 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: I1101 21:19:17.968612 2464 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.983181 2464 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:19:17 ubuntu k3s[2464]: time="2019-11-01T21:19:17.988407524Z" level=info msg="Listening on :6443"
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.989719 2464 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.990587 2464 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.991408 2464 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.992527 2464 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.993166 2464 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.993298 2464 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:17 ubuntu k3s[2464]: E1101 21:19:17.993436 2464 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: I1101 21:19:18.069093 2464 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:19:18 ubuntu k3s[2464]: time="2019-11-01T21:19:18.495394538Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:19:18 ubuntu k3s[2464]: time="2019-11-01T21:19:18.595579640Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.595886 2464 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.596535 2464 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.596762 2464 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.596913 2464 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: time="2019-11-01T21:19:18.596905017Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.596973 2464 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: time="2019-11-01T21:19:18.596965331Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.597025 2464 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.597145 2464 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.597574 2464 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.597646 2464 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.597825 2464 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.597936 2464 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.597994 2464 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.598044 2464 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.598152 2464 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.598379 2464 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.598433 2464 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.598561 2464 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.598700 2464 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.598774 2464 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.598834 2464 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.598940 2464 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.599333 2464 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.599399 2464 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.599577 2464 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.599735 2464 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.599812 2464 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.599870 2464 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.599987 2464 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.600258 2464 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.600316 2464 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.600439 2464 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.600539 2464 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.600600 2464 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.600695 2464 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.600806 2464 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.601034 2464 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.601093 2464 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.601219 2464 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.601327 2464 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.601395 2464 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.601451 2464 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: E1101 21:19:18.601553 2464 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:19:18 ubuntu k3s[2464]: I1101 21:19:18.643337 2464 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:19:18 ubuntu k3s[2464]: I1101 21:19:18.760077 2464 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:19:18 ubuntu k3s[2464]: I1101 21:19:18.760141 2464 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:19:18 ubuntu k3s[2464]: time="2019-11-01T21:19:18.762752945Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:19:18 ubuntu k3s[2464]: time="2019-11-01T21:19:18.762832556Z" level=info msg="Run: k3s kubectl"
Nov 01 21:19:18 ubuntu k3s[2464]: time="2019-11-01T21:19:18.762856370Z" level=info msg="k3s is up and running"
Nov 01 21:19:18 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:19:18 ubuntu k3s[2464]: time="2019-11-01T21:19:18.764297505Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:19:18 ubuntu k3s[2464]: time="2019-11-01T21:19:18.764382838Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:19:18 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:19:18 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:19:23 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:19:23 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 12.
Nov 01 21:19:23 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
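[Annotation, not journal output] The fatal "failed to find memory cgroup" message above is why systemd keeps cycling k3s.service (the restart counter is already at 12 here); every attempt gets as far as "k3s is up and running" and then exits with status=1. A minimal remediation sketch, assuming the Raspberry Pi-style boot setup the error message itself points at; the kernel cmdline path is an assumption and varies by image (Raspbian uses /boot/cmdline.txt, Ubuntu images for the Pi typically keep the file under /boot/firmware/):

    # Check whether the memory cgroup controller is enabled; a 0 in the last
    # column of the "memory" row means it is disabled at boot.
    grep memory /proc/cgroups

    # Append the flags the error message asks for to the single-line kernel
    # cmdline (adjust the path to match your image), then reboot.
    sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
    sudo reboot

After a reboot with those flags in place, k3s v0.9.1 should get past this check instead of exiting and re-triggering the RestartSec=5s loop seen below.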
Nov 01 21:19:23 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:19:24 ubuntu k3s[2508]: time="2019-11-01T21:19:24.548665184Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:19:24 ubuntu k3s[2508]: time="2019-11-01T21:19:24.560346058Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:19:24 ubuntu k3s[2508]: time="2019-11-01T21:19:24.561657713Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:19:24 ubuntu k3s[2508]: time="2019-11-01T21:19:24.602730023Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:19:24 ubuntu k3s[2508]: I1101 21:19:24.604000 2508 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:19:24 ubuntu k3s[2508]: I1101 21:19:24.604540 2508 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:19:24 ubuntu k3s[2508]: I1101 21:19:24.619684 2508 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:24 ubuntu k3s[2508]: I1101 21:19:24.619754 2508 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.622245 2508 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.622341 2508 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.622406 2508 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.622465 2508 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.622534 2508 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.622599 2508 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.622653 2508 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.622704 2508 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.622934 2508 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.623082 2508 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.623151 2508 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: E1101 21:19:24.623206 2508 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:24 ubuntu k3s[2508]: I1101 21:19:24.623296 2508 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:24 ubuntu k3s[2508]: I1101 21:19:24.623324 2508 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:24 ubuntu k3s[2508]: I1101 21:19:24.666910 2508 master.go:233] Using reconciler: lease
Nov 01 21:19:25 ubuntu k3s[2508]: W1101 21:19:25.391527 2508 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:19:25 ubuntu k3s[2508]: W1101 21:19:25.414393 2508 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:25 ubuntu k3s[2508]: W1101 21:19:25.427415 2508 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:25 ubuntu k3s[2508]: W1101 21:19:25.430260 2508 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:25 ubuntu k3s[2508]: W1101 21:19:25.437739 2508 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.639449 2508 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.639574 2508 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.639645 2508 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.639709 2508 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.639772 2508 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.639830 2508 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.639881 2508 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.639930 2508 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.640044 2508 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.640142 2508 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.640205 2508 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: E1101 21:19:27.640260 2508 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:27 ubuntu k3s[2508]: I1101 21:19:27.640320 2508 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:27 ubuntu k3s[2508]: I1101 21:19:27.640344 2508 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:32 ubuntu k3s[2508]: time="2019-11-01T21:19:32.703824910Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:19:32 ubuntu k3s[2508]: time="2019-11-01T21:19:32.704881752Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.706772 2508 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.707002 2508 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.707029 2508 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.707356 2508 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.707397 2508 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.707472 2508 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.707554 2508 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.707576 2508 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.708857 2508 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.708918 2508 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.708945 2508 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.709041 2508 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.709085 2508 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.709125 2508 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.708872 2508 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.708921 2508 controller.go:83] Starting OpenAPI controller
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.725803 2508 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.726958 2508 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.743316 2508 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.743455 2508 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:19:32 ubuntu k3s[2508]: W1101 21:19:32.745961 2508 authorization.go:47] Authorization is disabled
Nov 01 21:19:32 ubuntu k3s[2508]: W1101 21:19:32.746019 2508 authentication.go:55] Authentication is disabled
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.746047 2508 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.814389 2508 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.842812 2508 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.843783 2508 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.844837 2508 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.845674 2508 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.846480 2508 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.847322 2508 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.848476 2508 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.863077 2508 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.863999 2508 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.864961 2508 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:19:32 ubuntu k3s[2508]: time="2019-11-01T21:19:32.885199516Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:19:32 ubuntu k3s[2508]: time="2019-11-01T21:19:32.886375654Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:19:32 ubuntu k3s[2508]: time="2019-11-01T21:19:32.887013222Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:19:32 ubuntu k3s[2508]: time="2019-11-01T21:19:32.887953621Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.890264 2508 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.890939 2508 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.891572 2508 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.892130 2508 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.892669 2508 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.893539 2508 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.894455 2508 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.908254 2508 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:19:32 ubuntu k3s[2508]: time="2019-11-01T21:19:32.913900218Z" level=info msg="Listening on :6443"
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.914301 2508 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.914389 2508 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.914530 2508 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.914633 2508 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.914698 2508 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.914753 2508 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.914861 2508 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:19:32 ubuntu k3s[2508]: E1101 21:19:32.922590 2508 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.923791 2508 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:19:32 ubuntu k3s[2508]: I1101 21:19:32.925280 2508 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:19:33 ubuntu k3s[2508]: time="2019-11-01T21:19:33.416613322Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:19:33 ubuntu k3s[2508]: time="2019-11-01T21:19:33.516825187Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.517082 2508 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.517479 2508 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.517731 2508 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: time="2019-11-01T21:19:33.517987195Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:19:33 ubuntu k3s[2508]: time="2019-11-01T21:19:33.518058472Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.518167 2508 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.518364 2508 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.518501 2508 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.518764 2508 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.519393 2508 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.519486 2508 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.519674 2508 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.520032 2508 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.520176 2508 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.520254 2508 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.520535 2508 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.521152 2508 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.521241 2508 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.521365 2508 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.521489 2508 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.521552 2508 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.521603 2508 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.521702 2508 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.522064 2508 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.522175 2508 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.522311 2508 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.522457 2508 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.522521 2508 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.522569 2508 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.522681 2508 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.523047 2508 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.523109 2508 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.523273 2508 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.523451 2508 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.523513 2508 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.523573 2508 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.523674 2508 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.523885 2508 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.523940 2508 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.524078 2508 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.524186 2508 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.524244 2508 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.524294 2508 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: E1101 21:19:33.524389 2508 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:19:33 ubuntu k3s[2508]: I1101 21:19:33.537960 2508 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:19:33 ubuntu k3s[2508]: time="2019-11-01T21:19:33.680135717Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:19:33 ubuntu k3s[2508]: time="2019-11-01T21:19:33.680263864Z" level=info msg="Run: k3s kubectl"
Nov 01 21:19:33 ubuntu k3s[2508]: time="2019-11-01T21:19:33.680321623Z" level=info msg="k3s is up and running"
Nov 01 21:19:33 ubuntu k3s[2508]: time="2019-11-01T21:19:33.681022394Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:19:33 ubuntu k3s[2508]: time="2019-11-01T21:19:33.681156634Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:19:33 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:19:33 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:19:33 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
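
The fatal message above is the reason for the restart loop that follows: the kubelet needs the memory cgroup controller, which Raspberry Pi kernels ship disabled by default. A minimal sketch of the fix the log itself suggests; the file holding the kernel command line varies by image (assumption: /boot/cmdline.txt on Raspbian, /boot/firmware/cmdline.txt or nobtcmd.txt on Ubuntu Raspberry Pi images), and the flags must be appended to the existing single line before rebooting:

    # append the cgroup flags to the end of the kernel command line (adjust the path for your image)
    sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt
    sudo reboot
    # after the reboot, confirm the memory controller is enabled (last column should be 1)
    grep memory /proc/cgroups
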
Nov 01 21:19:38 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:19:38 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 13.
Nov 01 21:19:38 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
Nov 01 21:19:38 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:19:39 ubuntu k3s[2529]: time="2019-11-01T21:19:39.292095723Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:19:39 ubuntu k3s[2529]: time="2019-11-01T21:19:39.303892746Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:19:39 ubuntu k3s[2529]: time="2019-11-01T21:19:39.305259789Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:19:39 ubuntu k3s[2529]: time="2019-11-01T21:19:39.346832762Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:19:39 ubuntu k3s[2529]: I1101 21:19:39.348192 2529 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:19:39 ubuntu k3s[2529]: I1101 21:19:39.348795 2529 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:19:39 ubuntu k3s[2529]: I1101 21:19:39.363653 2529 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:39 ubuntu k3s[2529]: I1101 21:19:39.363726 2529 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366075 2529 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366175 2529 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366244 2529 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366304 2529 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366376 2529 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366426 2529 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366478 2529 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366551 2529 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366645 2529 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366725 2529 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366791 2529 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: E1101 21:19:39.366845 2529 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:39 ubuntu k3s[2529]: I1101 21:19:39.366914 2529 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:39 ubuntu k3s[2529]: I1101 21:19:39.366942 2529 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:39 ubuntu k3s[2529]: I1101 21:19:39.409350 2529 master.go:233] Using reconciler: lease
Nov 01 21:19:40 ubuntu k3s[2529]: W1101 21:19:40.154953 2529 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:19:40 ubuntu k3s[2529]: W1101 21:19:40.177840 2529 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:40 ubuntu k3s[2529]: W1101 21:19:40.191115 2529 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:40 ubuntu k3s[2529]: W1101 21:19:40.193974 2529 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:40 ubuntu k3s[2529]: W1101 21:19:40.201500 2529 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.409885 2529 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410003 2529 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410077 2529 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410201 2529 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410267 2529 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410334 2529 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410385 2529 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410435 2529 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410572 2529 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410672 2529 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410736 2529 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: E1101 21:19:42.410792 2529 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:42 ubuntu k3s[2529]: I1101 21:19:42.410856 2529 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:42 ubuntu k3s[2529]: I1101 21:19:42.410882 2529 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:47 ubuntu k3s[2529]: time="2019-11-01T21:19:47.460490411Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:19:47 ubuntu k3s[2529]: time="2019-11-01T21:19:47.462338699Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.463243 2529 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.463405 2529 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.463429 2529 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.463673 2529 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.463701 2529 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.463747 2529 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.465896 2529 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.467088 2529 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.467143 2529 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.481132 2529 controller.go:83] Starting OpenAPI controller
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.481749 2529 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.482103 2529 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.482435 2529 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.482971 2529 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.483358 2529 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.483676 2529 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.488385 2529 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.490062 2529 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.500708 2529 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.501388 2529 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:19:47 ubuntu k3s[2529]: W1101 21:19:47.503898 2529 authorization.go:47] Authorization is disabled
Nov 01 21:19:47 ubuntu k3s[2529]: W1101 21:19:47.504525 2529 authentication.go:55] Authentication is disabled
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.505144 2529 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.586391 2529 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.587129 2529 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.595307 2529 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.603188 2529 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.603412 2529 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.603603 2529 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
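
The scheduler list failures above name the clusterroles that are not found yet; they are most likely transient noise from the scheduler starting before the apiserver has finished creating its bootstrap RBAC objects. A minimal sketch to confirm those roles exist once the server stays up, using the clusterrole names taken from the log:

    sudo k3s kubectl get clusterrole system:kube-scheduler system:volume-scheduler system:basic-user
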
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.664082 2529 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:19:47 ubuntu k3s[2529]: time="2019-11-01T21:19:47.673957315Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:19:47 ubuntu k3s[2529]: time="2019-11-01T21:19:47.675366225Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:19:47 ubuntu k3s[2529]: time="2019-11-01T21:19:47.676181187Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:19:47 ubuntu k3s[2529]: time="2019-11-01T21:19:47.677192162Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.676396 2529 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.678485 2529 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.679640 2529 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.679793 2529 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.679906 2529 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.679966 2529 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.680018 2529 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.680123 2529 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.680357 2529 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:19:47 ubuntu k3s[2529]: I1101 21:19:47.664096 2529 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:19:47 ubuntu k3s[2529]: time="2019-11-01T21:19:47.703154175Z" level=info msg="Listening on :6443"
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.704252 2529 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.704347 2529 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.704494 2529 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.704710 2529 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.704878 2529 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.704960 2529 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:47 ubuntu k3s[2529]: E1101 21:19:47.705086 2529 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: time="2019-11-01T21:19:48.206670507Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:19:48 ubuntu k3s[2529]: time="2019-11-01T21:19:48.307169608Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.308293 2529 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.308386 2529 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.308534 2529 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.308665 2529 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.308731 2529 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.308783 2529 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: time="2019-11-01T21:19:48.308827308Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:19:48 ubuntu k3s[2529]: time="2019-11-01T21:19:48.308891177Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.308905 2529 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.309370 2529 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.309455 2529 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.309635 2529 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.309775 2529 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.309840 2529 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.309889 2529 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.309994 2529 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.310228 2529 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.310289 2529 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.310397 2529 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: I1101 21:19:48.310414 2529 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.310517 2529 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.310576 2529 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.310624 2529 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.310720 2529 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.311073 2529 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.311132 2529 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.311242 2529 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.311357 2529 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.311421 2529 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.311472 2529 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.311587 2529 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.311856 2529 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.311924 2529 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.312036 2529 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.312149 2529 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.312216 2529 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.312271 2529 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.312469 2529 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.312800 2529 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.312857 2529 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.312979 2529 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.313096 2529 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.313155 2529 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.313202 2529 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: E1101 21:19:48.313300 2529 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:19:48 ubuntu k3s[2529]: I1101 21:19:48.457744 2529 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:19:48 ubuntu k3s[2529]: I1101 21:19:48.457801 2529 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:19:48 ubuntu k3s[2529]: I1101 21:19:48.457833 2529 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:19:48 ubuntu k3s[2529]: time="2019-11-01T21:19:48.467562875Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:19:48 ubuntu k3s[2529]: time="2019-11-01T21:19:48.467652743Z" level=info msg="Run: k3s kubectl"
Nov 01 21:19:48 ubuntu k3s[2529]: time="2019-11-01T21:19:48.467677428Z" level=info msg="k3s is up and running"
Nov 01 21:19:48 ubuntu systemd[1]: Started Lightweight Kubernetes.
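One quick way to check whether the API server is actually reachable at this point, assuming you are on the server host and using the kubeconfig path reported a few lines above (k3s bundles kubectl as a subcommand):

    # Uses the kubeconfig the log says was written to /etc/rancher/k3s/k3s.yaml
    sudo k3s kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes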
Nov 01 21:19:48 ubuntu k3s[2529]: time="2019-11-01T21:19:48.469250093Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:19:48 ubuntu k3s[2529]: time="2019-11-01T21:19:48.469346684Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:19:48 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:19:48 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
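The fatal message above is what keeps the service in a restart loop: the kernel was booted without the memory cgroup controller enabled. A minimal sketch of the fix the log itself suggests, assuming a Raspberry Pi-style boot configuration; the exact file is an assumption and varies by image (/boot/cmdline.txt on Raspbian, typically under /boot/firmware/ on Ubuntu images):

    # Append the two flags the log asks for to the single-line kernel cmdline,
    # then reboot. Adjust the path for your image; /boot/cmdline.txt is an assumption.
    sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
    sudo reboot
    # After rebooting, the memory controller should be listed as enabled:
    grep memory /proc/cgroups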
Nov 01 21:19:53 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:19:53 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 14.
Nov 01 21:19:53 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
Nov 01 21:19:53 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:19:54 ubuntu k3s[2566]: time="2019-11-01T21:19:54.298244128Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:19:54 ubuntu k3s[2566]: time="2019-11-01T21:19:54.310106323Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:19:54 ubuntu k3s[2566]: time="2019-11-01T21:19:54.311443828Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:19:54 ubuntu k3s[2566]: time="2019-11-01T21:19:54.352911199Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:19:54 ubuntu k3s[2566]: I1101 21:19:54.354431 2566 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:19:54 ubuntu k3s[2566]: I1101 21:19:54.354967 2566 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:19:54 ubuntu k3s[2566]: I1101 21:19:54.371444 2566 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:54 ubuntu k3s[2566]: I1101 21:19:54.371516 2566 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.373874 2566 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.373972 2566 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.374053 2566 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.374112 2566 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.374188 2566 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.374264 2566 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.374317 2566 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.374373 2566 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.374463 2566 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.374535 2566 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.374599 2566 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: E1101 21:19:54.374653 2566 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:54 ubuntu k3s[2566]: I1101 21:19:54.374728 2566 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:54 ubuntu k3s[2566]: I1101 21:19:54.374754 2566 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:19:54 ubuntu k3s[2566]: I1101 21:19:54.416686 2566 master.go:233] Using reconciler: lease
Nov 01 21:19:55 ubuntu k3s[2566]: W1101 21:19:55.119759 2566 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:19:55 ubuntu k3s[2566]: W1101 21:19:55.142808 2566 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:55 ubuntu k3s[2566]: W1101 21:19:55.155987 2566 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:55 ubuntu k3s[2566]: W1101 21:19:55.158851 2566 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:55 ubuntu k3s[2566]: W1101 21:19:55.166560 2566 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.375361 2566 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.375474 2566 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.375540 2566 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.375600 2566 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.375666 2566 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.375723 2566 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.375773 2566 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.375825 2566 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.375933 2566 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.376036 2566 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.376100 2566 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: E1101 21:19:57.376155 2566 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:19:57 ubuntu k3s[2566]: I1101 21:19:57.376215 2566 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:19:57 ubuntu k3s[2566]: I1101 21:19:57.376241 2566 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:02 ubuntu k3s[2566]: time="2019-11-01T21:20:02.456840271Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.457075 2566 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:20:02 ubuntu k3s[2566]: time="2019-11-01T21:20:02.456901658Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.460431 2566 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.460500 2566 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.461387 2566 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.461433 2566 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.468716 2566 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.468823 2566 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.469037 2566 controller.go:83] Starting OpenAPI controller
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.469104 2566 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.469153 2566 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.469198 2566 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.469239 2566 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.476780 2566 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.476836 2566 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.476894 2566 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.476914 2566 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.485504 2566 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.486884 2566 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.495250 2566 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.495358 2566 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:20:02 ubuntu k3s[2566]: W1101 21:20:02.497581 2566 authorization.go:47] Authorization is disabled
Nov 01 21:20:02 ubuntu k3s[2566]: W1101 21:20:02.497634 2566 authentication.go:55] Authentication is disabled
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.497663 2566 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.566877 2566 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.566877 2566 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.566995 2566 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.567177 2566 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.567345 2566 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.579353 2566 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.579706 2566 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.579863 2566 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.579977 2566 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.580125 2566 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.586867 2566 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.605716 2566 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.662416 2566 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:20:02 ubuntu k3s[2566]: I1101 21:20:02.663409 2566 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.673094 2566 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:20:02 ubuntu k3s[2566]: time="2019-11-01T21:20:02.715328276Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:20:02 ubuntu k3s[2566]: time="2019-11-01T21:20:02.716601376Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:20:02 ubuntu k3s[2566]: time="2019-11-01T21:20:02.717438560Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:20:02 ubuntu k3s[2566]: time="2019-11-01T21:20:02.718099951Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.719760 2566 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.719853 2566 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.720107 2566 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.720273 2566 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.720994 2566 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.721084 2566 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.721237 2566 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: time="2019-11-01T21:20:02.734269087Z" level=info msg="Listening on :6443"
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.734601 2566 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.734676 2566 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.734792 2566 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.734884 2566 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.734946 2566 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.735009 2566 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:02 ubuntu k3s[2566]: E1101 21:20:02.735112 2566 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: time="2019-11-01T21:20:03.236561940Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:20:03 ubuntu k3s[2566]: time="2019-11-01T21:20:03.336739923Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.337344 2566 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.337448 2566 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.337889 2566 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: time="2019-11-01T21:20:03.338071577Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.338133 2566 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: time="2019-11-01T21:20:03.338140872Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.338214 2566 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.338279 2566 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.338398 2566 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.338801 2566 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.338870 2566 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339003 2566 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339107 2566 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339163 2566 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339211 2566 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339324 2566 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339567 2566 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339621 2566 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339737 2566 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339879 2566 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339937 2566 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.339987 2566 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.340082 2566 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.340474 2566 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.340539 2566 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.340703 2566 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.340808 2566 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.340869 2566 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.340919 2566 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.341030 2566 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.341364 2566 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.341433 2566 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.341553 2566 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.341643 2566 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.341699 2566 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.341745 2566 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.341844 2566 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.342063 2566 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.342117 2566 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.342281 2566 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.342382 2566 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.342440 2566 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.342488 2566 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: E1101 21:20:03.342582 2566 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:20:03 ubuntu k3s[2566]: I1101 21:20:03.421213 2566 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:20:03 ubuntu k3s[2566]: I1101 21:20:03.451686 2566 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:20:03 ubuntu k3s[2566]: I1101 21:20:03.451749 2566 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:20:03 ubuntu k3s[2566]: I1101 21:20:03.451783 2566 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:20:03 ubuntu k3s[2566]: I1101 21:20:03.477082 2566 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:20:03 ubuntu k3s[2566]: time="2019-11-01T21:20:03.504110689Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:20:03 ubuntu k3s[2566]: time="2019-11-01T21:20:03.504196668Z" level=info msg="Run: k3s kubectl"
Nov 01 21:20:03 ubuntu k3s[2566]: time="2019-11-01T21:20:03.504223723Z" level=info msg="k3s is up and running"
Nov 01 21:20:03 ubuntu k3s[2566]: time="2019-11-01T21:20:03.504694323Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:20:03 ubuntu k3s[2566]: time="2019-11-01T21:20:03.504782080Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:20:03 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:20:03 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:20:03 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:20:08 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:20:08 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 15.
Nov 01 21:20:08 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
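Note: the fatal "failed to find memory cgroup" above is why every restart cycle ends in status=1/FAILURE — the kernel was booted without the memory cgroup controller enabled, so k3s exits and systemd restarts it every 5 seconds. A minimal sketch of the check and of the fix the error message itself suggests, assuming a Raspberry Pi style boot partition (the exact cmdline file varies per image; some Ubuntu Pi images keep it under /boot/firmware/ instead of /boot/cmdline.txt):

    # Check whether the memory controller is enabled; the "memory" row shows enabled=0 when it is off
    cat /proc/cgroups
    # Append the flags from the error message to the single-line kernel cmdline, then reboot
    sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
    sudo reboot

After the reboot the service should get past this point instead of looping on the same fatal error.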
Nov 01 21:20:08 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:20:09 ubuntu k3s[2589]: time="2019-11-01T21:20:09.294894455Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:20:09 ubuntu k3s[2589]: time="2019-11-01T21:20:09.306820121Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:20:09 ubuntu k3s[2589]: time="2019-11-01T21:20:09.308235311Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:20:09 ubuntu k3s[2589]: time="2019-11-01T21:20:09.350030446Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:20:09 ubuntu k3s[2589]: I1101 21:20:09.351330 2589 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:20:09 ubuntu k3s[2589]: I1101 21:20:09.351878 2589 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:20:09 ubuntu k3s[2589]: I1101 21:20:09.366824 2589 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:09 ubuntu k3s[2589]: I1101 21:20:09.366902 2589 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.369312 2589 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.369408 2589 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.369473 2589 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.369538 2589 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.369619 2589 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.369672 2589 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.369724 2589 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.369777 2589 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.369877 2589 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.369949 2589 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.370011 2589 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: E1101 21:20:09.370065 2589 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:09 ubuntu k3s[2589]: I1101 21:20:09.370176 2589 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:09 ubuntu k3s[2589]: I1101 21:20:09.370205 2589 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:09 ubuntu k3s[2589]: I1101 21:20:09.412598 2589 master.go:233] Using reconciler: lease
Nov 01 21:20:10 ubuntu k3s[2589]: W1101 21:20:10.136282 2589 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:20:10 ubuntu k3s[2589]: W1101 21:20:10.159228 2589 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:10 ubuntu k3s[2589]: W1101 21:20:10.172366 2589 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:10 ubuntu k3s[2589]: W1101 21:20:10.175236 2589 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:10 ubuntu k3s[2589]: W1101 21:20:10.182705 2589 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385078 2589 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385196 2589 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385264 2589 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385322 2589 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385385 2589 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385442 2589 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385492 2589 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385545 2589 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385653 2589 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385751 2589 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385812 2589 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: E1101 21:20:12.385864 2589 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:12 ubuntu k3s[2589]: I1101 21:20:12.385922 2589 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:12 ubuntu k3s[2589]: I1101 21:20:12.385947 2589 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:17 ubuntu k3s[2589]: time="2019-11-01T21:20:17.445748895Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:20:17 ubuntu k3s[2589]: time="2019-11-01T21:20:17.447507613Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.448654 2589 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.450130 2589 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.450197 2589 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.450321 2589 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.450342 2589 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.451112 2589 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.451254 2589 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.451278 2589 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.451352 2589 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.453367 2589 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.453431 2589 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.453489 2589 controller.go:83] Starting OpenAPI controller
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.453534 2589 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.453574 2589 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.453617 2589 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.453655 2589 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.475826 2589 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.475969 2589 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.477170 2589 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:20:17 ubuntu k3s[2589]: W1101 21:20:17.478570 2589 authorization.go:47] Authorization is disabled
Nov 01 21:20:17 ubuntu k3s[2589]: W1101 21:20:17.478618 2589 authentication.go:55] Authentication is disabled
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.478645 2589 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.479479 2589 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.560942 2589 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.577684 2589 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.579555 2589 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.600385 2589 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.600736 2589 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.600982 2589 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.601148 2589 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.601269 2589 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.601386 2589 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.604431 2589 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.605112 2589 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.605298 2589 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.605456 2589 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:20:17 ubuntu k3s[2589]: time="2019-11-01T21:20:17.653320594Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:20:17 ubuntu k3s[2589]: time="2019-11-01T21:20:17.654286646Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:20:17 ubuntu k3s[2589]: time="2019-11-01T21:20:17.654821227Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:20:17 ubuntu k3s[2589]: time="2019-11-01T21:20:17.655259846Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:20:17 ubuntu k3s[2589]: I1101 21:20:17.655828 2589 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.655828 2589 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.655998 2589 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.656125 2589 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.656225 2589 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.656283 2589 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.656334 2589 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.656431 2589 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.665761 2589 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:20:17 ubuntu k3s[2589]: time="2019-11-01T21:20:17.671038280Z" level=info msg="Listening on :6443"
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.671402 2589 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.671485 2589 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.671621 2589 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.671753 2589 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.671817 2589 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.671870 2589 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:17 ubuntu k3s[2589]: E1101 21:20:17.671993 2589 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: time="2019-11-01T21:20:18.173977693Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:20:18 ubuntu k3s[2589]: time="2019-11-01T21:20:18.274187602Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.274726 2589 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.275092 2589 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.275317 2589 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.275511 2589 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.275649 2589 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.275768 2589 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.276222 2589 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: time="2019-11-01T21:20:18.276371366Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:20:18 ubuntu k3s[2589]: time="2019-11-01T21:20:18.276748302Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
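Note: the join command above leaves ${NODE_TOKEN} for the operator to fill in. A minimal sketch of how it would typically be populated, using the token file path printed on the previous line and the server URL from this line (assumes k3s is already installed on the joining node):

    # On this server: print the value that ${NODE_TOKEN} stands for
    sudo cat /var/lib/rancher/k3s/server/node-token
    # On the joining node, with that token value exported as NODE_TOKEN:
    sudo k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}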
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.277258 2589 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.277389 2589 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.277617 2589 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.277810 2589 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.277944 2589 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.278067 2589 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.278286 2589 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.278740 2589 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.278863 2589 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.279090 2589 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.279496 2589 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.279650 2589 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.279813 2589 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.280035 2589 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.280777 2589 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.280908 2589 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.281140 2589 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.281329 2589 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.281459 2589 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.281581 2589 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.281789 2589 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.282290 2589 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.282406 2589 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.282616 2589 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.282790 2589 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.282921 2589 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.283038 2589 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.283227 2589 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.283649 2589 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.283757 2589 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.283965 2589 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.284157 2589 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.284283 2589 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.284396 2589 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: E1101 21:20:18.284597 2589 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:20:18 ubuntu k3s[2589]: I1101 21:20:18.411819 2589 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:20:18 ubuntu k3s[2589]: I1101 21:20:18.443045 2589 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:20:18 ubuntu k3s[2589]: I1101 21:20:18.443101 2589 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:20:18 ubuntu k3s[2589]: I1101 21:20:18.443136 2589 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:20:18 ubuntu k3s[2589]: time="2019-11-01T21:20:18.449301536Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:20:18 ubuntu k3s[2589]: time="2019-11-01T21:20:18.449383886Z" level=info msg="Run: k3s kubectl"
Nov 01 21:20:18 ubuntu k3s[2589]: time="2019-11-01T21:20:18.449409552Z" level=info msg="k3s is up and running"
Nov 01 21:20:18 ubuntu k3s[2589]: time="2019-11-01T21:20:18.449773396Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:20:18 ubuntu k3s[2589]: time="2019-11-01T21:20:18.449855061Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:20:18 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:20:18 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:20:18 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:20:23 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:20:23 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 16.
Nov 01 21:20:23 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
Nov 01 21:20:23 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:20:24 ubuntu k3s[2619]: time="2019-11-01T21:20:24.298652432Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:20:24 ubuntu k3s[2619]: time="2019-11-01T21:20:24.310769917Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:20:24 ubuntu k3s[2619]: time="2019-11-01T21:20:24.312092109Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:20:24 ubuntu k3s[2619]: time="2019-11-01T21:20:24.353542783Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:20:24 ubuntu k3s[2619]: I1101 21:20:24.354835 2619 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:20:24 ubuntu k3s[2619]: I1101 21:20:24.355364 2619 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:20:24 ubuntu k3s[2619]: I1101 21:20:24.370592 2619 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:24 ubuntu k3s[2619]: I1101 21:20:24.370670 2619 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.373170 2619 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.373270 2619 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.373355 2619 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.373416 2619 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.373493 2619 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.373554 2619 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.373604 2619 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.373660 2619 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.373862 2619 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.374014 2619 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.374109 2619 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: E1101 21:20:24.374166 2619 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:24 ubuntu k3s[2619]: I1101 21:20:24.374227 2619 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:24 ubuntu k3s[2619]: I1101 21:20:24.374254 2619 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:24 ubuntu k3s[2619]: I1101 21:20:24.418598 2619 master.go:233] Using reconciler: lease
Nov 01 21:20:25 ubuntu k3s[2619]: W1101 21:20:25.138334 2619 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:20:25 ubuntu k3s[2619]: W1101 21:20:25.161167 2619 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:25 ubuntu k3s[2619]: W1101 21:20:25.174136 2619 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:25 ubuntu k3s[2619]: W1101 21:20:25.176927 2619 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:25 ubuntu k3s[2619]: W1101 21:20:25.184214 2619 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.384336 2619 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.384452 2619 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.384525 2619 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.384595 2619 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.384682 2619 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.384741 2619 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.384800 2619 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.384851 2619 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.384959 2619 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.385058 2619 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.385129 2619 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: E1101 21:20:27.385181 2619 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:27 ubuntu k3s[2619]: I1101 21:20:27.385239 2619 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:27 ubuntu k3s[2619]: I1101 21:20:27.385263 2619 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:32 ubuntu k3s[2619]: time="2019-11-01T21:20:32.442607360Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:20:32 ubuntu k3s[2619]: time="2019-11-01T21:20:32.444575576Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.445577 2619 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.445952 2619 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.447737 2619 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.448687 2619 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.448741 2619 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.448854 2619 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.448875 2619 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.448920 2619 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.448937 2619 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.450049 2619 controller.go:83] Starting OpenAPI controller
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.450131 2619 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.450173 2619 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.450227 2619 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.450271 2619 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.464899 2619 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.466538 2619 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.479959 2619 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.480097 2619 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.465183 2619 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.481043 2619 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:20:32 ubuntu k3s[2619]: W1101 21:20:32.483773 2619 authorization.go:47] Authorization is disabled
Nov 01 21:20:32 ubuntu k3s[2619]: W1101 21:20:32.483834 2619 authentication.go:55] Authentication is disabled
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.483859 2619 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.590481 2619 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.605626 2619 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.606018 2619 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.606300 2619 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.606479 2619 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.606615 2619 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.606745 2619 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.615330 2619 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.615589 2619 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.615773 2619 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.640094 2619 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:20:32 ubuntu k3s[2619]: time="2019-11-01T21:20:32.646840873Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:20:32 ubuntu k3s[2619]: time="2019-11-01T21:20:32.647743705Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:20:32 ubuntu k3s[2619]: time="2019-11-01T21:20:32.648256064Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:20:32 ubuntu k3s[2619]: time="2019-11-01T21:20:32.648715072Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.648943 2619 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.649287 2619 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.649386 2619 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.649509 2619 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.649613 2619 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.649628 2619 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.649679 2619 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.649730 2619 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.649830 2619 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.658763 2619 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:20:32 ubuntu k3s[2619]: time="2019-11-01T21:20:32.675204798Z" level=info msg="Listening on :6443"
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.675551 2619 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.675619 2619 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.675737 2619 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.675825 2619 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.675883 2619 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.675932 2619 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: E1101 21:20:32.676039 2619 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:20:32 ubuntu k3s[2619]: I1101 21:20:32.688355 2619 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:20:33 ubuntu k3s[2619]: time="2019-11-01T21:20:33.183892731Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:20:33 ubuntu k3s[2619]: time="2019-11-01T21:20:33.284109303Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:20:33 ubuntu k3s[2619]: time="2019-11-01T21:20:33.285309202Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:20:33 ubuntu k3s[2619]: time="2019-11-01T21:20:33.285408218Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.285605 2619 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.285672 2619 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.285792 2619 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.285899 2619 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.285962 2619 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.286012 2619 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.286118 2619 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.286478 2619 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.286536 2619 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.286669 2619 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.286820 2619 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.286882 2619 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.286932 2619 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.287065 2619 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.287293 2619 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.287347 2619 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.287464 2619 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.287618 2619 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.287687 2619 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.287750 2619 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.287859 2619 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.288221 2619 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.288288 2619 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.288396 2619 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.288503 2619 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.288565 2619 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.288646 2619 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.288750 2619 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289004 2619 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289059 2619 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289158 2619 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289260 2619 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289319 2619 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289374 2619 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289460 2619 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289664 2619 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289718 2619 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289817 2619 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289920 2619 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.289974 2619 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.290030 2619 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: E1101 21:20:33.290137 2619 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:20:33 ubuntu k3s[2619]: I1101 21:20:33.300664 2619 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:20:33 ubuntu k3s[2619]: I1101 21:20:33.439888 2619 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:20:33 ubuntu k3s[2619]: I1101 21:20:33.439945 2619 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:20:33 ubuntu k3s[2619]: I1101 21:20:33.439978 2619 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:20:33 ubuntu k3s[2619]: time="2019-11-01T21:20:33.460039490Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:20:33 ubuntu k3s[2619]: time="2019-11-01T21:20:33.460124673Z" level=info msg="Run: k3s kubectl"
Nov 01 21:20:33 ubuntu k3s[2619]: time="2019-11-01T21:20:33.460149950Z" level=info msg="k3s is up and running"
Nov 01 21:20:33 ubuntu k3s[2619]: time="2019-11-01T21:20:33.460505183Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:20:33 ubuntu k3s[2619]: time="2019-11-01T21:20:33.460577811Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:20:33 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:20:33 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:20:33 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
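(The fatal message two lines up is what actually kills the process each time: the memory cgroup controller is not enabled on this board's kernel command line. A sketch of the remedy the log itself suggests -- file locations are an assumption and vary by image: /boot/cmdline.txt on Raspbian as the message says, typically /boot/firmware/cmdline.txt or nobtcmd.txt on Ubuntu Raspberry Pi images.)
    # Check whether the memory controller is present and enabled (last column should be 1)
    cat /proc/cgroups
    # Append the flags to the end of the single-line kernel cmdline, then reboot
    sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt
    sudo reboot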
Nov 01 21:20:38 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:20:38 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 17.
Nov 01 21:20:38 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
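(systemd is restarting the unit every 5 seconds; the counter above is already at 17. While the kernel cmdline is being fixed, the loop can be watched or paused with standard systemd tooling, roughly as below.)
    # Follow the restart loop live
    journalctl -u k3s -f
    # Optionally stop the unit until the cgroup flags are in place
    sudo systemctl stop k3s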
Nov 01 21:20:38 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:20:39 ubuntu k3s[2642]: time="2019-11-01T21:20:39.304179733Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:20:39 ubuntu k3s[2642]: time="2019-11-01T21:20:39.316572331Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:20:39 ubuntu k3s[2642]: time="2019-11-01T21:20:39.318029577Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:20:39 ubuntu k3s[2642]: time="2019-11-01T21:20:39.359634609Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:20:39 ubuntu k3s[2642]: I1101 21:20:39.360979 2642 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:20:39 ubuntu k3s[2642]: I1101 21:20:39.361523 2642 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:20:39 ubuntu k3s[2642]: I1101 21:20:39.376580 2642 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:39 ubuntu k3s[2642]: I1101 21:20:39.376676 2642 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379069 2642 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379174 2642 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379234 2642 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379297 2642 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379382 2642 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379440 2642 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379494 2642 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379537 2642 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379680 2642 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379757 2642 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379820 2642 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: E1101 21:20:39.379881 2642 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:39 ubuntu k3s[2642]: I1101 21:20:39.379939 2642 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:39 ubuntu k3s[2642]: I1101 21:20:39.379964 2642 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:39 ubuntu k3s[2642]: I1101 21:20:39.423378 2642 master.go:233] Using reconciler: lease
Nov 01 21:20:40 ubuntu k3s[2642]: W1101 21:20:40.154891 2642 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:20:40 ubuntu k3s[2642]: W1101 21:20:40.177957 2642 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:40 ubuntu k3s[2642]: W1101 21:20:40.191154 2642 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:40 ubuntu k3s[2642]: W1101 21:20:40.194018 2642 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:40 ubuntu k3s[2642]: W1101 21:20:40.201501 2642 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.409813 2642 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.409927 2642 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.409994 2642 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.410052 2642 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.410116 2642 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.410173 2642 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.410224 2642 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.410276 2642 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.410385 2642 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.410506 2642 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.410570 2642 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: E1101 21:20:42.410623 2642 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:42 ubuntu k3s[2642]: I1101 21:20:42.410685 2642 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:42 ubuntu k3s[2642]: I1101 21:20:42.410711 2642 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:47 ubuntu k3s[2642]: time="2019-11-01T21:20:47.457517143Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:20:47 ubuntu k3s[2642]: time="2019-11-01T21:20:47.459961405Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.460320 2642 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.460468 2642 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.460490 2642 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.463494 2642 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.463562 2642 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.463614 2642 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.463739 2642 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.463810 2642 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.463829 2642 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.476864 2642 controller.go:83] Starting OpenAPI controller
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.476949 2642 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.476998 2642 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.477039 2642 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.477078 2642 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.477122 2642 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.477151 2642 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.486760 2642 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.487855 2642 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.503949 2642 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.504102 2642 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:20:47 ubuntu k3s[2642]: W1101 21:20:47.506319 2642 authorization.go:47] Authorization is disabled
Nov 01 21:20:47 ubuntu k3s[2642]: W1101 21:20:47.506382 2642 authentication.go:55] Authentication is disabled
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.506409 2642 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.588246 2642 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.591824 2642 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.608764 2642 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.611855 2642 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.619333 2642 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.619604 2642 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.622237 2642 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.623458 2642 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.624370 2642 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.625354 2642 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.634682 2642 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.634964 2642 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.635108 2642 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
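Note: the "system:kube-scheduler cannot list resource ..." errors above normally appear only while the default RBAC bindings are still being reconciled at startup and stop on their own once the apiserver caches sync. If they were to persist, one way to inspect the scheduler's default binding (a hedged sketch; assumes the stock system:kube-scheduler ClusterRoleBinding that upstream Kubernetes ships) would be:

  # Inspect the scheduler's built-in RBAC binding once the server is up
  sudo k3s kubectl get clusterrolebinding system:kube-scheduler -o yaml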
Nov 01 21:20:47 ubuntu k3s[2642]: I1101 21:20:47.660741 2642 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:20:47 ubuntu k3s[2642]: time="2019-11-01T21:20:47.672324970Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:20:47 ubuntu k3s[2642]: time="2019-11-01T21:20:47.673587128Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:20:47 ubuntu k3s[2642]: time="2019-11-01T21:20:47.674731548Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:20:47 ubuntu k3s[2642]: time="2019-11-01T21:20:47.675412830Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.676139 2642 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.676220 2642 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.676420 2642 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.676566 2642 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.676682 2642 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.676739 2642 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.676876 2642 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.681369 2642 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:20:47 ubuntu k3s[2642]: time="2019-11-01T21:20:47.698055304Z" level=info msg="Listening on :6443"
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.698375 2642 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.698457 2642 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.698582 2642 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.698695 2642 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.698762 2642 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.698821 2642 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:47 ubuntu k3s[2642]: E1101 21:20:47.698933 2642 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: time="2019-11-01T21:20:48.200713853Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:20:48 ubuntu k3s[2642]: time="2019-11-01T21:20:48.300924416Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.301249 2642 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.301328 2642 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.301473 2642 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.301586 2642 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.301646 2642 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.301697 2642 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.301811 2642 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.302227 2642 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.302289 2642 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.302388 2642 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.302490 2642 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.302546 2642 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.302591 2642 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.302677 2642 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.302869 2642 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.302916 2642 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.303042 2642 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.303143 2642 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.303198 2642 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.303247 2642 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.303333 2642 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.303716 2642 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.303788 2642 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.303892 2642 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.303990 2642 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.304048 2642 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.304096 2642 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.304187 2642 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.304390 2642 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.304437 2642 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.304532 2642 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.304688 2642 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.304749 2642 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.304802 2642 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.304911 2642 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.305155 2642 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.305210 2642 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.305354 2642 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.305506 2642 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.305576 2642 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.305631 2642 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: E1101 21:20:48.305745 2642 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:20:48 ubuntu k3s[2642]: time="2019-11-01T21:20:48.310218235Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:20:48 ubuntu k3s[2642]: time="2019-11-01T21:20:48.310907701Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
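Note: the two lines above give everything needed to join a worker. A minimal sketch, assuming the agent machine can reach 192.168.0.37:6443 and that NODE_TOKEN is filled in from the server path shown in the log:

  # On the server: read the join token k3s wrote out
  sudo cat /var/lib/rancher/k3s/server/node-token

  # On the agent: join using the server URL and token from the log above
  sudo k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}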
Nov 01 21:20:48 ubuntu k3s[2642]: I1101 21:20:48.323218 2642 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:20:48 ubuntu k3s[2642]: I1101 21:20:48.454718 2642 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:20:48 ubuntu k3s[2642]: I1101 21:20:48.454773 2642 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:20:48 ubuntu k3s[2642]: I1101 21:20:48.454819 2642 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:20:48 ubuntu k3s[2642]: I1101 21:20:48.477564 2642 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:20:48 ubuntu k3s[2642]: time="2019-11-01T21:20:48.483130158Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:20:48 ubuntu k3s[2642]: time="2019-11-01T21:20:48.483624259Z" level=info msg="Run: k3s kubectl"
Nov 01 21:20:48 ubuntu k3s[2642]: time="2019-11-01T21:20:48.483687906Z" level=info msg="k3s is up and running"
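Note: once the kubeconfig above has been written, the cluster can be queried either with the bundled client (as the "Run: k3s kubectl" line suggests) or with a standalone kubectl pointed at the same file. A sketch; "get nodes" is just an illustrative first check, and sudo may be needed because /etc/rancher/k3s/k3s.yaml is root-owned by default:

  # Bundled client
  sudo k3s kubectl get nodes

  # Or a separate kubectl against the generated kubeconfig
  export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
  kubectl get nodes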
Nov 01 21:20:48 ubuntu k3s[2642]: time="2019-11-01T21:20:48.485500459Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:20:48 ubuntu k3s[2642]: time="2019-11-01T21:20:48.485586624Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
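Note: this fatal message is the actual cause of the restart loop that follows. The fix it points at is enabling the memory cgroup on the kernel command line and rebooting. A hedged sketch: the log names /boot/cmdline.txt (Raspbian); on Ubuntu Raspberry Pi images the equivalent single-line file usually lives under /boot/firmware/ (e.g. cmdline.txt or nobtcmd.txt depending on the release), so check your image before editing:

  # Confirm the memory cgroup is currently disabled (last column is 0)
  grep memory /proc/cgroups

  # Append the flags k3s asks for to the end of the single cmdline line, then reboot
  sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
  sudo reboot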
Nov 01 21:20:48 ubuntu systemd[1]: Started Lightweight Kubernetes.
Nov 01 21:20:48 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:20:48 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:20:53 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:20:53 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 18.
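Note: with RestartSec=5s, systemd keeps restarting k3s.service every five seconds until the cgroup issue above is fixed, which is why the counter has reached 18. Standard systemd tooling is enough to watch the loop:

  # Follow the unit's journal while it restarts
  sudo journalctl -u k3s -f

  # Check current state and the restart counter
  sudo systemctl status k3s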
Nov 01 21:20:53 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
Nov 01 21:20:53 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:20:54 ubuntu k3s[2743]: time="2019-11-01T21:20:54.296912328Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:20:54 ubuntu k3s[2743]: time="2019-11-01T21:20:54.309493041Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:20:54 ubuntu k3s[2743]: time="2019-11-01T21:20:54.310819976Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:20:54 ubuntu k3s[2743]: time="2019-11-01T21:20:54.352113912Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:20:54 ubuntu k3s[2743]: I1101 21:20:54.353468 2743 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:20:54 ubuntu k3s[2743]: I1101 21:20:54.354015 2743 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:20:54 ubuntu k3s[2743]: I1101 21:20:54.368947 2743 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:54 ubuntu k3s[2743]: I1101 21:20:54.369013 2743 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.371343 2743 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.371441 2743 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.371510 2743 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.371583 2743 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.371656 2743 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.371707 2743 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.371759 2743 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.371814 2743 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.372008 2743 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.372170 2743 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.372235 2743 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: E1101 21:20:54.372294 2743 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:54 ubuntu k3s[2743]: I1101 21:20:54.372355 2743 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:54 ubuntu k3s[2743]: I1101 21:20:54.372382 2743 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:20:54 ubuntu k3s[2743]: I1101 21:20:54.416094 2743 master.go:233] Using reconciler: lease
Nov 01 21:20:55 ubuntu k3s[2743]: W1101 21:20:55.158092 2743 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:20:55 ubuntu k3s[2743]: W1101 21:20:55.181401 2743 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:55 ubuntu k3s[2743]: W1101 21:20:55.194486 2743 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:55 ubuntu k3s[2743]: W1101 21:20:55.197386 2743 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:55 ubuntu k3s[2743]: W1101 21:20:55.205037 2743 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.405434 2743 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.405552 2743 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.405624 2743 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.405681 2743 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.405746 2743 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.405803 2743 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.405854 2743 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.405903 2743 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.406012 2743 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.406116 2743 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.406177 2743 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: E1101 21:20:57.406232 2743 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:20:57 ubuntu k3s[2743]: I1101 21:20:57.406290 2743 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:20:57 ubuntu k3s[2743]: I1101 21:20:57.406314 2743 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:21:02 ubuntu k3s[2743]: time="2019-11-01T21:21:02.491011758Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:21:02 ubuntu k3s[2743]: time="2019-11-01T21:21:02.492181512Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.493852 2743 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.494050 2743 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.494093 2743 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.496744 2743 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.496882 2743 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.497034 2743 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.497070 2743 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.498099 2743 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.498154 2743 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.499305 2743 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.499350 2743 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.521993 2743 controller.go:83] Starting OpenAPI controller
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.522085 2743 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.522131 2743 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.522174 2743 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.522216 2743 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.530014 2743 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.531114 2743 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.548538 2743 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.548709 2743 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:21:02 ubuntu k3s[2743]: W1101 21:21:02.550980 2743 authorization.go:47] Authorization is disabled
Nov 01 21:21:02 ubuntu k3s[2743]: W1101 21:21:02.551068 2743 authentication.go:55] Authentication is disabled
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.551096 2743 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.595377 2743 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.626481 2743 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.645487 2743 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.662712 2743 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.664477 2743 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.665582 2743 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.666600 2743 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.667574 2743 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.680493 2743 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.680798 2743 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.680963 2743 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.682109 2743 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:21:02 ubuntu k3s[2743]: I1101 21:21:02.698972 2743 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.725697 2743 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:21:02 ubuntu k3s[2743]: time="2019-11-01T21:21:02.727709205Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:21:02 ubuntu k3s[2743]: time="2019-11-01T21:21:02.728715443Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:21:02 ubuntu k3s[2743]: time="2019-11-01T21:21:02.729292968Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:21:02 ubuntu k3s[2743]: time="2019-11-01T21:21:02.729805532Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.730465 2743 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.730551 2743 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.730686 2743 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.730783 2743 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.730841 2743 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.730903 2743 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.731010 2743 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: time="2019-11-01T21:21:02.768745401Z" level=info msg="Listening on :6443"
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.769777 2743 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.770653 2743 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.771843 2743 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.773564 2743 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.774838 2743 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.775820 2743 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:02 ubuntu k3s[2743]: E1101 21:21:02.777025 2743 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: time="2019-11-01T21:21:03.279836460Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:21:03 ubuntu k3s[2743]: time="2019-11-01T21:21:03.379996457Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.380883 2743 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.381170 2743 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: time="2019-11-01T21:21:03.381291671Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:21:03 ubuntu k3s[2743]: time="2019-11-01T21:21:03.381355892Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.381556 2743 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.382129 2743 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.382238 2743 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.382519 2743 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.383010 2743 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.383711 2743 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.383791 2743 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.384105 2743 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.384394 2743 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.384470 2743 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.384525 2743 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.384667 2743 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.384905 2743 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.384961 2743 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.385082 2743 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.385172 2743 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.385228 2743 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.385277 2743 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.385366 2743 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.385703 2743 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.385756 2743 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.385895 2743 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.385981 2743 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.386036 2743 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.386083 2743 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.386166 2743 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.386365 2743 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.386410 2743 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.386521 2743 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.386650 2743 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.386714 2743 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.386761 2743 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.386865 2743 prometheus.go:228] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.387098 2743 prometheus.go:152] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.387159 2743 prometheus.go:164] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.387954 2743 prometheus.go:176] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.388126 2743 prometheus.go:188] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.388189 2743 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.388238 2743 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: I1101 21:21:03.388252 2743 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Nov 01 21:21:03 ubuntu k3s[2743]: E1101 21:21:03.388343 2743 prometheus.go:228] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Nov 01 21:21:03 ubuntu k3s[2743]: I1101 21:21:03.488286 2743 controller.go:107] OpenAPI AggregationController: Processing item
Nov 01 21:21:03 ubuntu k3s[2743]: I1101 21:21:03.488353 2743 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Nov 01 21:21:03 ubuntu k3s[2743]: I1101 21:21:03.488392 2743 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 01 21:21:03 ubuntu k3s[2743]: I1101 21:21:03.511854 2743 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
Nov 01 21:21:03 ubuntu k3s[2743]: time="2019-11-01T21:21:03.558406910Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 01 21:21:03 ubuntu k3s[2743]: time="2019-11-01T21:21:03.559244022Z" level=info msg="Run: k3s kubectl"
Nov 01 21:21:03 ubuntu k3s[2743]: time="2019-11-01T21:21:03.559730512Z" level=info msg="k3s is up and running"
Nov 01 21:21:03 ubuntu systemd[1]: Started Lightweight Kubernetes.
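At this point the server reports itself up and has written a kubeconfig. A minimal way to check it, assuming the default path /etc/rancher/k3s/k3s.yaml noted above and root access:

    # Use the kubectl bundled with k3s, as the log suggests
    sudo k3s kubectl get nodes

    # Or point a standalone kubectl at the generated kubeconfig
    sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes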
Nov 01 21:21:03 ubuntu k3s[2743]: time="2019-11-01T21:21:03.561580492Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:21:03 ubuntu k3s[2743]: time="2019-11-01T21:21:03.562472714Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Nov 01 21:21:03 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 01 21:21:03 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 01 21:21:08 ubuntu systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Nov 01 21:21:08 ubuntu systemd[1]: k3s.service: Scheduled restart job, restart counter is at 19.
Nov 01 21:21:08 ubuntu systemd[1]: Stopped Lightweight Kubernetes.
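The fatal message above is what keeps k3s in this restart loop: the kernel was booted without the memory cgroup. A minimal sketch of applying the fix the message itself suggests, assuming a Raspberry Pi style single-line /boot/cmdline.txt (some images use /boot/firmware/cmdline.txt instead) and that a reboot is acceptable:

    # Append the parameters named in the error to the kernel command line
    sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
    sudo reboot

    # After the reboot, confirm the memory controller is now available
    grep memory /proc/cgroups

Once the memory cgroup is present, systemd's automatic restart (RestartSec=5s, seen above) should bring k3s up without hitting the fatal error again.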
Nov 01 21:21:08 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Nov 01 21:21:09 ubuntu k3s[2776]: time="2019-11-01T21:21:09.303701589Z" level=info msg="Starting k3s v0.9.1 (755bd1c6)"
Nov 01 21:21:09 ubuntu k3s[2776]: time="2019-11-01T21:21:09.316495398Z" level=info msg="Kine listening on unix://kine.sock"
Nov 01 21:21:09 ubuntu k3s[2776]: time="2019-11-01T21:21:09.317880184Z" level=info msg="Fetching bootstrap data from etcd"
Nov 01 21:21:09 ubuntu k3s[2776]: time="2019-11-01T21:21:09.359352086Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 01 21:21:09 ubuntu k3s[2776]: I1101 21:21:09.360697 2776 server.go:586] external host was not specified, using 192.168.0.37
Nov 01 21:21:09 ubuntu k3s[2776]: I1101 21:21:09.361236 2776 server.go:160] Version: v1.15.4-k3s.1
Nov 01 21:21:09 ubuntu k3s[2776]: I1101 21:21:09.376997 2776 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:21:09 ubuntu k3s[2776]: I1101 21:21:09.377075 2776 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.379426 2776 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.379524 2776 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.379592 2776 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.379650 2776 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.379719 2776 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.379771 2776 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.379819 2776 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.379860 2776 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.380127 2776 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.380287 2776 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.380355 2776 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: E1101 21:21:09.380411 2776 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:09 ubuntu k3s[2776]: I1101 21:21:09.380470 2776 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:21:09 ubuntu k3s[2776]: I1101 21:21:09.380510 2776 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:21:09 ubuntu k3s[2776]: I1101 21:21:09.424559 2776 master.go:233] Using reconciler: lease
Nov 01 21:21:10 ubuntu k3s[2776]: W1101 21:21:10.192344 2776 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Nov 01 21:21:10 ubuntu k3s[2776]: W1101 21:21:10.215295 2776 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:21:10 ubuntu k3s[2776]: W1101 21:21:10.228409 2776 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:21:10 ubuntu k3s[2776]: W1101 21:21:10.231302 2776 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:21:10 ubuntu k3s[2776]: W1101 21:21:10.238770 2776 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.461302 2776 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.461417 2776 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.461485 2776 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.461544 2776 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.461629 2776 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.461688 2776 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.461741 2776 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.461806 2776 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.461910 2776 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.462049 2776 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.462113 2776 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: E1101 21:21:12.462166 2776 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Nov 01 21:21:12 ubuntu k3s[2776]: I1101 21:21:12.462229 2776 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
Nov 01 21:21:12 ubuntu k3s[2776]: I1101 21:21:12.462255 2776 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
Nov 01 21:21:17 ubuntu k3s[2776]: time="2019-11-01T21:21:17.494140443Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Nov 01 21:21:17 ubuntu k3s[2776]: time="2019-11-01T21:21:17.496084552Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.496964 2776 secure_serving.go:116] Serving securely on 127.0.0.1:6444
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.497125 2776 apiservice_controller.go:94] Starting APIServiceRegistrationController
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.497166 2776 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.498013 2776 available_controller.go:376] Starting AvailableConditionController
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.498078 2776 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.498169 2776 autoregister_controller.go:140] Starting autoregister controller
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.498191 2776 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.498348 2776 crd_finalizer.go:255] Starting CRDFinalizer
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.499019 2776 controller.go:81] Starting OpenAPI AggregationController
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.514847 2776 controllermanager.go:160] Version: v1.15.4-k3s.1
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.516017 2776 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.523670 2776 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.524335 2776 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.524940 2776 controller.go:83] Starting OpenAPI controller
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.525397 2776 customresource_discovery_controller.go:208] Starting DiscoveryController
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.525802 2776 naming_controller.go:288] Starting NamingConditionController
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.525839 2776 establishing_controller.go:73] Starting EstablishingController
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.525861 2776 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.561447 2776 server.go:142] Version: v1.15.4-k3s.1
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.561580 2776 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 01 21:21:17 ubuntu k3s[2776]: W1101 21:21:17.563806 2776 authorization.go:47] Authorization is disabled
Nov 01 21:21:17 ubuntu k3s[2776]: W1101 21:21:17.563865 2776 authentication.go:55] Authentication is disabled
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.563890 2776 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.622799 2776 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.636138 2776 cache.go:39] Caches are synced for autoregister controller
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.642127 2776 controller_utils.go:1036] Caches are synced for crd-autoregister controller
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.646297 2776 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.647063 2776 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.677908 2776 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.678149 2776 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.678314 2776 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.678475 2776 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.678648 2776 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.678803 2776 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.678928 2776 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.679052 2776 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Nov 01 21:21:17 ubuntu k3s[2776]: I1101 21:21:17.720220 2776 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 01 21:21:17 ubuntu k3s[2776]: time="2019-11-01T21:21:17.783900893Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Nov 01 21:21:17 ubuntu k3s[2776]: time="2019-11-01T21:21:17.784894706Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Nov 01 21:21:17 ubuntu k3s[2776]: time="2019-11-01T21:21:17.785331049Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Nov 01 21:21:17 ubuntu k3s[2776]: time="2019-11-01T21:21:17.785731596Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.786386 2776 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.786470 2776 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.786594 2776 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.786694 2776 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.786752 2776 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.786804 2776 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.786934 2776 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: time="2019-11-01T21:21:17.809743478Z" level=info msg="Listening on :6443"
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.810114 2776 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.810198 2776 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.810362 2776 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.810482 2776 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.810551 2776 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.810612 2776 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.810730 2776 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
Nov 01 21:21:17 ubuntu k3s[2776]: E1101 21:21:17.830988 2776 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Nov 01 21:21:18 ubuntu k3s[2776]: time="2019-11-01T21:21:18.312365992Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Nov 01 21:21:18 ubuntu k3s[2776]: time="2019-11-01T21:21:18.412685304Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.412953 2776 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.413037 2776 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.413171 2776 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.413351 2776 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.413419 2776 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.413471 2776 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.413591 2776 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.414011 2776 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.414081 2776 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.414217 2776 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.414327 2776 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.414393 2776 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.414448 2776 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.414557 2776 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.414800 2776 prometheus.go:152] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: time="2019-11-01T21:21:18.414795409Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.414862 2776 prometheus.go:164] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: time="2019-11-01T21:21:18.414858704Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.37:6443 -t ${NODE_TOKEN}"
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.414976 2776 prometheus.go:176] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.415069 2776 prometheus.go:188] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.415126 2776 prometheus.go:203] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.415179 2776 prometheus.go:216] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.415285 2776 prometheus.go:228] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.415631 2776 prometheus.go:152] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.415691 2776 prometheus.go:164] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.415813 2776 prometheus.go:176] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.415903 2776 prometheus.go:188] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.415957 2776 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.416002 2776 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.416100 2776 prometheus.go:228] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.416358 2776 prometheus.go:152] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.416420 2776 prometheus.go:164] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.416534 2776 prometheus.go:176] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.416618 2776 prometheus.go:188] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
Nov 01 21:21:18 ubuntu k3s[2776]: E1101 21:21:18.416729 2776 prometheus.go:203] failed