Run Rancher agent in k8s v1.24 - no secret exists for service account cattle-system/cattle

I noticed errors related to the service account token secret when importing a Kubernetes v1.24 cluster into Rancher server v2.6. Below I show the error and explain the cause based on the documentation. This error started to happen with Kubernetes 1.24, which enabled new features that change how service account tokens are generated. It looks like many people hit the same issue, see here.

The LegacyServiceAccountTokenNoAutoGeneration feature gate is beta, and enabled by default. When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount. Use the TokenRequest API to acquire service account tokens, or if a non-expiring token is required, create a Secret API object for the token controller to populate with a service account token by following this guide.
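As the quoted documentation notes, short-lived tokens should now come from the TokenRequest API. With kubectl v1.24+ that is a one-liner; shown here for the cattle service account from the error below (the namespace and account names come from the Rancher setup, and this assumes kubectl access to the cluster):

```shell
# Request a short-lived token for the cattle service account via the
# TokenRequest API (kubectl v1.24+); --duration is optional.
kubectl -n cattle-system create token cattle --duration=1h
```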

Kubernetes version: 1.24.4
Rancher Server Version: 2.6
Error Msg:

level=fatal msg="looking up cattle-system/cattle ca/token: no secret exists for service account cattle-system/cattle"
INFO: Environment: CATTLE_ADDRESS=192.168.200.139 CATTLE_CA_CHECKSUM=416b2f4d272acce6f53913b24a9829e7e393e7a7e059aca2441f51e719b3a895 CATTLE_CLUSTER=true CATTLE_CLUSTER_AGENT_PORT=tcp://10.98.229.251:80 CATTLE_CLUSTER_AGENT_PORT_443_TCP=tcp://10.98.229.251:443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_ADDR=10.98.229.251 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PORT=443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_PORT_80_TCP=tcp://10.98.229.251:80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_ADDR=10.98.229.251 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PORT=80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_SERVICE_HOST=10.98.229.251 CATTLE_CLUSTER_AGENT_SERVICE_PORT=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTP=80 
CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTPS_INTERNAL=443 CATTLE_CLUSTER_REGISTRY= CATTLE_INGRESS_IP_DOMAIN=sslip.io CATTLE_INSTALL_UUID=23fc6a1d-9991-4362-a68e-a1ee0ea81e69 CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-5975998d67-57k9c CATTLE_SERVER=https://<servername> CATTLE_SERVER_VERSION=v2.6.5 INFO: Using resolv.conf: search cattle-system.svc.cluster.local svc.cluster.local cluster.local nameserver 10.96.0.10 options ndots:5 INFO: https://<servername>/ping is accessible INFO: <servername> resolves to 10.146.68.37 INFO: Value from https://<servername>/v3/settings/cacerts is an x509 certificate time="2022-05-22T13:19:14Z" level=info msg="Listening on /tmp/log.sock" time="2022-05-22T13:19:14Z" level=info 
msg="Rancher agent version v2.6.5 is starting" time="2022-05-22T13:19:14Z" level=fatal msg="looking up cattle-system/cattle ca/token: no secret exists for service account cattle-system/cattle"
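To confirm this is the failure mode, you can check that the service account exists but has no auto-generated token Secret (commands are illustrative and assume kubectl access to the downstream cluster):

```shell
# The service account itself exists...
kubectl -n cattle-system get serviceaccount cattle
# ...but on v1.24+ no token Secret was auto-generated for it,
# so this typically returns nothing
kubectl -n cattle-system get secrets --field-selector type=kubernetes.io/service-account-token
```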

The reason is that starting with Kubernetes version 1.24, the kube-controller-manager feature gate LegacyServiceAccountTokenNoAutoGeneration is enabled by default, as quoted above from the documentation.

As a workaround, you can disable this feature gate.

Open /etc/kubernetes/manifests/kube-controller-manager.yaml on each master/control-plane node (or automate that) and add the following flag under spec.containers[0].command; the kubelet will automatically restart the kube-controller-manager pod with the new configuration:

- --feature-gates=LegacyServiceAccountTokenNoAutoGeneration=false

Example:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
    - --client-ca-file=/etc/kubernetes/ssl/ca.crt
    - --cluster-cidr=10.233.64.0/18
    - --cluster-name=k8s.dev.cluster.local
    - --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/ssl/ca.key
    - --configure-cloud-routes=false
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --leader-elect-lease-duration=15s
    - --leader-elect-renew-deadline=10s
    - --node-cidr-mask-size=24
    - --node-monitor-grace-period=40s
    - --node-monitor-period=5s
    - --profiling=False
    - --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/ssl/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/ssl/sa.key
    - --service-cluster-ip-range=10.233.0.0/18
    - --terminated-pod-gc-threshold=12500
    - --use-service-account-credentials=true
    - --feature-gates=LegacyServiceAccountTokenNoAutoGeneration=false
...

A second workaround is described here.
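That second workaround is the one the Kubernetes documentation quoted above recommends: instead of disabling the feature gate, create a token Secret yourself and let the token controller populate it with a non-expiring token. A minimal sketch (the secret name cattle-token is an assumption; the annotation must name the service account, and the controller fills in data.token):

```yaml
apiVersion: v1
kind: Secret
metadata:
  # hypothetical name; any name works, the annotation is what binds it
  name: cattle-token
  namespace: cattle-system
  annotations:
    kubernetes.io/service-account.name: cattle
type: kubernetes.io/service-account-token
```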

