Introduction

This is an addendum to the A Cloud Guru "Kubernetes the Hard Way" course lesson titled "Setting up the Kubernetes Scheduler". It covers modifications that need to be made to some of the configuration files when setting up the k8s scheduler with a more recent binary version than the one used in the course.

Details

Course: Kubernetes the Hard Way
Lesson: 7.6
Lesson Title: Setting Up the Kubernetes Scheduler
Binary Version: I was using version 1.23.1 of the kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl binaries.

Steps

To get this working on v1.23.1, I had to edit some of the configuration files.

  1. Make your configuration files match what I have below.

    /var/lib/kubernetes/encryption-config.yaml:

    kind: EncryptionConfig
    apiVersion: v1
    resources:
      - resources:
          - secrets
        providers:
          - aescbc:
              keys:
                - name: key1
                  secret: kLnbAwSiy20XJJX7zXIYvO6W7esSzFmUvwqywCPpckQ=
          - identity: {}

    /etc/systemd/system/kube-apiserver.service:

    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
    --advertise-address=172.31.114.47 \
    --allow-privileged=true \
    --apiserver-count=3 \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/log/audit.log \
    --authorization-mode=Node,RBAC \
    --bind-address=0.0.0.0 \
    --client-ca-file=/var/lib/kubernetes/ca.pem \
    --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
    --enable-swagger-ui=true \
    --etcd-cafile=/var/lib/kubernetes/ca.pem \
    --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
    --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
    --etcd-servers=https://172.31.121.28:2379,https://172.31.114.47:2379 \
    --event-ttl=1h \
    --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
    --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
    --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
    --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
    --runtime-config=api/all=true \
    --service-account-key-file=/var/lib/kubernetes/service-account.pem \
    --service-cluster-ip-range=10.32.0.0/24 \
    --service-node-port-range=30000-32767 \
    --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \
    --service-account-issuer=api \
    --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
    --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
    --v=2 \
    --kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS
    Restart=on-failure
    RestartSec=5
    [Install]
    WantedBy=multi-user.target

    /etc/kubernetes/config/kube-scheduler.yaml:

    apiVersion: kubescheduler.config.k8s.io/v1beta3
    kind: KubeSchedulerConfiguration
    clientConnection:
      kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
    leaderElection:
      leaderElect: true
  2. Next, reload the daemon and restart the services.

    cloud_user@3d344025af1c:~$ sudo systemctl daemon-reload
    cloud_user@3d344025af1c:~$ sudo systemctl restart kube-apiserver kube-controller-manager kube-scheduler

  3. Then check the status and ensure all three services are running.

    cloud_user@b045d4d2691c:~$ sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler
    ● kube-apiserver.service - Kubernetes API Server
       Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2022-01-03 20:26:38 UTC; 7s ago
         Docs: https://github.com/kubernetes/kubernetes
     Main PID: 2761 (kube-apiserver)
        Tasks: 11
       Memory: 229.8M
          CPU: 5.694s
       CGroup: /system.slice/kube-apiserver.service
               └─2761 /usr/local/bin/kube-apiserver --advertise-address=172.31.114.47 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/audit.log --auth
    
    Jan 03 20:26:45 b045d4d2691c.mylabserver.com kube-apiserver[2761]: I0103 20:26:45.121918    2761 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
    Jan 03 20:26:45 b045d4d2691c.mylabserver.com kube-apiserver[2761]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
    Jan 03 20:26:45 b045d4d2691c.mylabserver.com kube-apiserver[2761]: I0103 20:26:45.222528    2761 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
    Jan 03 20:26:45 b045d4d2691c.mylabserver.com kube-apiserver[2761]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
    Jan 03 20:26:45 b045d4d2691c.mylabserver.com kube-apiserver[2761]: I0103 20:26:45.320421    2761 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
    Jan 03 20:26:45 b045d4d2691c.mylabserver.com kube-apiserver[2761]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
    Jan 03 20:26:45 b045d4d2691c.mylabserver.com kube-apiserver[2761]: I0103 20:26:45.423423    2761 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
    Jan 03 20:26:45 b045d4d2691c.mylabserver.com kube-apiserver[2761]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
    Jan 03 20:26:45 b045d4d2691c.mylabserver.com kube-apiserver[2761]: I0103 20:26:45.521754    2761 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
    Jan 03 20:26:45 b045d4d2691c.mylabserver.com kube-apiserver[2761]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
    
    ● kube-controller-manager.service - Kubernetes Controller Manager
       Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2022-01-03 20:26:37 UTC; 8s ago
         Docs: https://github.com/kubernetes/kubernetes
     Main PID: 2750 (kube-controller)
        Tasks: 6
       Memory: 18.1M
          CPU: 1.696s
       CGroup: /system.slice/kube-controller-manager.service
               └─2750 /usr/local/bin/kube-controller-manager --address=0.0.0.0 --cluster-cidr=10.200.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem --cluster-signing-key-file=/var/lib/kubernetes/ca-key
    
    Jan 03 20:26:40 b045d4d2691c.mylabserver.com kube-controller-manager[2750]: W0103 20:26:40.046464    2750 authentication.go:316] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-auth
    Jan 03 20:26:40 b045d4d2691c.mylabserver.com kube-controller-manager[2750]: W0103 20:26:40.046512    2750 authentication.go:340] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-
    Jan 03 20:26:40 b045d4d2691c.mylabserver.com kube-controller-manager[2750]: W0103 20:26:40.046536    2750 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
    Jan 03 20:26:40 b045d4d2691c.mylabserver.com kube-controller-manager[2750]: I0103 20:26:40.046567    2750 controllermanager.go:196] Version: v1.23.1
    Jan 03 20:26:40 b045d4d2691c.mylabserver.com kube-controller-manager[2750]: I0103 20:26:40.048473    2750 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1641241598\" [serving] validSe
    Jan 03 20:26:40 b045d4d2691c.mylabserver.com kube-controller-manager[2750]: I0103 20:26:40.048692    2750 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@16412416
    Jan 03 20:26:40 b045d4d2691c.mylabserver.com kube-controller-manager[2750]: I0103 20:26:40.048723    2750 secure_serving.go:200] Serving securely on [::]:10257
    Jan 03 20:26:40 b045d4d2691c.mylabserver.com kube-controller-manager[2750]: I0103 20:26:40.049023    2750 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
    Jan 03 20:26:40 b045d4d2691c.mylabserver.com kube-controller-manager[2750]: I0103 20:26:40.049593    2750 tlsconfig.go:240] "Starting DynamicServingCertificateController"
    Jan 03 20:26:44 b045d4d2691c.mylabserver.com kube-controller-manager[2750]: E0103 20:26:44.079969    2750 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controll
    
    ● kube-scheduler.service - Kubernetes Scheduler
       Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2022-01-03 20:26:37 UTC; 8s ago
         Docs: https://github.com/kubernetes/kubernetes
     Main PID: 2741 (kube-scheduler)
        Tasks: 8
       Memory: 57.1M
          CPU: 1.860s
       CGroup: /system.slice/kube-scheduler.service
               └─2741 /usr/local/bin/kube-scheduler --config=/etc/kubernetes/config/kube-scheduler.yaml --v=2
    

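As a side note, the `secret` value in `encryption-config.yaml` is just a base64-encoded 32-byte key. If you'd rather generate a fresh key than reuse the one shown above, the approach from the original Kubernetes the Hard Way guide works; this sketch assumes a typical Linux environment with `head` and `base64` available:

```shell
# Generate a random 32-byte key and base64-encode it; the result can be
# pasted into the "secret" field of encryption-config.yaml.
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
echo "${ENCRYPTION_KEY}"

# Sanity check: the value should decode back to exactly 32 bytes,
# which is what the aescbc provider expects.
printf '%s' "${ENCRYPTION_KEY}" | base64 -d | wc -c
```

The same sanity check can be run against any key you paste into the file; if it doesn't decode to 32 bytes, the kube-apiserver will refuse to start with that encryption config.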
Credits/Sources

I didn't solve this problem on my own. The following pages contain answers to the various issues I had. Those folks did most of the legwork for me; a big "Thank You" goes to them.
