$ oc get node
NAME           STATUS     ROLES    AGE     VERSION
okd4-master0   NotReady   master   3d22h   v1.17.1
okd4-master1   NotReady   master   3d22h   v1.17.1
okd4-master2   NotReady   master   3d22h   v1.17.1
okd4-worker0   NotReady   worker   3d22h   v1.17.1
okd4-worker1   NotReady   worker   3d22h   v1.17.1
[zaki@okd4-manager ~]$ ssh core@okd4-master0
Fedora CoreOS 31.20200127.20.1
Tracker: https://github.com/coreos/fedora-coreos-tracker

Last login: Thu Feb 20 04:10:29 2020 from 172.16.0.50
[core@okd4-master0 ~]$ sudo openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | head -12
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            c9:ad:de:2d:fa:6e:be:78:10:4c:55:8a:69:04:7d
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: OU = openshift, CN = kubelet-signer
        Validity
            Not Before: Feb 16 06:13:34 2020 GMT
            Not After : Feb 17 03:26:50 2020 GMT
        Subject: O = system:nodes, CN = system:node:okd4-master0
        Subject Public Key Info:

Whoops, I went and did it (let the kubelet client certs expire) lol
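The `Not After` date above is already in the past. A quick way to check expiry on each node without eyeballing the full dump (a hedged sketch; assumes `openssl` is available, and the default path is the one from the session above):

```shell
#!/bin/bash
# Report whether a kubelet client certificate is still valid.
# Pass a cert path, or default to the node path seen in the session above.
cert_status() {
  cert="${1:-/var/lib/kubelet/pki/kubelet-client-current.pem}"
  # print the expiry timestamp
  openssl x509 -noout -enddate -in "$cert"
  # -checkend 0 exits non-zero if the cert is already expired
  if openssl x509 -checkend 0 -noout -in "$cert" >/dev/null; then
    echo "certificate is still valid"
  else
    echo "certificate has expired"
  fi
}
```

Run as `sudo cert_status` on a node; in the state above it would report the cert as expired.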

Let's check, on this OKD 4.4 environment, whether the certificates can be renewed with the procedure from:
Enabling OpenShift 4 Clusters to Stop and Resume Cluster VMs – Red Hat OpenShift Blog

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubelet-bootstrap-cred-manager
  namespace: openshift-machine-config-operator
  labels:
    k8s-app: kubelet-bootstrap-cred-manager
spec:
  selector:
    matchLabels:
      k8s-app: kubelet-bootstrap-cred-manager
  template:
    metadata:
      labels:
        k8s-app: kubelet-bootstrap-cred-manager
    spec:
      containers:
      - name: kubelet-bootstrap-cred-manager
        image: quay.io/openshift/origin-cli:v4.0
        command: ['/bin/bash', '-ec']
        args:
        - |
          #!/bin/bash

          set -eoux pipefail

          while true; do
          unset KUBECONFIG

          echo "----------------------------------------------------------------------"
          echo "Gather info..."
          echo "----------------------------------------------------------------------"
          # context
          intapi=$(oc get infrastructures.config.openshift.io cluster -o "jsonpath={.status.apiServerInternalURI}")
          context="$(oc --config=/etc/kubernetes/kubeconfig config current-context)"
          # cluster
          cluster="$(oc --config=/etc/kubernetes/kubeconfig config view -o "jsonpath={.contexts[?(@.name==\"$context\")].context.cluster}")"
          server="$(oc --config=/etc/kubernetes/kubeconfig config view -o "jsonpath={.clusters[?(@.name==\"$cluster\")].cluster.server}")"
          # token
          ca_crt_data="$(oc get secret -n openshift-machine-config-operator node-bootstrapper-token -o "jsonpath={.data.ca\.crt}" | base64 --decode)"
          namespace="$(oc get secret -n openshift-machine-config-operator node-bootstrapper-token -o "jsonpath={.data.namespace}" | base64 --decode)"
          token="$(oc get secret -n openshift-machine-config-operator node-bootstrapper-token -o "jsonpath={.data.token}" | base64 --decode)"

          echo "----------------------------------------------------------------------"
          echo "Generate kubeconfig"
          echo "----------------------------------------------------------------------"

          export KUBECONFIG="$(mktemp)"
          kubectl config set-credentials "kubelet" --token="$token" >/dev/null
          ca_crt="$(mktemp)"; echo "$ca_crt_data" > $ca_crt
          kubectl config set-cluster $cluster --server="$intapi" --certificate-authority="$ca_crt" --embed-certs >/dev/null
          kubectl config set-context kubelet --cluster="$cluster" --user="kubelet" >/dev/null
          kubectl config use-context kubelet >/dev/null

          echo "----------------------------------------------------------------------"
          echo "Print kubeconfig"
          echo "----------------------------------------------------------------------"
          cat "$KUBECONFIG"

          echo "----------------------------------------------------------------------"
          echo "Whoami?"
          echo "----------------------------------------------------------------------"
          oc whoami
          whoami

          echo "----------------------------------------------------------------------"
          echo "Moving to real kubeconfig"
          echo "----------------------------------------------------------------------"
          cp /etc/kubernetes/kubeconfig /etc/kubernetes/kubeconfig.prev
          chown root:root ${KUBECONFIG}
          chmod 0644 ${KUBECONFIG}
          mv "${KUBECONFIG}" /etc/kubernetes/kubeconfig

          echo "----------------------------------------------------------------------"
          echo "Sleep 60 seconds..."
          echo "----------------------------------------------------------------------"
          sleep 60
          done
        securityContext:
          privileged: true
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/kubernetes/
          name: kubelet-dir
      nodeSelector:
        node-role.kubernetes.io/master: ""
      priorityClassName: "system-cluster-critical"
      restartPolicy: Always
      securityContext:
        runAsUser: 0
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 120
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 120
      volumes:
      - hostPath:
          path: /etc/kubernetes/
          type: Directory
        name: kubelet-dir
[zaki@okd4-manager ~]$ oc apply -f $HOME/kubelet-bootstrap-cred-manager-ds.yaml.yaml
daemonset.apps/kubelet-bootstrap-cred-manager created
[zaki@okd4-manager ~]$ oc delete secrets/csr-signer-signer secrets/csr-signer -n openshift-kube-controller-manager-operator
secret "csr-signer-signer" deleted
secret "csr-signer" deleted
[zaki@okd4-manager ~]$ oc get secret -n openshift-kube-controller-manager-operator 
NAME                                               TYPE                                  DATA   AGE
builder-dockercfg-wjqqk                            kubernetes.io/dockercfg               1      3d22h
builder-token-9pp8w                                kubernetes.io/service-account-token   4      3d22h
builder-token-rrvqg                                kubernetes.io/service-account-token   4      3d22h
default-dockercfg-qtgbw                            kubernetes.io/dockercfg               1      3d22h
default-token-lbvnt                                kubernetes.io/service-account-token   4      3d22h
default-token-s7j7r                                kubernetes.io/service-account-token   4      3d22h
deployer-dockercfg-rrvn8                           kubernetes.io/dockercfg               1      3d22h
deployer-token-h5zdp                               kubernetes.io/service-account-token   4      3d22h
deployer-token-mtbjc                               kubernetes.io/service-account-token   4      3d22h
kube-controller-manager-operator-dockercfg-kfbzd   kubernetes.io/dockercfg               1      3d22h
kube-controller-manager-operator-serving-cert      kubernetes.io/tls                     2      3d22h
kube-controller-manager-operator-token-cztvj       kubernetes.io/service-account-token   4      3d22h
kube-controller-manager-operator-token-hhmkd       kubernetes.io/service-account-token   4      3d22h
next-service-account-private-key                   Opaque                                2      3d22h
[zaki@okd4-manager ~]$
[zaki@okd4-manager ~]$ oc get clusteroperator
NAME                                       VERSION                         AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
cloud-credential                           4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
cluster-autoscaler                         4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
console                                    4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
csi-snapshot-controller                    4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
dns                                        4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
image-registry                             4.4.0-0.okd-2020-01-28-022517   True        False         False      3d21h
ingress                                    4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
insights                                   4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
kube-apiserver                             4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
kube-controller-manager                    4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
kube-scheduler                             4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
kube-storage-version-migrator              4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
machine-api                                4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
machine-config                             4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
marketplace                                4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
monitoring                                 4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
network                                    4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
node-tuning                                4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
openshift-apiserver                        4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
openshift-controller-manager               4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
openshift-samples                          4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
operator-lifecycle-manager                 4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
operator-lifecycle-manager-catalog         4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
operator-lifecycle-manager-packageserver   4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
service-ca                                 4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
service-catalog-apiserver                  4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
service-catalog-controller-manager         4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
storage                                    4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h
support                                    4.4.0-0.okd-2020-01-28-022517   True        False         False      3d22h

Hmm... I can't tell whether anything has actually changed. Oh well, let's just go for it.

[zaki@okd4-manager ~]$ oc get node
NAME           STATUS     ROLES    AGE     VERSION
okd4-master0   NotReady   master   3d22h   v1.17.1
okd4-master1   NotReady   master   3d22h   v1.17.1
okd4-master2   NotReady   master   3d22h   v1.17.1
okd4-worker0   NotReady   worker   3d22h   v1.17.1
okd4-worker1   NotReady   worker   3d22h   v1.17.1
[zaki@okd4-manager ~]$ for host in $(oc get node --no-headers | awk '{print $1}'); do ssh core@${host} sudo shutdown -h now; done
Connection to okd4-master0 closed by remote host.
Connection to okd4-master1 closed by remote host.
Connection to okd4-master2 closed by remote host.
Connection to okd4-worker0 closed by remote host.
Connection to okd4-worker1 closed by remote host.
[zaki@okd4-manager ~]$ 
[zaki@okd4-manager ~]$ ping okd4-master0
PING okd4-master0 (172.16.0.10) 56(84) bytes of data.
64 bytes from okd4-master0.okd4.naru.jp-z.jp (172.16.0.10): icmp_seq=1 ttl=64 time=0.113 ms
64 bytes from okd4-master0.okd4.naru.jp-z.jp (172.16.0.10): icmp_seq=2 ttl=64 time=0.302 ms

--- okd4-master0 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
[zaki@okd4-manager ~]$ oc get node
Unable to connect to the server: EOF

...I ran shutdown instead of reboot. (Then why was ping still getting replies?)
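For the record, the loop should have issued a reboot rather than a shutdown. A corrected sketch (assumes `oc` is logged in and passwordless ssh as `core` works, as in the session above):

```shell
#!/bin/bash
# Reboot (rather than shut down) every node reported by `oc get node`.
reboot_all_nodes() {
  for host in $(oc get node --no-headers | awk '{print $1}'); do
    # systemctl reboot instead of shutdown -h, so the nodes come back on their own
    ssh "core@${host}" sudo systemctl reboot
  done
}
```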

Power the VMs back on and check:

[zaki@okd4-manager ~]$ oc get node
NAME           STATUS     ROLES    AGE     VERSION
okd4-master0   NotReady   master   3d22h   v1.17.1
okd4-master1   NotReady   master   3d22h   v1.17.1
okd4-master2   NotReady   master   3d22h   v1.17.1
okd4-worker0   NotReady   worker   3d22h   v1.17.1
okd4-worker1   NotReady   worker   3d22h   v1.17.1

Hmm...

Let's wait and see for a bit.
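One thing worth checking while waiting: when the kubelets come back with expired certs, they file fresh CSRs that may sit pending until someone approves them, and the nodes stay NotReady until then. A hedged sketch of an approval loop (assumes `oc` is logged in with sufficient privileges):

```shell
#!/bin/bash
# Approve every CSR that has not yet been issued a certificate.
# May need to be run more than once, since node CSRs come in waves
# (client cert first, then serving cert).
approve_pending_csrs() {
  oc get csr -o go-template='{{range .items}}{{if not .status.certificate}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | while read -r name; do
      oc adm certificate approve "$name"
    done
}
```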
