Script for regenerating the kubeconfig for system:admin user
#!/bin/bash
AUTH_NAME="auth2kube"
NEW_KUBECONFIG="newkubeconfig"
echo "create a certificate request for system:admin user"
openssl req -new -newkey rsa:4096 -nodes -keyout $AUTH_NAME.key -out $AUTH_NAME.csr -subj "/CN=system:admin"
echo "create signing request resource definition"
oc delete csr $AUTH_NAME-access # Delete old csr with the same name
cat << EOF > $AUTH_NAME-csr.yaml # Overwrite (not append), so reruns do not duplicate the manifest
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: $AUTH_NAME-access
spec:
  signerName: kubernetes.io/kube-apiserver-client
  groups:
  - system:authenticated
  request: $(cat $AUTH_NAME.csr | base64 | tr -d '\n')
  usages:
  - client auth
EOF
oc create -f $AUTH_NAME-csr.yaml
echo "approve csr and extract client cert"
oc get csr
oc adm certificate approve $AUTH_NAME-access
oc get csr $AUTH_NAME-access -o jsonpath='{.status.certificate}' | base64 -d > $AUTH_NAME-access.crt
echo "add system:admin credentials, context to the kubeconfig"
oc config set-credentials system:admin --client-certificate=$AUTH_NAME-access.crt \
  --client-key=$AUTH_NAME.key --embed-certs --kubeconfig=/tmp/$NEW_KUBECONFIG
echo "create context for the system:admin"
oc config set-context system:admin --cluster=$(oc config view -o jsonpath='{.clusters[0].name}') \
  --namespace=default --user=system:admin --kubeconfig=/tmp/$NEW_KUBECONFIG
echo "extract certificate authority"
oc -n openshift-authentication rsh $(oc get pods -n openshift-authentication -o name | head -1) \
  cat /run/secrets/kubernetes.io/serviceaccount/ca.crt > ingress-ca.crt
echo "set certificate authority data"
oc config set-cluster $(oc config view -o jsonpath='{.clusters[0].name}') \
  --server=$(oc config view -o jsonpath='{.clusters[0].cluster.server}') \
  --certificate-authority=ingress-ca.crt --kubeconfig=/tmp/$NEW_KUBECONFIG --embed-certs
echo "set current context to system:admin"
oc config use-context system:admin --kubeconfig=/tmp/$NEW_KUBECONFIG
echo "test client certificate authentication with system:admin"
export KUBECONFIG=/tmp/$NEW_KUBECONFIG
oc login -u system:admin
oc get pod -n openshift-console
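Before handing the CSR to the cluster, it can be worth sanity-checking locally that the subject really is `system:admin`, since the signed client certificate authenticates as whatever CN the CSR carries. A minimal sketch, reusing the same `AUTH_NAME` as the script above (the key/CSR generation is repeated here so the snippet is self-contained):

```shell
#!/bin/bash
# Sketch: verify the CSR subject before submitting it to the cluster.
AUTH_NAME="auth2kube"
openssl req -new -newkey rsa:4096 -nodes \
  -keyout "$AUTH_NAME.key" -out "$AUTH_NAME.csr" -subj "/CN=system:admin"
# Print the subject; it must be exactly system:admin, because that CN
# is the username the kube-apiserver-client signer binds the cert to.
openssl req -in "$AUTH_NAME.csr" -noout -subject
```

If the subject line does not show `CN = system:admin`, fix the `-subj` argument before creating the CertificateSigningRequest resource.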
That's awesome. Thanks for creating this.
glad that helped @lousyd! :)
Hi Roberto, it worked flawlessly for me yesterday in a 4.8 cluster. Then I tried to set a second kubeconfig file for a different cluster, same OCP, same version etc. But it failed. After several hours, I tried with a fresh terminal, and then it worked. I post this here, just in case it happens to anybody else.
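The fresh-terminal fix is consistent with the shell still carrying the `KUBECONFIG` exported at the end of the first run, so `oc` kept reading the first cluster's file. A hedged sketch of the workaround (the per-cluster file name is hypothetical):

```shell
# After a first run, the shell still has this from the script's final steps:
export KUBECONFIG=/tmp/newkubeconfig
# Before generating a kubeconfig for a second cluster in the same terminal,
# drop the stale variable and pick a distinct output file so the two
# clusters' credentials do not collide:
unset KUBECONFIG
NEW_KUBECONFIG="newkubeconfig-cluster2"   # hypothetical per-cluster name
echo "next run will write /tmp/$NEW_KUBECONFIG"
```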
Thank you so much, what a huge effort, congratulations!
@rodolof Thanks for the information and for the comment! And also happy that this script helped :)
Thanks to you! Big hug!!
Updated to also support OpenShift 4.9
Tested on OCP 4.4.17, but it should be valid on all OCP 4 (and, I guess, OCP 3) clusters.