@rcarrata
Last active May 21, 2024 16:22
Script for regenerating the kubeconfig for system:admin user
#!/bin/bash
AUTH_NAME="auth2kube"
NEW_KUBECONFIG="newkubeconfig"
echo "create a certificate request for system:admin user"
openssl req -new -newkey rsa:4096 -nodes -keyout $AUTH_NAME.key -out $AUTH_NAME.csr -subj "/CN=system:admin"
echo "create signing request resource definition"
oc delete csr $AUTH_NAME-access # Delete old csr with the same name
cat << EOF > $AUTH_NAME-csr.yaml  # use > (not >>) so reruns don't append a second document
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: $AUTH_NAME-access
spec:
  signerName: kubernetes.io/kube-apiserver-client
  groups:
  - system:authenticated
  request: $(cat $AUTH_NAME.csr | base64 | tr -d '\n')
  usages:
  - client auth
EOF
oc create -f $AUTH_NAME-csr.yaml
echo "approve csr and extract client cert"
oc get csr
oc adm certificate approve $AUTH_NAME-access
oc get csr $AUTH_NAME-access -o jsonpath='{.status.certificate}' | base64 -d > $AUTH_NAME-access.crt
echo "add system:admin credentials, context to the kubeconfig"
oc config set-credentials system:admin --client-certificate=$AUTH_NAME-access.crt \
--client-key=$AUTH_NAME.key --embed-certs --kubeconfig=/tmp/$NEW_KUBECONFIG
echo "create context for the system:admin"
oc config set-context system:admin --cluster=$(oc config view -o jsonpath='{.clusters[0].name}') \
--namespace=default --user=system:admin --kubeconfig=/tmp/$NEW_KUBECONFIG
echo "extract certificate authority"
oc -n openshift-authentication rsh $(oc get pods -n openshift-authentication -o name | head -1) \
cat /run/secrets/kubernetes.io/serviceaccount/ca.crt > ingress-ca.crt
echo "set certificate authority data"
oc config set-cluster $(oc config view -o jsonpath='{.clusters[0].name}') \
--server=$(oc config view -o jsonpath='{.clusters[0].cluster.server}') --certificate-authority=ingress-ca.crt --kubeconfig=/tmp/$NEW_KUBECONFIG --embed-certs
echo "set current context to system:admin"
oc config use-context system:admin --kubeconfig=/tmp/$NEW_KUBECONFIG
echo "test client certificate authentication with system:admin"
export KUBECONFIG=/tmp/$NEW_KUBECONFIG
oc login -u system:admin
oc get pod -n openshift-console
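As a quick offline sanity check, the client certificate the script writes can be inspected with openssl. The commands below use a locally generated self-signed certificate so they run anywhere; substitute auth2kube-access.crt from the script to check the real one:

```shell
# Sketch (not part of the gist): inspect a client certificate's subject and
# expiry. demo.key/demo.crt are throwaway files generated here for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/CN=system:admin" -days 1 2>/dev/null
openssl x509 -in demo.crt -noout -subject -enddate
```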
@rcarrata
Author

Tested on OCP 4.4.17, but it should be valid on all OCP 4 (and, I guess, OCP 3) clusters:

# bash -x regenerate-kubeconfig.sh
+ AUTH_NAME=auth2kube
+ NEW_KUBECONFIG=newkubeconfig
+ echo 'create a certificate request for system:admin user'
create a certificate request for system:admin user
+ openssl req -new -newkey rsa:4096 -nodes -keyout auth2kube.key -out auth2kube.csr -subj /CN=system:admin/O=system:masters
Generating a 4096 bit RSA private key
.......................................................++
....................................++
writing new private key to 'auth2kube.key'
-----
+ echo 'create signing request resource definition'
create signing request resource definition
+ oc delete csr auth2kube-access
Error from server (NotFound): certificatesigningrequests.certificates.k8s.io "auth2kube-access" not found
+ cat
++ cat auth2kube.csr
++ base64
++ tr -d '\n'
+ oc create -f auth2kube-csr.yaml
certificatesigningrequest.certificates.k8s.io/auth2kube-access created
+ echo 'approve csr and extract client cert'
approve csr and extract client cert
+ oc get csr
NAME               AGE   REQUESTOR                                            CONDITION
auth2kube-access   1s    admin                                                Pending
dev-aws-926rz      41m   system:serviceaccount:dev-aws:dev-aws-bootstrap-sa   Approved,Issued
+ oc adm certificate approve auth2kube-access
certificatesigningrequest.certificates.k8s.io/auth2kube-access approved
+ oc get csr auth2kube-access -o 'jsonpath={.status.certificate}'
+ base64 -d
+ echo 'add system:admin credentials, context to the kubeconfig'
add system:admin credentials, context to the kubeconfig
+ oc config set-credentials system:admin --client-certificate=auth2kube-access.crt --client-key=auth2kube.key --embed-certs --kubeconfig=/tmp/newkubeconfig
User "system:admin" set.
+ echo 'create context for the system:admin'
create context for the system:admin
++ oc config view -o 'jsonpath={.clusters[0].name}'
+ oc config set-context system:admin --cluster=api-cluster-1502-1502-sandbox373-opentlc-com:6443 --namespace=default --user=system:admin --kubeconfig=/tmp/newkubeconfig
Context "system:admin" modified.
+ echo 'extract certificate authority'
extract certificate authority
++ oc get pods -n openshift-authentication -o name
++ head -1
+ oc -n openshift-authentication rsh pod/oauth-openshift-68b6886c4f-7m9f5 cat /run/secrets/kubernetes.io/serviceaccount/ca.crt
+ echo 'set certificate authority data'
set certificate authority data
++ oc config view -o 'jsonpath={.clusters[0].name}'
++ oc config view -o 'jsonpath={.clusters[0].cluster.server}'
+ oc config set-cluster api-cluster-1502-1502-sandbox373-opentlc-com:6443 --server=https://api.cluster-1502.1502.sandbox373.opentlc.com:6443 --certificate-authority=ingress-ca.crt --kubeconfig=/tmp/newkubeconfig --embed-certs
Cluster "api-cluster-1502-1502-sandbox373-opentlc-com:6443" set.
+ echo 'set current context to system:admin'
set current context to system:admin
+ oc config use-context system:admin --kubeconfig=/tmp/newkubeconfig
Switched to context "system:admin".
+ echo 'test client certificate authentication with system:admin'
test client certificate authentication with system:admin
+ export KUBECONFIG=/tmp/newkubeconfig
+ KUBECONFIG=/tmp/newkubeconfig
+ oc login -u system:admin
Logged into "https://api.cluster-1502.1502.sandbox373.opentlc.com:6443" as "system:admin" using existing credentials.

You have access to 65 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".
+ oc get pod -n openshift-console
NAME                        READY   STATUS    RESTARTS   AGE
console-56d5945c85-q7rjp    1/1     Running   0          2d3h
console-56d5945c85-x5669    1/1     Running   0          2d3h
downloads-9cb4cd587-dqdsr   1/1     Running   0          2d3h
downloads-9cb4cd587-m6pzz   1/1     Running   0          2d3h

@lousyd

lousyd commented Jun 3, 2021

That's awesome. Thanks for creating this.

@rcarrata
Author

Glad that helped, @lousyd! :)

@rodolof

rodolof commented Oct 14, 2021

Hi Roberto, it worked flawlessly for me yesterday on a 4.8 cluster. Then I tried to set up a second kubeconfig file for a different cluster (same OCP, same version, etc.), but it failed. After several hours I tried with a fresh terminal, and then it worked. I'm posting this here in case it happens to anybody else.

Thank you so much, what a great effort, congratulations!
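A likely cause (a guess on my part, not confirmed in the thread) is a KUBECONFIG variable left over from the first cluster's session. Starting each run with a clean variable and a distinct file per cluster avoids it; the path below is illustrative:

```shell
# Hypothetical workaround: clear any leftover KUBECONFIG before generating a
# kubeconfig for a second cluster, and keep one file per cluster.
unset KUBECONFIG
export KUBECONFIG=/tmp/kubeconfig-cluster2   # illustrative per-cluster path
echo "KUBECONFIG=$KUBECONFIG"
```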

@rcarrata
Author

@rodolof Thanks for the information and for the comment! And also happy that this script helped :)

Thank you! Best regards!!

@rcarrata
Author

Updated to also support OpenShift 4.9.

@voyasas

voyasas commented Mar 13, 2023

Does the KUBECONFIG env variable need to be set before running all these steps? I got the following error without setting KUBECONFIG:

[root@dstrlaae9201 auth]# oc create -f auth2kube-csr.yaml
error: Missing or incomplete configuration info. Please point to an existing, complete config file:

The problem is we don't have a working kubeconfig. Any suggestion?

@rcarrata
Author

@voyasas Yes, a valid kubeconfig in ~/.kube/config (or one specified via the KUBECONFIG env variable) needs to be present in order to perform this series of commands.
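A minimal pre-flight check along those lines (a sketch; the fallback path is oc's usual default, not something the gist sets):

```shell
#!/bin/bash
# Sketch: fail fast if no usable kubeconfig is present before running the
# regeneration script. oc falls back to ~/.kube/config when KUBECONFIG is unset.
CONFIG="${KUBECONFIG:-$HOME/.kube/config}"
if [ -s "$CONFIG" ]; then
  echo "using kubeconfig: $CONFIG"
else
  echo "no kubeconfig at $CONFIG; export KUBECONFIG to a working config first" >&2
fi
```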

@worldofgeese
Copy link

@rcarrata these kubeconfig files are still time-bound. Do you have a method of regenerating a kubeconfig that doesn't expire? I lost my original.
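One option worth sketching (an assumption on my part, not something the gist covers): since Kubernetes 1.22, a CSR can request a lifetime via spec.expirationSeconds. The signer may cap or ignore the request, and kube-apiserver client certificates cannot be made truly non-expiring, but a longer-lived cert may be obtainable. A variant of the gist's heredoc:

```shell
#!/bin/bash
# Sketch: same CSR as the gist, plus spec.expirationSeconds (Kubernetes 1.22+).
# The signer may still issue a shorter-lived certificate than requested.
AUTH_NAME="auth2kube"
openssl req -new -newkey rsa:4096 -nodes -keyout "$AUTH_NAME.key" \
  -out "$AUTH_NAME.csr" -subj "/CN=system:admin" 2>/dev/null
cat << EOF > "$AUTH_NAME-csr.yaml"
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: $AUTH_NAME-access
spec:
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 7776000   # request 90 days; the signer may issue less
  groups:
  - system:authenticated
  request: $(base64 < "$AUTH_NAME.csr" | tr -d '\n')
  usages:
  - client auth
EOF
```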
