In this proof of concept, we will configure an Azure-based OpenShift cluster to generate bound service account tokens that Azure API services can trust for authentication.
To begin, we need an existing Azure-based OpenShift cluster.
We will then:

- Extract the cluster's ServiceAccount public signing key, which will be used to generate OIDC discovery and JSON Web Key Set (JWKS) documents.
- Create an Azure blob storage container and upload the OIDC discovery and JWKS documents.
- Configure cluster authentication with a `serviceAccountIssuer` of the publicly available Azure blob container endpoint URL.
- Create a User-Assigned Managed Identity (MI) for the cluster-ingress-operator.
- Create a Federated Identity Credential within the Managed Identity for the cluster-ingress-operator's service account.
- Modify the ingress operator's credential secret to contain the `clientID` of the User-Assigned Managed Identity.
- Deploy a version of the cluster-ingress-operator which has been configured to authenticate with the mounted bound service account token.
- Demonstrate that the cluster-ingress-operator is able to recreate the wildcard A record in Azure DNS, authenticating with a Federated Identity Credential.
A recent version of the Azure CLI (>= 2.42.0), `azwi` (>= v0.11.0), and `jq` are needed for the subsequent steps.
Note that many of the steps below would be automated by `ccoctl` in a CCO implementation.
- Set up environment variables

We can either use the resource group in which the OpenShift cluster infrastructure already resides or create a new resource group. Either way, configure the resource group in which we will create the Managed Identity infrastructure.

```sh
export RESOURCE_GROUP="< YOUR RESOURCE GROUP >"
export LOCATION="centralus"
export AZURE_STORAGE_ACCOUNT="oidcissuer$(openssl rand -hex 4)"
export AZURE_STORAGE_CONTAINER="oidc-test"
```
- Extract the cluster's ServiceAccount public signing key to `./serviceaccount-signer.public`:

```sh
oc get configmap --namespace openshift-kube-apiserver bound-sa-token-signing-certs \
  --output json | jq --raw-output '.data["service-account-001.pub"]' > serviceaccount-signer.public
```
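As a quick sanity check (assuming the extracted key is a PEM-encoded RSA public key, which is what `azwi` expects), the key can be inspected with openssl:

```sh
# Optional: confirm the file contains a readable RSA public key.
openssl rsa -pubin -in serviceaccount-signer.public -noout -text | head -n 1
```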
- Generate the OIDC discovery document `./openid-configuration.json`:

```sh
cat <<EOF > openid-configuration.json
{
  "issuer": "https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/",
  "jwks_uri": "https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/openid/v1/jwks",
  "response_types_supported": [
    "id_token"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ]
}
EOF
```
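Before uploading, the generated document can be checked with jq to confirm it is valid JSON and that the issuer and JWKS URLs point at the storage container created below:

```sh
# Print the issuer and jwks_uri from the generated discovery document.
jq -r '.issuer, .jwks_uri' openid-configuration.json
```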
- Generate the JWKS document `./jwks.json`:

```sh
azwi jwks --public-keys ./serviceaccount-signer.public --output-file jwks.json
```
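The resulting JWKS should contain one entry per signing key. As a quick check (assuming `azwi` emits the standard `kid`, `use` and `alg` JWK fields), the keys can be summarized with jq:

```sh
# Summarize each key in the generated JWKS document.
jq '.keys[] | {kid, use, alg}' jwks.json
```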
- Create an Azure blob storage account and container:

```sh
az storage account create --resource-group "${RESOURCE_GROUP}" --name "${AZURE_STORAGE_ACCOUNT}"
az storage container create --name "${AZURE_STORAGE_CONTAINER}" --public-access container
```
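Note that some subscriptions disallow anonymous blob access by default; whether this applies to your subscription is an assumption, not a required step. If creating the container with `--public-access container` fails with a public-access error, blob public access may need to be enabled on the storage account first:

```sh
# Only needed if public blob access is disabled at the storage account level.
az storage account update \
  --name "${AZURE_STORAGE_ACCOUNT}" \
  --resource-group "${RESOURCE_GROUP}" \
  --allow-blob-public-access true
```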
- Upload the OIDC discovery document to the Azure blob container:

```sh
az storage blob upload \
  --container-name "${AZURE_STORAGE_CONTAINER}" \
  --file openid-configuration.json \
  --name .well-known/openid-configuration
```
- Verify that the discovery document is publicly accessible:

```sh
curl -s "https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/.well-known/openid-configuration"
```
- Upload the JWKS document to the Azure blob container:

```sh
az storage blob upload \
  --container-name "${AZURE_STORAGE_CONTAINER}" \
  --file jwks.json \
  --name openid/v1/jwks
```
- Verify that the JWKS document is publicly accessible:

```sh
curl -s "https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/openid/v1/jwks"
```
- Configure the cluster's Authentication Custom Resource `spec.serviceAccountIssuer` field to contain the URL of the Azure blob OIDC endpoint:

```sh
oc patch authentication/cluster \
  --type=json -p '[{"op":"replace","path":"/spec/serviceAccountIssuer","value":"'"https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/"'"}]'
```
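To confirm the patch took effect, the issuer can be read back from the Authentication resource:

```sh
# Should print the Azure blob container endpoint URL configured above.
oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}'
```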
- Wait for kube-apiserver pods to be updated with the new configuration. This can take several minutes:

```sh
watch "oc get pods -n openshift-kube-apiserver | grep kube-apiserver"
```
- Set up environment variables:

```sh
export USER_ASSIGNED_IDENTITY="openshift-ingress-mi"
export FEDERATED_CREDENTIAL="openshift-ingress"
export ISSUER="$(jq -r '.issuer' openid-configuration.json)"
export SUBSCRIPTION_ID="< YOUR SUBSCRIPTION ID >"

# Variables exported previously
export RESOURCE_GROUP="< YOUR RESOURCE GROUP >"
export LOCATION="centralus"
```
- Create a User-Assigned Managed Identity

A User-Assigned Managed Identity will be created for each operator. For the purposes of this POC, we will create a managed identity for the cluster-ingress-operator.

```sh
az identity create --name "${USER_ASSIGNED_IDENTITY}" \
  --resource-group "${RESOURCE_GROUP}" \
  --location "${LOCATION}"
```
- Create a Federated Identity Credential

The Federated Identity Credential ties together the User-Assigned Managed Identity, the OIDC Azure blob endpoint and the operator's service account.

```sh
az identity federated-credential create \
  --identity-name "${USER_ASSIGNED_IDENTITY}" \
  --name "${FEDERATED_CREDENTIAL}" \
  --resource-group "${RESOURCE_GROUP}" \
  --audiences "openshift" \
  --issuer "${ISSUER}" \
  --subject "system:serviceaccount:openshift-ingress-operator:ingress-operator"
```
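The credential can be read back to confirm that the issuer, subject and audiences match the values above:

```sh
# Display the federated credential that was just created.
az identity federated-credential show \
  --identity-name "${USER_ASSIGNED_IDENTITY}" \
  --name "${FEDERATED_CREDENTIAL}" \
  --resource-group "${RESOURCE_GROUP}"
```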
- Assign the User-Assigned Managed Identity the Contributor Role within the scope of the subscription:

```sh
export PRINCIPAL_ID="$(az identity show --name "$USER_ASSIGNED_IDENTITY" --resource-group "$RESOURCE_GROUP" | jq -r .principalId)"
az role assignment create --assignee "${PRINCIPAL_ID}" --role 'Contributor' --scope "/subscriptions/${SUBSCRIPTION_ID}"
```
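The role assignment can be verified by listing assignments for the managed identity's principal:

```sh
# List role assignments for the managed identity at subscription scope.
az role assignment list \
  --assignee "${PRINCIPAL_ID}" \
  --scope "/subscriptions/${SUBSCRIPTION_ID}" \
  --output table
```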
In this section, we will deploy a version of the cluster-ingress-operator which has been updated to authenticate via a client assertion credential using the bound service account token mounted in the ingress-operator pod. The modifications made to the operator can be found here and are based on openshift/cluster-ingress-operator/pull/846.
- Set up environment variables:

```sh
# Variables exported previously
export USER_ASSIGNED_IDENTITY="openshift-ingress-mi"
export RESOURCE_GROUP="< YOUR RESOURCE GROUP >"
```
- Scale down the Cluster Version Operator to avoid the CVO replacing our modified cluster-ingress-operator deployment:

```sh
oc scale --replicas 0 -n openshift-cluster-version deployments/cluster-version-operator
```
- Scale down the Cloud Credential Operator to avoid the CCO replacing our modified credentials secret:

```sh
oc scale --replicas 0 -n openshift-cloud-credential-operator deployments/cloud-credential-operator
```
- Modify the cluster-ingress-operator's existing credential secret to contain the client ID of the User-Assigned Managed Identity as well as to remove the existing client secret. The absence of the client secret indicates that the operator should authenticate via `ClientAssertionCredential` for Workload Identity:

```sh
export ENCODED_CLIENT_ID="$(az identity show --name "$USER_ASSIGNED_IDENTITY" --resource-group "$RESOURCE_GROUP" | jq -r .clientId | base64)"
oc patch secret cloud-credentials -n openshift-ingress-operator \
  --type=json -p '[{"op":"replace","path":"/data/azure_client_id","value":"'"$ENCODED_CLIENT_ID"'"},{"op":"remove","path":"/data/azure_client_secret"}]'
```
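A quick check that the secret now contains the expected keys and client ID:

```sh
# The key list should include azure_client_id and no longer include azure_client_secret.
oc get secret cloud-credentials -n openshift-ingress-operator -o json | jq -r '.data | keys[]'

# Decode the stored client ID and compare it with the managed identity's clientId.
oc get secret cloud-credentials -n openshift-ingress-operator -o jsonpath='{.data.azure_client_id}' | base64 -d
```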
- Deploy the modified version of the cluster-ingress-operator:

```sh
oc apply -f https://gist.githubusercontent.com/abutcher/fb27f879ce17a7a76f4e0ff2a16d4796/raw/95cf91ffa1e4dc6d2326a5756557e425919e4186/02-deployment.yaml
```
- Monitor the logs of the ingress-operator deployment to ensure the container starts and is able to authenticate with Azure services:

```sh
oc logs deployment/ingress-operator -n openshift-ingress-operator -f
```
- Delete the default-wildcard dnsrecord and monitor the ingress-operator deployment logs for reconciliation:

```sh
oc delete dnsrecord default-wildcard -n openshift-ingress-operator
```

Additionally, we can delete the A record for the default-wildcard within the Azure console, force reconciliation by deleting the default-wildcard dnsrecord once more, and ensure that the A record is recreated.
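One way to confirm the record is recreated, sketched here with hypothetical `BASE_DOMAIN_ZONE` and `BASE_DOMAIN_RESOURCE_GROUP` placeholders for the cluster's public DNS zone and the resource group containing it:

```sh
# Check the DNSRecord status conditions reported by the ingress operator.
oc get dnsrecord default-wildcard -n openshift-ingress-operator -o yaml

# Optionally list A records in the cluster's DNS zone and look for the *.apps wildcard.
az network dns record-set a list \
  --zone-name "${BASE_DOMAIN_ZONE}" \
  --resource-group "${BASE_DOMAIN_RESOURCE_GROUP}" \
  --output table
```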