Proof of Concept: Azure Managed Identity && Federated Credentials for OpenShift Operators

In this proof of concept, we will configure an Azure-based OpenShift cluster to generate bound service account tokens that Azure can trust and that cluster operators can use to authenticate to Azure API services.

To begin, we need an existing Azure-based OpenShift cluster.

We will then:

  • Extract the cluster's ServiceAccount public signing key which will be used to generate OIDC discovery and JSON Web Key Set (JWKS) documents.
  • Create an Azure blob storage container and upload the OIDC discovery and JWKS documents.
  • Configure cluster authentication with a serviceAccountIssuer set to the publicly available Azure blob container endpoint URL.
  • Create a User-Assigned Managed Identity (MI) for the cluster-ingress-operator.
  • Create a Federated Identity Credential within the Managed Identity for the cluster-ingress-operator's service account.
  • Modify the ingress operator's credential secret to contain the clientID of the User-Assigned Managed Identity.
  • Deploy a version of the cluster-ingress-operator which has been configured to authenticate with the mounted bound service account token (a curl sketch of this token exchange follows this list).
  • Demonstrate that the cluster-ingress-operator is able to recreate the wildcard A record in Azure DNS, authenticating with a Federated Identity Credential.
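
Mechanically, the client assertion flow works like this: the operator presents its projected service account token to Azure AD as a client assertion in an OAuth 2.0 client credentials request and receives an Azure access token back. Below is a rough curl sketch of that exchange, usable once everything in this document is wired up. TENANT_ID and CLIENT_ID are placeholders for the Azure AD tenant and the Managed Identity's client ID created later in this document, and the sketch assumes an oc new enough to support oc create token.

    # Mint a bound token for the ingress-operator service account. The audience must
    # match the Federated Identity Credential created later in this document.
    SA_TOKEN="$(oc create token ingress-operator -n openshift-ingress-operator --audience openshift)"

    # Exchange the bound token for an Azure AD access token. TENANT_ID and CLIENT_ID
    # are placeholders for the Azure AD tenant and the Managed Identity's client ID.
    curl -s -X POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
      --data-urlencode "grant_type=client_credentials" \
      --data-urlencode "client_id=${CLIENT_ID}" \
      --data-urlencode "scope=https://management.azure.com/.default" \
      --data-urlencode "client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer" \
      --data-urlencode "client_assertion=${SA_TOKEN}" | jq .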

Managed Identity Infrastructure

Recent versions of the Azure CLI (>= 2.42.0), azwi (>= v0.11.0), and jq are needed for the subsequent steps.
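
The installed versions can be checked up front; a small sketch (it assumes azwi exposes a version subcommand):

    az version --query '"azure-cli"' --output tsv
    azwi version
    jq --version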

Note that many of the steps below would be automated by ccoctl in a CCO implementation.

Create OIDC Provider

  1. Set up environment variables

    We can either use the resource group in which the OpenShift cluster infrastructure already resides or create a new resource group. Either way, configure the resource group in which we will create the Managed Identity infrastructure.

    export RESOURCE_GROUP="< YOUR RESOURCE GROUP >"
    export LOCATION="centralus"
    export AZURE_STORAGE_ACCOUNT="oidcissuer$(openssl rand -hex 4)"
    export AZURE_STORAGE_CONTAINER="oidc-test"
  2. Extract the cluster's ServiceAccount public signing key to ./serviceaccount-signer.public

    oc get configmap --namespace openshift-kube-apiserver bound-sa-token-signing-certs \
                     --output json | jq --raw-output '.data["service-account-001.pub"]' > serviceaccount-signer.public
  3. Generate the OIDC discovery document ./openid-configuration.json

    cat <<EOF > openid-configuration.json
    {
      "issuer": "https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/",
      "jwks_uri": "https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/openid/v1/jwks",
      "response_types_supported": [
        "id_token"
      ],
      "subject_types_supported": [
        "public"
      ],
      "id_token_signing_alg_values_supported": [
        "RS256"
      ]
    }
    EOF
  4. Generate the JWKS document ./jwks.json

    azwi jwks --public-keys ./serviceaccount-signer.public --output-file jwks.json
  5. Create an Azure blob storage account and container

    az storage account create --resource-group "${RESOURCE_GROUP}" --name "${AZURE_STORAGE_ACCOUNT}"
    az storage container create --name "${AZURE_STORAGE_CONTAINER}" --public-access container
  6. Upload the OIDC discovery document to the Azure blob container

    az storage blob upload \
      --container-name "${AZURE_STORAGE_CONTAINER}" \
      --file openid-configuration.json \
      --name .well-known/openid-configuration
  7. Verify that the discovery document is publicly accessible

    curl -s "https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/.well-known/openid-configuration"
  8. Upload the JWKS document to the Azure blob container

    az storage blob upload \
       --container-name "${AZURE_STORAGE_CONTAINER}" \
       --file jwks.json \
       --name openid/v1/jwks
  9. Verify that the JWKS document is publicly accessible

    curl -s "https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/openid/v1/jwks"
  10. Configure the cluster's Authentication Custom Resource spec.serviceAccountIssuer field to contain the URL of the Azure blob OIDC endpoint

    oc patch authentication/cluster \
       --type=json -p '[{"op":"replace","path":"/spec/serviceAccountIssuer","value":"'"https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/"'"}]'
  11. Wait for kube-apiserver pods to be updated with the new configuration. This can take several minutes

    watch "oc get pods -n openshift-kube-apiserver | grep kube-apiserver"
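
    As a quick sanity check, the issuer recorded in the cluster Authentication config and the issuer published in the discovery document should be identical:

    oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}{"\n"}'
    curl -s "https://${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/${AZURE_STORAGE_CONTAINER}/.well-known/openid-configuration" | jq -r '.issuer'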

Create Managed Identity and Federated Identity Credential

  1. Set up environment variables

    export USER_ASSIGNED_IDENTITY="openshift-ingress-mi"
    export FEDERATED_CREDENTIAL="openshift-ingress"
    export ISSUER="$(jq -r '.issuer' openid-configuration.json)"
    export SUBSCRIPTION_ID="< YOUR SUBSCRIPTION ID >"
    
    # Variables exported previously
    export RESOURCE_GROUP="< YOUR RESOURCE GROUP >"
    export LOCATION="centralus"
  2. Create User-Assigned Managed Identity

    A User-Assigned Managed Identity will be created for each operator. For the purposes of this POC, we will create a managed identity for the cluster-ingress-operator.

    az identity create --name "${USER_ASSIGNED_IDENTITY}" \
       --resource-group "${RESOURCE_GROUP}" \
       --location "${LOCATION}"
  3. Create a Federated Identity Credential

    The Federated Identity Credential ties together the User-Assigned Managed Identity, the OIDC issuer hosted in the Azure blob container, and the operator's service account.

    az identity federated-credential create \
       --identity-name "${USER_ASSIGNED_IDENTITY}" \
       --name "${FEDERATED_CREDENTIAL}" \
       --resource-group "${RESOURCE_GROUP}" \
       --audiences "openshift" \
       --issuer "${ISSUER}" \
       --subject "system:serviceaccount:openshift-ingress-operator:ingress-operator"
  4. Assign the User-Assigned Managed Identity the Contributor Role within the scope of the subscription

    export PRINCIPAL_ID="$(az identity show --name "$USER_ASSIGNED_IDENTITY" --resource-group "$RESOURCE_GROUP" | jq -r .principalId)"
    az role assignment create --assignee "${PRINCIPAL_ID}" --role 'Contributor' --scope "/subscriptions/${SUBSCRIPTION_ID}"
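
    As a verification sketch (assuming the variables above are still exported), we can show the Federated Identity Credential and list the identity's role assignments:

    az identity federated-credential show \
       --identity-name "${USER_ASSIGNED_IDENTITY}" \
       --name "${FEDERATED_CREDENTIAL}" \
       --resource-group "${RESOURCE_GROUP}" | jq '{issuer, subject, audiences}'
    az role assignment list --assignee "${PRINCIPAL_ID}" \
       --scope "/subscriptions/${SUBSCRIPTION_ID}" | jq -r '.[].roleDefinitionName'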

Deploy the Modified cluster-ingress-operator

In this section, we will deploy a version of the cluster-ingress-operator which has been updated to authenticate via a client assertion credential using the bound service account token mounted in the ingress-operator pod. Modifications made to the operator can be found here and are based on openshift/cluster-ingress-operator/pull/846.
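
Once the modified operator is deployed (step 5 below), one way to see how the bound token reaches the pod is to list the projected ServiceAccountToken sources in its deployment. This is only a sketch; the exact volume name and mount path are defined by the modified deployment and are not reproduced here.

    oc get deployment ingress-operator -n openshift-ingress-operator -o json \
       | jq '[.spec.template.spec.volumes[]? | select(.projected != null)
              | .projected.sources[]? | select(.serviceAccountToken != null)
              | .serviceAccountToken]'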

  1. Set up environment variables

    # Variables exported previously
    export USER_ASSIGNED_IDENTITY="openshift-ingress-mi"
    export RESOURCE_GROUP="< YOUR RESOURCE GROUP >"
  2. Scale down the Cluster Version Operator to avoid the CVO replacing our modified cluster-ingress-operator deployment

    oc scale --replicas 0 -n openshift-cluster-version deployments/cluster-version-operator
  3. Scale down the Cloud Credential Operator to avoid the CCO replacing our modified credentials secret

    oc scale --replicas 0 -n openshift-cloud-credential-operator deployments/cloud-credential-operator
  4. Modify the cluster-ingress-operator's existing credential secret to contain the client ID of the User-Assigned Managed Identity as well as to remove the existing client secret. The absence of the client secret indicates that the operator should authenticate via ClientAssertionCredential for Workload Identity

    export ENCODED_CLIENT_ID="$(az identity show --name "$USER_ASSIGNED_IDENTITY" --resource-group "$RESOURCE_GROUP" | jq -r .clientId | tr -d '\n' | base64)"
    oc patch secret cloud-credentials -n openshift-ingress-operator \
       --type=json -p '[{"op":"replace","path":"/data/azure_client_id","value":"'"$ENCODED_CLIENT_ID"'"},{"op":"remove","path":"/data/azure_client_secret"}]'
  5. Deploy the modified version of the cluster-ingress-operator

    oc apply -f https://gist.githubusercontent.com/abutcher/fb27f879ce17a7a76f4e0ff2a16d4796/raw/95cf91ffa1e4dc6d2326a5756557e425919e4186/02-deployment.yaml
  6. Monitor the logs of the ingress-operator deployment to ensure the container starts and is able to authenticate with Azure services

    oc logs deployment/ingress-operator -n openshift-ingress-operator -f
  7. Delete the default-wildcard dnsrecord and monitor the ingress-operator deployment logs for reconciliation.

    oc delete dnsrecord default-wildcard -n openshift-ingress-operator

    Additionally, we can delete the A record for the default-wildcard in the Azure portal, force reconciliation by deleting the default-wildcard dnsrecord once more, and ensure that the A record is recreated.
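
    To confirm the wildcard record was republished, we can inspect the dnsrecord and list the A records in the cluster's public DNS zone. The zone name and its resource group below are cluster-specific placeholders, not values defined elsewhere in this gist:

    oc get dnsrecord default-wildcard -n openshift-ingress-operator -o yaml
    az network dns record-set a list \
       --resource-group "< YOUR DNS ZONE RESOURCE GROUP >" \
       --zone-name "< YOUR DNS ZONE >" \
       --query "[].name" --output tsv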
