
Kubernetes Authentication with UAA

Recently at Apigee we started using Kubernetes, and while working on securing access to it, we learned a few things we felt could be useful to other Kubernetes consumers. This post discusses how we were able to use CloudFoundry's UAA as an OpenID Connect Provider for Kubernetes authentication. If you are not using UAA but are using another OAuth 2.0 provider for your authentication needs, stick around, because this post could be useful to you as well.

Note: This post provides background on the process we took and how we successfully wired things up. If you do not care about this and just want to know the steps required to use UAA, and possibly other OAuth 2.0 providers, as an OIDC provider for Kubernetes, please skip to the Cliffs Notes section.

Kubernetes Authentication

When starting down the path of securing access to Kubernetes, our ultimate goal was to use our existing single sign-on solution. Upon evaluating the different options Kubernetes offers for cluster authentication, only one seemed plausible: the OpenID Connect (OIDC) authentication provider. OIDC is basically a "simple identity layer on top of the OAuth 2.0 protocol," and it just so happens that UAA is an OAuth 2.0 provider with limited OIDC support. Even with only limited OIDC support, this seemed like as good a place as any to start.

Our first step was to see how far we could get by configuring Kubernetes to use our UAA server as an OIDC provider. To do this, we passed the following command-line options to the Kubernetes API Server (based on the OpenID Connect Tokens section of the Kubernetes Authentication guide); a sketch of the resulting invocation follows the list:

  • --oidc-issuer-url: This tells Kubernetes where your OIDC server is
  • --oidc-client-id: This tells Kubernetes the OAuth client application to use
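
For illustration, the relevant part of the API Server invocation ends up looking something like this; this is a sketch, and the issuer URL and client ID are placeholders rather than our real values:

# Relevant OIDC flags on the Kubernetes API Server (all other flags omitted)
kube-apiserver \
  --oidc-issuer-url=https://uaa.example.com \
  --oidc-client-id=kubernetes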

With these options in place, the API Server failed to start, and we saw errors like this: Failed to fetch provider config, trying again in 3s: invalid character '<' looking for beginning of value. After some digging in the Kubernetes and go-oidc sources, we found that on startup the Kubernetes API Server expects to find a document located at $OIDC_ISSUER_URL/.well-known/openid-configuration. What kind of document is this, and what are its contents? After some Googling, we found that it is an OpenID Provider Metadata document and that it is used as part of OpenID Connect Discovery, something UAA itself does not support.
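
You can reproduce the failing request yourself; the invalid character '<' in the error suggests UAA was returning an HTML page rather than JSON:

# Fetch the OpenID Provider Configuration exactly as the API Server does at startup
curl $OIDC_ISSUER_URL/.well-known/openid-configuration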

OpenID Connect Discovery

Instead of giving up, we decided to look into what it would take to implement OIDC Discovery, starting with $OIDC_ISSUER_URL/.well-known/openid-configuration. Reading the Obtaining OpenID Provider Configuration Information portion of the OIDC Discovery specification, we learned that this URL is used by OIDC clients to obtain the OpenID Provider Configuration. Now that we understood the structure of the URL Kubernetes was looking for, we needed to understand the structure of the document it returns.

As expected, the OIDC Discovery specification explains this in the OpenID Provider Metadata section. Since this post is not an OpenID Connect tutorial, I will instead point you to the metadata published by public OpenID Connect Providers for reference; Google's, for example, is served at https://accounts.google.com/.well-known/openid-configuration.

Based on the OIDC Discovery specification and the various examples we found online from public OpenID Connect Providers, we felt confident that we could create an OpenID Provider Metadata document for our UAA server, and that was our next step.
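
As a sketch only, here is one way to create a minimal document for a UAA server. The endpoint paths reflect UAA's standard /oauth/authorize, /oauth/token, and /token_keys endpoints, the hostname is a placeholder, and RS256 is an assumption you should verify against your deployment:

# Write a minimal OpenID Provider Metadata document (verify every value against your UAA)
cat > openid-configuration <<'EOF'
{
  "issuer": "https://uaa.example.com",
  "authorization_endpoint": "https://uaa.example.com/oauth/authorize",
  "token_endpoint": "https://uaa.example.com/oauth/token",
  "jwks_uri": "https://uaa.example.com/token_keys",
  "response_types_supported": ["code"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256"]
}
EOF

As you will see below, we ultimately had to point jwks_uri at a corrected copy of the key set.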

Note: Since UAA does not support OpenID Connect Discovery, we had to serve the OpenID Provider Metadata document ourselves, so you will likely need to solve this as well.
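
How you serve the document is up to you. As one hypothetical approach, if an nginx reverse proxy already sits in front of UAA, a single exact-match location serving a static file would do:

# nginx: serve the static metadata document at the well-known path (sketch)
location = /.well-known/openid-configuration {
    default_type application/json;
    alias /etc/nginx/uaa/openid-configuration.json;
}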

JSON Web Tokens and Signing

Once we had our OpenID Provider Metadata document served at $OIDC_ISSUER_URL/.well-known/openid-configuration, we restarted the API Server; this time there were no OIDC-related errors and the API Server started successfully. The next step was to get a token and attempt to authenticate to Kubernetes using said token. How you get your token will depend on your environment, but UAA users can use the uaac CLI to do this like so:

# Set the uaac target (The UAA server location)
uaac target $OIDC_ISSUER_URL

# Get a user token from UAA
uaac token authcode get

# Print the uaac contexts
uaac contexts

The last command will print your uaac contexts, one of which should match your target server. Once you find it, the token you need is in the access_token property.
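
If you would rather not copy the token by hand, something like the following can capture it; this is a sketch that assumes the YAML-style output of uaac context, so adjust the parsing to your uaac version:

# Pull the access_token value out of the current uaac context (parsing is a guess)
TOKEN=$(uaac context | awk '/access_token:/ {print $2}')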

Once we have the token from UAA, we need to create or update our Kubernetes client (kubectl) context to contain the newly retrieved token, like so:

# Create a new kubectl cluster configuration
kubectl config set-cluster $CLUSTER_NAME --server=$K8S_SERVER_URL --certificate-authority=$K8S_CA_CRT

# Configure a context user (This user IS NOT the username used to authenticate to Kubernetes; that comes from your token)
kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$USER_NAME

# Configure the context user to use the token we just retrieved
kubectl config set-credentials $USER_NAME --token=$TOKEN

# Configure kubectl to use the context we just created
kubectl config use-context $CONTEXT_NAME

Here is an example:

Note: It is only a coincidence that we use kube-solo-secure for all names in the examples below. That is not a requirement and is done purely to make cleaning things up simpler.

kubectl config set-cluster kube-solo-secure --server=https://kube-solo-secure --certificate-authority=/tmp/ca.crt --embed-certs

kubectl config set-context kube-solo-secure --cluster=kube-solo-secure --user=kube-solo-secure

kubectl config set-credentials kube-solo-secure --token="$TOKEN"

kubectl config use-context kube-solo-secure

Each of the commands above should output [cluster|context|user] "kube-solo-secure" set., except for kubectl config use-context, which should output switched to context "kube-solo-secure"..

Once this was done, we were ready to see how much further this got us, so we ran kubectl get pods. Unfortunately, we got this error: error: you must be logged in to the server (the server has asked for the client to provide credentials). Looking into the API Server logs, we saw the following: Unable to authenticate the request due to an error: [oidc: failed syncing KeySet: illegal base64 data at input byte 19, crypto/rsa: verification error].

After a great deal of research and digging around, we found that the JSON Web Key Set document, whose location is set via the jwks_uri property in the OpenID Provider Metadata document, was invalid. That is when we ran into our first incompatibility with UAA's OIDC support.
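
To see what the API Server was choking on, you can fetch the key set yourself (UAA serves its keys at /token_keys):

# Fetch the JSON Web Key Set that the jwks_uri points at
curl $OIDC_ISSUER_URL/token_keys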

UAA's Incompatibility with OIDC

A JSON Web Key Set (JWKS) is used to verify JSON Web Tokens (JWTs), and the JSON Web Key (JWK) specification mandates that the RSA modulus used to verify signatures (the n property of each key) be base64url encoded. The modulus in the keys provided by UAA was only base64 encoded, so UAA is not encoding its keys appropriately per the JWK specification. This led to us filing a bug and coming up with a workaround: much like hosting our own /.well-known/openid-configuration document alongside UAA, we created a corrected version of UAA's key set document (/token_keys) at a new path (/k8s_token_keys) and updated our OpenID Provider Metadata document so that jwks_uri points at the new document.
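
The fix itself is mechanical, since base64url is simply base64 with a URL-safe alphabet and no padding. Here is a sketch of the conversion for a single modulus value; the N_BASE64 variable is hypothetical:

# base64 -> base64url: swap '+' and '/' for '-' and '_', then drop the '=' padding
echo "$N_BASE64" | tr '+/' '-_' | tr -d '='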

Once this was in place, we re-ran kubectl get pods and this time got a different error: JWT claims invalid: invalid claim value: 'iss'. expected=$ISSUER_URL, found=$ISSUER_URL/oauth/token., crypto/rsa: verification error (notice the extra /oauth/token).

Note: At this point I would like to point out that while we were making progress, we were beginning to think we would continue down this rabbit hole forever.

The good news was that this error was easy to understand: per the OpenID Provider Metadata documentation, the iss claim value MUST match the issuer value of the OpenID Provider Metadata document. Unfortunately, this is not something you can toggle within UAA, which led to a pull request. The purpose of this PR was to get the ball rolling on fixing the issue officially, and it contains the exact changes we made to our custom UAA server. Once we deployed the new version of UAA with the PR changes, lo and behold, kubectl get pods worked as expected.
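
If you want to inspect the iss claim in your own token, you can decode the JWT payload by hand. This is a sketch assuming a bash-like shell; JWT segments are base64url encoded, so we restore the standard alphabet and padding before decoding:

# The payload is the second dot-separated segment of the JWT
PAYLOAD=$(echo "$TOKEN" | cut -d. -f2 | tr '_-' '/+')

# Restore '=' padding so base64 will accept it (use `base64 -D` on macOS)
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 -d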

Alternatives

We realize that the information above might not sit well with some. Building a custom version of UAA to help it implement OIDC just for Kubernetes authentication might seem like a bit much, not to mention that it is just one of a handful of steps that work around UAA's lack of OIDC support. If that is the case, you have two options:

  1. Wait until UAA officially supports OIDC
  2. Use dex's UAA support

Cliffs Notes

The explanation above discusses how we got to a working deployment of UAA being used for Kubernetes authentication via OIDC. To summarize, here is a list of the required steps:

  1. Patch UAA (using this PR: cloudfoundry/uaa#425) and rebuild to keep /oauth/token from being appended to your iss claim
  2. Create a version of $UAA_SERVER/token_keys that has the n properties base64url encoded instead of just base64 encoded
  3. Create an OpenID Provider Metadata document based on the OpenID Provider Metadata and/or the examples linked to above
  4. Serve your OpenID Provider Metadata and JWKS documents at $UAA_SERVER/.well-known/openid-configuration and $UAA_SERVER/k8s_token_keys respectively (the latter URL is just an example; you can use whatever path you want so long as it matches the jwks_uri property in your OpenID Provider Metadata document)
  5. Create an OAuth client application in UAA that has the appropriate scope (openid); see the sketch after this list
  6. Update the Kubernetes API Server options to have the --oidc-issuer-url option set to the $UAA_SERVER portion of the URLs mentioned in steps 2 and 4
  7. Update the Kubernetes API Server options to have the --oidc-client-id option set to the UAA OAuth client application created in step 5
  8. Set any other OIDC-related Kubernetes API Server options you need beyond those covered in steps 6 and 7
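
As an illustration of step 5, the OAuth client could be created with uaac along these lines; the client name, secret, and redirect URI are placeholders, and you should confirm the flags against your uaac version:

# Create an OAuth client for Kubernetes with the openid scope (all values are placeholders)
uaac client add kubernetes \
  --scope openid \
  --authorized_grant_types authorization_code,refresh_token \
  --redirect_uri https://example.com/callback \
  --secret changeme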

Conclusion

In the end, our goals were met and we were able to successfully use our single sign-on solution for Kubernetes authentication. While it would be ideal if UAA supported OIDC and we could just point Kubernetes at UAA and call it good, the steps above are easy to repeat, safe, and have allowed us to get what we need right now.
