@timroster
Last active August 27, 2021 18:23
Scenarios to configure TLS on Cloud Pak for Data running on Red Hat OpenShift on IBM Cloud

Configuring TLS with Cloud Pak for Data on Red Hat OpenShift on IBM Cloud

Red Hat OpenShift provides three ways, through the route resource, to create TLS-secured connections to applications running on Kubernetes.

  1. edge - works like Kubernetes ingress: certificate data (certificate + key) is used to terminate TLS at the pod of the ingress controller, the openshift-router (HAProxy-based), which then forwards the traffic to a Service in the cluster.
  2. reencrypt - uses a public certificate, like edge, to terminate TLS, then uses a (potentially private) certificate to build a new TLS session to the pods behind the Service.
  3. passthrough - the ingress controller pod inspects the SNI of a TLS connection, uses it to determine the target host, and forwards the traffic directly to the pods behind the Service.
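As a sketch, each route type can be created with `oc create route` (the service name, hostname, and certificate file names below are hypothetical):

```shell
# edge: TLS terminates at the router; certificate and key are supplied in the route
oc create route edge myapp-edge --service=myapp-svc \
  --cert=tls.crt --key=tls.key --hostname=myapp.example.com

# reencrypt: TLS terminates at the router, then a new TLS session is built to the pods;
# --dest-ca-cert is the CA that signed the pod-side (possibly private) certificate
oc create route reencrypt myapp-reencrypt --service=myapp-svc \
  --cert=tls.crt --key=tls.key --dest-ca-cert=pod-ca.crt

# passthrough: the router forwards the TLS stream untouched, routing on SNI;
# the pods must present their own certificate
oc create route passthrough myapp-pt --service=myapp-svc --hostname=myapp.example.com
```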

In cases 2 and 3, the pod is expected to listen on a port for secured HTTPS sessions using some certificate data. By default, when Cloud Pak for Data (CPD) installs, the ibm-nginx pod is configured with a secret containing certificate data that is unique to the deployment and not signed by a public CA. Also by default, the web UI for Cloud Pak for Data is exposed through a passthrough route, so accessing it results in certificate warnings.

Red Hat OpenShift on IBM Cloud (hereafter, ROKS) includes an automatically managed TLS certificate, signed by Let's Encrypt, for its own use on the ingress domain; it secures traffic to the OpenShift console and the OpenShift monitoring (Grafana) dashboards. It is entirely possible to use this same certificate for Cloud Pak for Data. Since version 4.5, the ROKS ingress certificate data is managed in an IBM Certificate Manager instance that is created at the same time as the OpenShift cluster. A secret of type kubernetes.io/tls in the openshift-ingress project, named after the host part of the cluster's ingress domain, contains this certificate data. A controller in the cluster uses annotations on this secret to keep it synchronized as the certificate is renewed by IBM Certificate Manager (Let's Encrypt certificates expire every 90 days).
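The synced secret and its Certificate Manager annotations can be inspected directly (the secret name below is illustrative; yours matches the host part of your cluster's ingress subdomain):

```shell
# List TLS secrets in openshift-ingress; one of them holds the ingress certificate
oc -n openshift-ingress get secrets --field-selector type=kubernetes.io/tls

# The annotations on the secret link it back to IBM Certificate Manager for renewal
oc -n openshift-ingress get secret mycluster-abcd1234 -o yaml
```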

The Cloud Pak for Data documentation recommends that, to use custom certificate data (which would include the certificate for the OpenShift default ingress domain), the certificate data be placed in an Opaque secret called external-tls-secret in the namespace of the CPD installation. Once the secret is added, a reload script is run in the pods that provide the entry point for the web UI. The documentation's examples work with standalone certificate files, but to use the certificate from IBM Certificate Manager, a slightly different procedure aligns better with automatic renewal. There are two steps: one that is run a single time, and one that should be run periodically to propagate renewed certificate data.
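For reference, the documented manual procedure amounts to creating the secret from certificate files by hand (the PEM file names here are hypothetical; the key names cert.crt and cert.key are an assumption matching what the conversion step below produces):

```shell
# Run from the CPD installation namespace, with certificate and key files on hand
oc create secret generic external-tls-secret \
  --from-file=cert.crt=./fullchain.pem \
  --from-file=cert.key=./privkey.pem
```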

  1. Use the IBM Cloud CLI to add the certificate data for the ingress domain to the CPD namespace:

    ibmcloud ks ingress secret create --cert-crn ${CERTIFICATE_CRN} --cluster ${CLUSTER_NAME} --name ibmcertman-tls-secret --namespace ${CPD_NAMESPACE}

    where CERTIFICATE_CRN is the CRN of the certificate in the IBM Certificate Manager service, CLUSTER_NAME is the name of the ROKS cluster, and CPD_NAMESPACE is the installation namespace of CPD. The annotations added by the ibmcloud ks ingress secret create command ensure that the secret stays up-to-date as the certificate data is rotated.

  2. The second set of commands should be run on a periodic basis from the CPD namespace, probably best as a Job running under a service account with sufficient privilege through RBAC:

    # patch the certificate from Certificate Manager into the form CPD expects
    # (delete any previous copy first: oc create fails if the secret already exists)
    oc delete secret external-tls-secret --ignore-not-found
    oc get secret ibmcertman-tls-secret -o yaml | sed 's/^  tls\./  cert./' | sed 's/ibmcertman-tls-secret/external-tls-secret/' | sed 's/kubernetes.io\/tls/Opaque/' | oc create -f -
    oc patch secret external-tls-secret --type='json' -p='[{"op": "remove", "path": "/metadata/annotations"}]'

    # reload ibm-nginx so it picks up the new certificate
    for i in $(oc get pods | grep ibm-nginx | cut -f1 -d' '); do oc exec ${i} -- /scripts/reload.sh; done
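One way to run these commands periodically is a CronJob. The following is only a sketch: the resource name cpd-cert-sync, the cert-sync service account (which must have RBAC to read, create, and patch secrets and to exec into the ibm-nginx pods), and the CLI image are all assumptions, and older clusters (before Kubernetes 1.21) need apiVersion batch/v1beta1 instead of batch/v1.

```shell
oc -n ${CPD_NAMESPACE} apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cpd-cert-sync       # hypothetical name
spec:
  schedule: "0 3 * * 0"     # weekly; Let's Encrypt renews well within 90 days
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cert-sync    # hypothetical SA with the needed RBAC
          restartPolicy: OnFailure
          containers:
          - name: sync
            image: quay.io/openshift/origin-cli:latest   # any image bundling oc
            command: ["/bin/bash", "-c"]
            args:
            - |
              oc delete secret external-tls-secret --ignore-not-found
              oc get secret ibmcertman-tls-secret -o yaml \
                | sed 's/^  tls\./  cert./' \
                | sed 's/ibmcertman-tls-secret/external-tls-secret/' \
                | sed 's/kubernetes.io\/tls/Opaque/' \
                | oc create -f -
              oc patch secret external-tls-secret --type=json \
                -p='[{"op": "remove", "path": "/metadata/annotations"}]'
              for i in $(oc get pods -l component=ibm-nginx -o name); do
                oc exec ${i} -- /scripts/reload.sh
              done
EOF
```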

Once complete, the application pods for CPD will use the same ingress certificate data as the default domain for the cluster, and accessing the default passthrough route will now present a CA-signed certificate. However, it can reasonably be argued that periodically converting the secret from tls to Opaque and running the reload scripts in the ibm-nginx pods is cumbersome and potentially error-prone.
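The tls-to-Opaque rewrite that the periodic step performs can be tried locally on a sample secret (the YAML below is a made-up stand-in with placeholder base64 data, shaped like `oc get secret -o yaml` output):

```shell
# Sample of what oc get secret ibmcertman-tls-secret -o yaml roughly returns
cat > /tmp/sample-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: ibmcertman-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: Zm9vCg==
  tls.key: YmFyCg==
EOF

# Same sed chain as the periodic step: rename the data keys, the secret, and the type
sed 's/^  tls\./  cert./' /tmp/sample-secret.yaml \
  | sed 's/ibmcertman-tls-secret/external-tls-secret/' \
  | sed 's/kubernetes.io\/tls/Opaque/'
```

The output is an Opaque secret named external-tls-secret whose data keys are cert.crt and cert.key, which is the shape CPD expects.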

An alternative approach is to update the default passthrough route provided by the CPD installer to be a reencrypt route using a simple script run after installing CPD. This script is attached to this gist and also copied here. It is invoked with a single argument - the namespace of the CPD installation. It copies the self-signed certificate from the ibm-nginx pod and uses it as the destinationCACertificate in the reencrypt route.

#!/bin/bash
# Usage: ./reencrypt_route.sh <cpd-namespace>
NAMESPACE=$1
[ -z "$NAMESPACE" ] && { echo "Usage: $0 <cpd-namespace>"; exit 1; }
nginxpod=$(oc -n $NAMESPACE get pod -l component=ibm-nginx -o jsonpath='{.items[0].metadata.name}')
# copy the self-signed certificate used by ibm-nginx out of the pod
oc -n $NAMESPACE cp $nginxpod:/etc/nginx/config/ssl/cert.crt /tmp/cert.crt || exit 1
# replace the default passthrough route with a reencrypt route that trusts that certificate
oc -n $NAMESPACE delete route ${NAMESPACE}-cpd
oc -n $NAMESPACE create route reencrypt ${NAMESPACE}-cpd --service=ibm-nginx-svc --port=ibm-nginx-https-port --dest-ca-cert=/tmp/cert.crt
rm /tmp/cert.crt

It is much simpler to run:

chmod +x reencrypt_route.sh
./reencrypt_route.sh ${CPD_NAMESPACE}

than to follow the more involved steps above. However, with a reencrypt route the traffic is decrypted at the ingress controller before being re-encrypted to the pod, so customers who are sensitive to the issues this can pose in multi-tenant clusters - not having a single TLS session all the way to the ibm-nginx pod - will tend to prefer the former approach over the reencrypt route.
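With either approach, the certificate actually being presented can be checked from a workstation (sketch; assumes a logged-in oc session and the default route name used above):

```shell
# Fetch the route host, then inspect the issuer and expiry of the served certificate
CPD_HOST=$(oc -n ${CPD_NAMESPACE} get route ${CPD_NAMESPACE}-cpd -o jsonpath='{.spec.host}')
echo | openssl s_client -connect ${CPD_HOST}:443 -servername ${CPD_HOST} 2>/dev/null \
  | openssl x509 -noout -issuer -enddate
```

If the sync (or reencrypt) worked, the issuer will be Let's Encrypt rather than the self-signed CPD certificate.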
