Step-by-step - setting up private console endpoints for Red Hat OpenShift on IBM Cloud

Setting up private endpoints and dashboard for VPN to VPC

Begin with a Red Hat OpenShift on IBM Cloud (ROKS) cluster that has only private endpoints (public endpoints disabled). Private endpoints resolve in DNS to IP addresses provided by Private Service Endpoints, which typically have a first octet of 166. Although the Private Service Endpoints are reachable through the implicit internal router available to VPC hosts, these addresses are not routed over VPN connections to a VPC. Therefore, to manage this cluster from outside the VPC, an additional load-balanced service will be added for Kubernetes API access. For more information, see Accessing VPC clusters through the private service endpoint in the IBM Cloud documentation.

The most straightforward way to initially manage a VPC-based ROKS cluster is to add a Linux VM to one of the subnets where the workers reside, and install the ibmcloud and oc CLIs on that VM along with the container-service and infrastructure-service CLI plugins. These instructions are written to be run from that VM. To begin, ensure that the RBAC role and permissions of your IAM user are applied to the OpenShift cluster. This can be accomplished by using the web UI to launch a session to the OpenShift console, or by running the ibmcloud oc cluster config command as the user or service ID that will interact with the cluster API endpoint.
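As a minimal sketch of that CLI setup (assuming the ibmcloud CLI is already installed on the VM; the -f flag only skips the confirmation prompt):

ibmcloud plugin install container-service -f
ibmcloud plugin install infrastructure-service -f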

ibmcloud iam api-key-create <name>
ibmcloud login --apikey <API_key>
ibmcloud oc cluster config -c <cluster_name_or_ID>

Save a copy of the API key that was provided; it cannot be retrieved again. In this example, the environment variable IBM_API_KEY has been set to that value to minimize typing.
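For example (the value shown is just a placeholder for your saved key):

export IBM_API_KEY=<API_key>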

Configuration to perform from VM in the VPC

Begin by getting the cluster's private masterURL hostname and port:

$ ibmcloud oc cluster get --cluster my-private-cluster --output json | jq -r '.masterURL'
https://c104-e.private.us-east.containers.cloud.ibm.com:32359

In this example, the port is 32359 and the master hostname is c104-e.private.us-east.containers.cloud.ibm.com. Customize the following template and save it as oc-api-via-nlb.yaml, replacing <private_service_endpoint_port> with the port from the previous step.

apiVersion: v1
kind: Service
metadata:
  name: oc-api-via-nlb
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: private
  namespace: default
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: <private_service_endpoint_port>
    targetPort: <private_service_endpoint_port>
---
kind: Endpoints
apiVersion: v1
metadata:
  name: oc-api-via-nlb
  namespace: default
subsets:
  - addresses:
      - ip: 172.20.0.1
    ports:
      - port: 2040   
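If you prefer to fill in the port programmatically, here is a minimal sketch, assuming the template above is saved as oc-api-via-nlb.yaml.template (a hypothetical file name) with the literal placeholder left in place:

PORT=$(ibmcloud oc cluster get --cluster my-private-cluster --output json | jq -r '.masterURL' | awk -F: '{print $3}')
sed "s/<private_service_endpoint_port>/${PORT}/g" oc-api-via-nlb.yaml.template > oc-api-via-nlb.yaml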

Apply the yaml with:

oc apply -f oc-api-via-nlb.yaml

After applying, check the status of the service with oc get svc oc-api-via-nlb. After a short while, the additional load balancer will be created and its hostname will be displayed:

oc get svc oc-api-via-nlb
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP                           PORT(S)           AGE
oc-api-via-nlb   LoadBalancer   172.21.181.211   ab8dcdef-us-east.lb.appdomain.cloud   32359:32750/TCP   90s

Use dig or another tool to determine the IP addresses associated with the load balancer DNS name. These will be IPs within the VPC, which can be carried over the VPN connection.
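For example, using the hostname from the sample output above (your hostname will differ):

dig +short ab8dcdef-us-east.lb.appdomain.cloud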

You will also need to know the endpoint hostname and port that OpenShift uses for OAuth token generation. Query the master URL to find this endpoint:

$ curl -sk -XGET  -H "X-Csrf-Token: 1" 'https://c104-e.private.us-east.containers.cloud.ibm.com:32359/.well-known/oauth-authorization-server' | grep token_endpoint
"token_endpoint": "https://c104-e.private.us-east.containers.cloud.ibm.com:30652/oauth/token"

The hostname is the same, but the OAuth endpoint runs on a different port (30652 in this case).

Use dig to find the IP addresses for this hostname:

dig +short c104-e.private.us-east.containers.cloud.ibm.com
c104.private.us-east.containers.cloud.ibm.com.
prod-us-east-tugboat-883478.us-east.serviceendpoint.cloud.ibm.com.
166.9.22.43
166.9.24.35
166.9.20.80

Create another yaml file that adds a service in the cluster connecting to the OAuth service using these IP addresses and the port. Call this file oauth-via-np.yaml:

apiVersion: v1
kind: Service
metadata:
  name: oauth-via-np
  namespace: default
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 30652
    targetPort: 30652
---
kind: Endpoints
apiVersion: v1
metadata:
  name: oauth-via-np
  namespace: default
subsets:
  - addresses:
      - ip: 166.9.20.80
      - ip: 166.9.22.43
      - ip: 166.9.24.35
    ports:
      - port: 30652

Apply the yaml with:

oc apply -f oauth-via-np.yaml

Find the NodePort that is assigned to this service:

$ oc get service oauth-via-np
NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
oauth-via-np   NodePort   172.21.93.246   <none>        30652:31252/TCP   48m

In this example, the NodePort is 31252. Now, the VPC load balancer that was created when the OpenShift API was exposed needs to be updated with an additional listener for the OAuth connection, plus a new backend pool for this listener that targets the NodePort on at least two workers.
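If you want to capture that NodePort in a shell variable for the load balancer commands that follow, a minimal sketch (the NODE_PORT variable name is illustrative):

NODE_PORT=$(oc get service oauth-via-np -o jsonpath='{.spec.ports[0].nodePort}')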

Get the details of the load balancer that is running in the VPC for the OpenShift API endpoint. Determine the hostname from the service, then use it to filter the output of the ibmcloud is load-balancers command.

$ LB_HOST=$(oc get svc oc-api-via-nlb --output=jsonpath="{.status.loadBalancer.ingress[0].hostname}")
$ LOAD_BALANCER_ID=$(ibmcloud is load-balancers  --output JSON | jq -r --arg LB_HOST "$LB_HOST" '.[] | select(.hostname==$LB_HOST) | .id')

Create a new load balancer pool. For consistency with the pool already defined in the load balancer, name this pool after the protocol, listening port (30652), and the service NodePort (31252).

$ ibmcloud is load-balancer-pool-create tcp-30652-31252 $LOAD_BALANCER_ID round_robin tcp 10 2 2 tcp
Creating pool tcp-30652-31252 of load balancer r014-ab8dcdef-de64-4547-b5b2-87af7b4faf5e under account ABCD as user myid@ibm.com...
                              
ID                         r014-55d8b2e5-bb7a-4734-a2da-8a409b4c7f7a   
Name                       tcp-30652-31252   
Protocol                   tcp   
Algorithm                  round_robin   
...

For later use, assign the pool ID to an environment variable: LB_OAUTH_POOL=r014-55d8b2e5-bb7a-4734-a2da-8a409b4c7f7a
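Instead of copying the ID by hand, you could capture it with jq; a sketch, assuming the pool name tcp-30652-31252 from the previous step:

LB_OAUTH_POOL=$(ibmcloud is load-balancer-pools $LOAD_BALANCER_ID --output JSON | jq -r '.[] | select(.name=="tcp-30652-31252") | .id')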

Add two backend members using the VPC IP addresses of cluster worker nodes and the NodePort. (You can obtain the IP addresses of the workers with the ibmcloud oc workers --cluster my-private-cluster command.)

ibmcloud is load-balancer-pool-member-create $LOAD_BALANCER_ID $LB_OAUTH_POOL 31252 172.26.0.4
ibmcloud is load-balancer-pool-member-create $LOAD_BALANCER_ID $LB_OAUTH_POOL 31252 172.26.0.5

If an error is shown on the second command, wait about thirty seconds and re-issue it.

Add a listener to the load balancer, specifying the newly created backend pool as its default pool.

ibmcloud is load-balancer-listener-create $LOAD_BALANCER_ID --port 30652 --protocol tcp --default-pool $LB_OAUTH_POOL

Finally, verify that the listener and backend are ready. Repeat the command until the Health column shows "ok" for both members.

$ ibmcloud is load-balancer-pool-members $LOAD_BALANCER_ID $LB_OAUTH_POOL
Listing members of load balancer pool r014-55d8b2e5-bb7a-4734-a2da-8a409b4c7f7a under account ABCD as user myid@ibm.com...
ID                                          Port    Target       Weight   Health   Created                         Provision status   
r014-6dc37efc-98b7-4ed2-ac9a-d4688471599f   31252   172.26.0.5   50       ok       2020-10-15T06:27:06.261-07:00   active   
r014-8b1a09ae-a8c6-4859-91c6-67db27f28b1f   31252   172.26.0.4   50       ok       2020-10-15T06:27:36.288-07:00   active   

This completes the configuration in the VPC.

Next, switch to a workstation which has VPN connectivity to the VPC where the load balancer resides.

Either at the DNS zone level for the enterprise where the VPN is being created, or on specific workstations, configure the private master URL hostname to map to an IP address of the load balancer. In this example, c104-e.private.us-east.containers.cloud.ibm.com needs to be mapped to the load balancer. For example, add to the /etc/hosts file on Linux or macOS a line with the IP address followed by the hostname:


172.26.0.11 c104-e.private.us-east.containers.cloud.ibm.com
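As a quick check that the mapping and the new load balancer listeners work end to end, you can repeat the earlier well-known query from the workstation; it should return the same OAuth metadata, now flowing through the load balancer:

curl -sk -XGET -H "X-Csrf-Token: 1" 'https://c104-e.private.us-east.containers.cloud.ibm.com:32359/.well-known/oauth-authorization-server'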

From the workstation, log in with the saved API key:

$ oc login -u apikey -p $IBM_API_KEY --server=https://c104-e.private.us-east.containers.cloud.ibm.com:32359
Login successful.

You have access to 61 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".
Welcome! See 'oc help' to get started.

From a web browser, it will also be possible to navigate to the IBM Cloud console (presuming that the enterprise network has general access to cloud.ibm.com) and, from the OpenShift Dashboard page for the cluster, open the OpenShift web console. The console itself runs on the private ingress endpoint, but the console's access to the OpenShift API, as well as the OAuth flow for user authentication, will go through the L4 load balancer running in the VPC.
