Setting up private endpoints and dashboard for VPN to VPC - option 2

Begin with a Red Hat OpenShift on IBM Cloud (ROKS) cluster with private endpoints only (public endpoints disabled). Private endpoints resolve in DNS to IP addresses provided by Private Service Endpoints, which typically begin with a first octet of 166. Although the Private Service Endpoints are routable through the implicit internal router that VPC hosts can reach, these addresses are not routed over VPN connections to a VPC. Therefore, in order to manage this cluster from outside the VPC, an additional load-balanced service will be added for Kubernetes API access. For more information, see Accessing VPC clusters through the private service endpoint in the IBM documentation.

The most straightforward way to initially manage a VPC-based ROKS cluster is by adding a Linux VM to one of the subnets where the workers reside (install the ibmcloud and oc CLIs on this VM, along with the container-service and infrastructure-service CLI plugins). These instructions are written to be run from that VM. To begin, ensure that the RBAC role and permissions of your IAM user are applied to the OpenShift cluster. This can be accomplished by using the web UI to launch a session to the OpenShift console, or by running the ibmcloud oc cluster config command as the user or service ID that will interact with the cluster API endpoint.
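
If the ibmcloud CLI and plugins are not already installed on the VM, a minimal setup sketch looks like the following (the installer URL and plugin names are from the IBM Cloud CLI documentation; the oc binary can be downloaded from the cluster's web console under Command Line Tools):

# install the IBM Cloud CLI
curl -fsSL https://clis.cloud.ibm.com/install/linux | sh

# install the CLI plugins used in these instructions
ibmcloud plugin install container-service
ibmcloud plugin install vpc-infrastructure   # also known as infrastructure-service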

ibmcloud iam api-key-create <name>
ibmcloud login --apikey <API_key>
ibmcloud oc cluster config -c <cluster_name_or_ID>

Save a copy of the API key that was provided; it cannot be retrieved again. In this example, the environment variable IBM_API_KEY has been set to that value to minimize typing.
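
For example, on the VM:

export IBM_API_KEY=<API_key>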

Begin by getting the cluster masterURL hostname and port:

$ ibmcloud oc cluster get --cluster my-private-cluster --output json | jq -r '.masterURL'
https://c104-e.private.us-east.containers.cloud.ibm.com:32359

In this example, the port is 32359, the private service endpoint hostname is c104.private.us-east.containers.cloud.ibm.com, and the master hostname is c104-e.private.us-east.containers.cloud.ibm.com.
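
If you prefer to capture these values in shell variables rather than copying them by hand, a sketch like the following works (the variable names are arbitrary):

MASTER_URL=$(ibmcloud oc cluster get --cluster my-private-cluster --output json | jq -r '.masterURL')
# strip the scheme and split host:port, e.g. c104-e.private.us-east.containers.cloud.ibm.com and 32359
MASTER_HOST=$(echo "$MASTER_URL" | sed -e 's|^https://||' -e 's|:.*$||')
API_PORT=$(echo "$MASTER_URL" | sed -e 's|.*:||')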

Customize the following template and save it as oc-api-via-np.yaml, replacing <private_service_endpoint_port> with the port from the previous step.

apiVersion: v1
kind: Service
metadata:
  name: oc-api-via-np
  namespace: default
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: <private_service_endpoint_port>
    targetPort: <private_service_endpoint_port>
---
# endpoints manually set to the in-cluster address and port of the cluster API server
kind: Endpoints
apiVersion: v1
metadata:
  name: oc-api-via-np
  namespace: default
subsets:
  - addresses:
      - ip: 172.20.0.1
    ports:
      - port: 2040
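
If the port was captured in the API_PORT variable from the earlier step, the substitution can also be scripted rather than edited by hand:

sed -i "s/<private_service_endpoint_port>/${API_PORT}/g" oc-api-via-np.yaml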

Apply the yaml to the cluster.

oc apply -f oc-api-via-np.yaml

After applying, obtain the NodePort exposed for the service.

$ oc get svc oc-api-via-np
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
oc-api-via-np   NodePort   172.21.193.127   <none>        32359:30160/TCP   54m
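
The NodePort value can also be read directly with a jsonpath query:

$ oc get svc oc-api-via-np -o jsonpath='{.spec.ports[0].nodePort}'
30160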

In this example, the NodePort is 30160. You will also need to know the endpoint hostname and port that OpenShift uses for OAuth token generation. Query the master URL to find this endpoint:

$ curl -sk -XGET  -H "X-Csrf-Token: 1" 'https://c104-e.private.us-east.containers.cloud.ibm.com:32359/.well-known/oauth-authorization-server' | grep token_endpoint
"token_endpoint": "https://c104-e.private.us-east.containers.cloud.ibm.com:30652/oauth/token"

The hostname is the same, but the OAuth token endpoint is running on a different port.
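
Equivalently, the token endpoint can be extracted with jq:

$ curl -sk -XGET -H "X-Csrf-Token: 1" 'https://c104-e.private.us-east.containers.cloud.ibm.com:32359/.well-known/oauth-authorization-server' | jq -r '.token_endpoint'
https://c104-e.private.us-east.containers.cloud.ibm.com:30652/oauth/token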

Next, define an L4 load balancer that will run on an IP address in the VPC and forward traffic targeting the cluster API port (32359 in this example) to the cluster workers at the NodePort created by the oc-api-via-np service. The L4 load balancer should also forward traffic for the token endpoint port (30652 in this example) on to the correct server. For example, with nginx you would add a section like the following to the configuration:

stream {
    # forward OAuth token traffic to the private master endpoint
    server {
        listen  30652;
        proxy_pass c104-e.private.us-east.containers.cloud.ibm.com:30652;
    }

    # forward cluster API traffic to the NodePort on the workers
    server {
        listen  32359;
        proxy_pass oc_api_nodeport;
    }

    # private IP addresses of the cluster worker nodes
    upstream oc_api_nodeport {
        server 172.26.0.4:30160;
        server 172.26.0.5:30160;
    }
}
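
The worker private IP addresses used in the upstream block can be listed with either of the following (the cluster name matches the earlier example):

# the Primary IP column shows the private address of each worker
ibmcloud oc worker ls --cluster my-private-cluster
# or, using the cluster API, see the INTERNAL-IP column
oc get nodes -o wide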

If you have added this to a custom nginx.conf file located in a directory called nginx, then start up the L4 load balancer with podman using:

sudo podman run --name my-tugboat -v $(pwd)/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -p 30652:30652 -p 32359:32359 -d nginx
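
To confirm the container is running and the ports are published:

sudo podman ps --filter name=my-tugboat
sudo podman port my-tugboat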

This completes the configuration in the VPC.

Next, switch to a workstation which has VPN connectivity to the VPC where the L4 load balancer resides.

Either at the DNS zone level for the enterprise where the VPN is being created, or on specific workstations, configure hostnames for the private service endpoint and the private master that map to the IP address of the L4 load balancer. In this example, c104.private.us-east.containers.cloud.ibm.com and c104-e.private.us-east.containers.cloud.ibm.com need to be mapped to the L4 load balancer. For example, add a line to the /etc/hosts file on Linux or macOS with the IP address followed by the hostnames:


172.26.0.8 c104.private.us-east.containers.cloud.ibm.com c104-e.private.us-east.containers.cloud.ibm.com
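
A quick way to confirm that the mapping is in effect on the workstation:

getent hosts c104-e.private.us-east.containers.cloud.ibm.com                  # Linux
dscacheutil -q host -a name c104-e.private.us-east.containers.cloud.ibm.com   # macOS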

From the workstation (using the API key saved earlier):

$ oc login -u apikey -p $IBM_API_KEY --server=https://c104-e.private.us-east.containers.cloud.ibm.com:32359
Login successful.

You have access to 61 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".
Welcome! See 'oc help' to get started.

From a web browser it will also be possible to navigate to the IBM Cloud console (presuming that the enterprise network has general access to cloud.ibm.com) and, from the OpenShift Dashboard page for the cluster, access the OpenShift web console. The console itself runs on the private ingress endpoint, but the console's access to the OpenShift API, as well as the OAuth flow used to authenticate users, will flow through the L4 load balancer running in the container.
