Configuring merged ingress resources with the NGINX Inc. NGINX Operator ingress controller

Using a single host with multiple ingresses on OpenShift - Part 1 - NGINX Operator

Consider the following legacy application migration scenario with two macro components. The typical pre-containerization deployment may have had these hosted on VMs with a reverse proxy handling requests to a single logical host endpoint. Both macro components present REST interfaces and expect to see the simplest request URI possible. One of the components can be considered the primary monolithic application and the other is like a plug-in or supporting module.

The reverse proxy design sent almost all of the traffic, e.g. requests to /, to the primary application. For the supporting module, only requests to /sub-module/(.*) had their URI path rewritten to contain just the matched subpath, and a request header was added so that fully qualified paths could be built into responses. In this example, for an incoming request like /sub-module/foo/bar the reverse proxy sends /foo/bar as the rewritten path to the component and includes a request header like X-Proxy-BaseUri: /sub-module.
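As a rough sketch, that legacy reverse proxy configuration might have looked something like this in NGINX terms (the upstream names, addresses, and listen port are illustrative, not taken from the original deployment):

# Hypothetical legacy reverse proxy configuration (names and addresses are illustrative)
upstream primary-app { server 10.0.0.10:8080; }
upstream sub-module  { server 10.0.0.11:8080; }

server {
    listen 80;
    server_name cafe.example.com;

    # Requests under /sub-module/ are rewritten to the matched subpath
    # and tagged with a header so the module can build absolute URLs.
    location /sub-module/ {
        rewrite ^/sub-module/(.*)$ /$1 break;
        proxy_set_header X-Proxy-BaseUri "/sub-module";
        proxy_pass http://sub-module;
    }

    # Everything else goes to the primary monolithic application.
    location / {
        proxy_pass http://primary-app;
    }
}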

When migrating this application to containers and Kubernetes, it is desirable to keep these macro components in their own deployments so that the number of replicas can be scaled independently. Using helm charts with templates for each deployment, and an associated path for ingress on the host, allows for different annotations on each ingress, including the addition of any custom request headers.

Implementation with NGINX Operator

The NGINX Operator can manage the creation of nginx-ingress controllers from NGINX (a subsidiary of F5). For more information on deployment and configuration, check out the getting started blog.

The example deployments will follow the general pattern used by NGINX examples around a cafe metaphor. These examples will deploy a set of pods for the main application (the cafe) and a set of pods for the sub module (a coffee bar) at the cafe.

Begin with a Red Hat OpenShift 4.6 instance, either CRC or a cloud or on-premises cluster. Follow the steps from the blog to install the NGINX Operator. After the operator is added, the blog provides an example CR to create the ingress controller. The example selects a LoadBalancer service type; with CRC, you can instead select a NodePort and configure an L4 balancer on the CRC host to forward connections to the nginx controller service NodePorts. It is also technically possible to configure an OpenShift route in the .apps-crc.testing domain to take incoming requests and send them to the nginx controller service. Note - this route needs to be created in the same project as the nginx controller deployment, which is the same namespace where the operator is created. This is one of the arguments in favor of using a separate L4 balancer and a host endpoint other than .apps-crc.testing for this example.
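A minimal sketch of such a CR, assuming the NginxIngressController API (k8s.nginx.org/v1alpha1) that the operator provided at the time - the field values and names here are illustrative, so consult the blog or the operator documentation for the exact schema:

apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: my-nginx-ingress-controller
  namespace: nginx-ingress
spec:
  type: deployment
  nginxPlus: false
  image:
    repository: docker.io/nginx/nginx-ingress
    tag: 1.10.0
    pullPolicy: Always
  replicas: 1
  serviceType: NodePort   # or LoadBalancer for cloud clusters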

In this example, all resources specify the generic hostname cafe.example.com, which needs to be changed based on the approach used to bring traffic into the cluster.

In a project - which is easiest if it is the same one as the pre-defined route, or a totally new project if using a LoadBalancer to bring traffic to the ingress controller - create the deployments and services for the two components.
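If the manifests are saved to files, the sequence might look like the following (the project and file names are illustrative; the YAML contents are listed at the end of this gist):

oc new-project cafe                       # or switch to the project that already holds the route
oc apply -f cafe-and-coffee.yaml          # deployments and services for both components
oc apply -f cafe-ingress-master.yaml
oc apply -f cafe-ingress-minion.yaml
oc apply -f coffee-ingress-minion.yaml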

Verify that all pods come up with oc get pods. You can also verify the defined services with oc get svc. Now it is time to set up the ingress for the host cafe.example.com. One "feature" of the NGINX Inc. ingress controller (as opposed to the ingress-nginx controller from the Kubernetes project) is that multiple paths defined in different Ingress resources targeting the same host are considered a host collision.

For this reason, the approach that works here is the master and minion model. Note: the master resource is not allowed to contain any paths. This gist includes files for the cafe ingress master, the cafe ingress minion, and the coffee ingress minion.

Notice how different annotations are being applied to the minion resources. When all of these are defined, inspection of the server configuration file in the /etc/nginx/conf.d path on the NGINX ingress controller pod will show entries for both paths as locations under the cafe.example.com server name. There is no conflict between the / and /coffee locations because the more specific one is used for all request URIs that match the /coffee path.
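For example, assuming the ingress controller runs in the nginx-ingress project (the pod name and the generated file name will differ in your cluster), the merged server block can be inspected with something like:

oc get pods -n nginx-ingress
oc exec -n nginx-ingress <nginx-ingress-pod> -- ls /etc/nginx/conf.d
oc exec -n nginx-ingress <nginx-ingress-pod> -- cat /etc/nginx/conf.d/<project>-cafe-ingress-master.conf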

Expected output - in these examples, I have used the domain cafe.timro.us, which resolves to an L4 balancer that points to a CRC instance.

curl http://cafe.timro.us/main/cafe
Server address: 10.116.1.82:8080
Server name: cafe-849c5f65c9-jct7b
Date: 08/Feb/2021:00:51:33 +0000
URI: /main/cafe
Request ID: fa48c5b35b5e2a9306555e73cbde5f50

curl http://cafe.timro.us/coffee/mocha
<html><body>
<i>Request Host</i>:  cafe.timro.us </br>
<i>Request URL</i>:  /mocha </br>
<i>Context path</i>:  mocha </br>
<i>Client IP (RemoteAddr)</i>:  10.116.1.103:45678 </br>
<i>Request TLS</i>:  <nil> </br>
</br><b>Request Headers:</b></br>
<ul>
<li><i> Accept </i>: [*/*] </li>
<li><i> Connection </i>: [close] </li>
<li><i> User-Agent </i>: [curl/7.58.0] </li>
<li><i> X-Forwarded-For </i>: [10.116.0.1] </li>
<li><i> X-Forwarded-Host </i>: [cafe.timro.us] </li>
<li><i> X-Forwarded-Port </i>: [80] </li>
<li><i> X-Forwarded-Proto </i>: [http] </li>
<li><i> X-Proxy-Baseuri </i>: [/coffee] </li>
<li><i> X-Real-Ip </i>: [10.116.0.1] </li>
</ul>
</br><b>Form parameters:</b></br>
<ul>
</ul>
</body></html>

From the second output, it is clear that the URI is being rewritten and the custom header is being added to the request.

A significant limitation of the "free" NGINX ingress controller provided by the operator is that using cookies for sticky sessions is not supported. This feature requires NGINX Plus, the paid version of the NGINX ingress controller, which also explains why the annotation in the cafe-ingress-minion.yaml file uses the domain prefix nginx.com for the sticky-cookie-services annotation.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress-master
  annotations:
    nginx.org/mergeable-ingress-type: "master"
spec:
  ingressClassName: nginx
  tls: []
  rules:
  - host: cafe.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress-minion
  annotations:
    nginx.org/mergeable-ingress-type: minion
    nginx.com/sticky-cookie-services: "serviceName=cafe-svc mycookie expires=1728000 path=/"
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: cafe-svc
            port:
              number: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: keeperlink/request-info-docker:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "200m"
            memory: "64Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: coffee
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cafe
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cafe
  template:
    metadata:
      labels:
        app: cafe
    spec:
      containers:
      - name: cafe
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "200m"
            memory: "64Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: cafe-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: cafe
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: coffee-ingress-minion
  annotations:
    nginx.org/mergeable-ingress-type: minion
    nginx.org/rewrites: "serviceName=coffee-svc rewrite=/"
    nginx.org/location-snippets: |
      proxy_set_header X-Proxy-BaseUri "/coffee";
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /coffee/
        pathType: ImplementationSpecific
        backend:
          service:
            name: coffee-svc
            port:
              number: 80