@timroster
Last active February 8, 2021 01:45
Configuring merged ingress resources with OpenShift haproxy-based native ingress controller

Using a single host with multiple ingresses on OpenShift - Part 2 - native ingress support

Consider the following legacy application migration scenario involving two macro components. A typical pre-containerization deployment may have hosted these on VMs, with a reverse proxy handling requests to a single logical host endpoint. Both macro components present REST interfaces and expect to see the simplest request URI possible. One of the components can be considered the primary monolithic application, and the other is like a plug-in or supporting module.

The reverse proxy design sent almost all of the traffic, e.g. requests to /, to the primary application. Only requests matching /sub-module/(.*) went to the supporting module; for these, the reverse proxy rewrote the path in the URI to contain only the matched subpath and added a request header so that fully qualified paths could be built into responses. In this example, for an incoming request like /sub-module/foo/bar the reverse proxy will send /foo/bar as the rewritten path to the component and include a request header like X-Proxy-BaseUri: /sub-module.

When migrating this application to containers and Kubernetes, it is desirable to keep these macro components in their own deployments so that the number of replicas can be scaled independently. Using Helm charts with templates for each deployment, and for its associated ingress path on the host, allows different annotations per component, including the addition of any custom request headers.

Implementation with OpenShift support for ingress

Red Hat OpenShift includes a network abstraction called a Route, which has some very simple usage patterns, including automatic hostname definition in combination with a wildcard domain for all hosted applications; this can simplify the use of TLS through wildcard certificates associated with the domain. OpenShift 4.6, with Kubernetes 1.19, also supports Ingress resources using the API version networking.k8s.io/v1, as well as backward support for the deprecated API versions.

It is possible to create multiple Ingress resources with different paths for the same host in the same namespace on OpenShift. It's also possible to change the default configuration to allow these Ingress resources to be created in multiple namespaces, to support teams that independently deploy microservice components for an application in different namespaces.
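
As a sketch (not part of the original walkthrough), allowing routes that share a host across namespaces is done through the route admission policy on the default IngressController; consult the OpenShift documentation for the security implications before enabling it:

oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}'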

The OpenShift ingress controller (haproxy-based) supports sticky session affinity out of the box and has a specific set of supported annotations, including one that allows cookie-based session affinity to be turned off. Significantly missing for this scenario is the ability to provide configuration snippets, as can be done with both the community Kubernetes nginx-based ingress controller and the NGINX Inc. ingress controller. To overcome this, a sidecar based on nginx can be deployed to add the necessary header to all requests.

The example deployments will follow the general pattern used by NGINX examples around a cafe metaphor. These examples will deploy a set of pods for the main application (the cafe) and a set of pods for the sub-module (a coffee bar) at the cafe. The nginx sidecar will be deployed as a second container in the coffee bar pods.

Begin with a Red Hat OpenShift 4.6 instance, either with CRC (CodeReady Containers) or a cloud or on-premises cluster. These examples will use a default host corresponding to the CRC built-in domain: my-ocp-cafe.apps-crc.testing. If using a non-CRC cluster, adjust the host value accordingly.
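
If a project for these examples does not already exist, create one first; the project name cafe-demo below is only an illustration, not from the original gist:

oc new-project cafe-demo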

In a new project, create a config map from the nginx.conf file (shown at the end of this gist):

oc create configmap nginx-conf --from-file=nginx.conf
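
Optionally, confirm that the config map contains the expected nginx configuration:

oc describe configmap nginx-conf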

After the configmap is added, create the deployments and services for the two components.
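
Assuming the Deployment and Service manifests shown later in this gist have been saved to a local file (cafe.yaml is only an illustrative name), they can be applied with:

oc apply -f cafe.yaml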

Verify that all pods come up with oc get pods. You can also verify the defined services with oc get svc. Now it is time to set up the ingress for the host my-ocp-cafe.apps-crc.testing.
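
For example, to list just the pods for each component using the app labels from the manifests below:

oc get pods -l app=cafe
oc get pods -l app=coffee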

Create the ingress for the main cafe and then the ingress for the coffee bar.
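
As with the deployments, save the two Ingress manifests shown later in this gist to local files and apply them; the file names here are only placeholders:

oc apply -f cafe-main-ingress.yaml
oc apply -f cafe-coffeebar-ingress.yaml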

Even though Ingress resources are being added, OpenShift will also create and display Routes (even in the web console) associated with the broader and the more specific path on the host:

$ oc get route
NAME                   HOST/PORT                      PATH       SERVICES     PORT   TERMINATION   WILDCARD
cafe-coffeebar-wjs9x   my-ocp-cafe.apps-crc.testing   /coffee/   coffee-svc   http                 None
cafe-main-xb94q        my-ocp-cafe.apps-crc.testing   /          cafe-svc     http                 None

When these are defined, inspecting the server configuration file at /var/lib/haproxy/conf/haproxy.config on the default router (ingress controller) pod will show entries for both paths as backends to the respective sets of pod endpoints.
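
One way to check this (a sketch; the router deployment name and grep pattern may vary by cluster) is to open a remote shell on the default router deployment and search the generated configuration for the host:

oc -n openshift-ingress rsh deployment/router-default grep my-ocp-cafe /var/lib/haproxy/conf/haproxy.config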

Expected output for these examples:

Request showing the session affinity cookie being set on the first response:

curl -I http://my-ocp-cafe.apps-crc.testing/main/cafe
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 08 Feb 2021 01:41:31 GMT
Content-Type: text/plain
Content-Length: 162
Expires: Mon, 08 Feb 2021 01:41:30 GMT
Cache-Control: no-cache
Set-Cookie: mycookie=76fc125486b0801b293ccfffa7f8dec1; path=/; HttpOnly
curl http://my-ocp-cafe.apps-crc.testing/main/cafe
Server address: 10.116.0.84:8080
Server name: cafe-684d654676-nfxnz
Date: 08/Feb/2021:01:40:42 +0000
URI: /main/cafe
Request ID: 844fc8f3d806e07070a22aa9ddff9f87
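
To exercise the cookie-based session affinity, the mycookie value returned above can be replayed on subsequent requests, which should keep them pinned to the same cafe pod (a sketch using the cookie value from this example output):

curl -b "mycookie=76fc125486b0801b293ccfffa7f8dec1" http://my-ocp-cafe.apps-crc.testing/main/cafe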
curl http://my-ocp-cafe.apps-crc.testing/coffee/mocha
<html><body>
<i>Request Host</i>:  my-ocp-cafe.apps-crc.testing </br>
<i>Request URL</i>:  /mocha </br>
<i>Context path</i>:  mocha </br>
<i>Client IP (RemoteAddr)</i>:  127.0.0.1:34780 </br>
<i>Request TLS</i>:  <nil> </br>
</br><b>Request Headers:</b></br>
<ul>
<li><i> Accept </i>: [*/*] </li>
<li><i> Connection </i>: [close] </li>
<li><i> Forwarded </i>: [for=192.168.130.1;host=my-ocp-cafe.apps-crc.testing;proto=http] </li>
<li><i> User-Agent </i>: [curl/7.58.0] </li>
<li><i> X-Forwarded-For </i>: [192.168.130.1] </li>
<li><i> X-Forwarded-Host </i>: [my-ocp-cafe.apps-crc.testing] </li>
<li><i> X-Forwarded-Port </i>: [80] </li>
<li><i> X-Forwarded-Proto </i>: [http] </li>
<li><i> X-Proxy-Baseuri </i>: [/coffee] </li>
</ul>
</br><b>Form parameters:</b></br>
<ul>
</ul>
</body></html>
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-main
  annotations:
    router.openshift.io/cookie_name: mycookie
    haproxy.router.openshift.io/timeout: 86400s
    haproxy.router.openshift.io/cookie-same-site: Strict
spec:
  rules:
  - host: my-ocp-cafe.apps-crc.testing
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: cafe-svc
            port:
              number: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: keeperlink/request-info-docker:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "200m"
            memory: "64Mi"
      - name: nginx
        image: nginx:1.18
        ports:
        - containerPort: 8000
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf
          name: nginx-conf
          subPath: nginx.conf
          readOnly: true
        resources:
          requests:
            cpu: "200m"
            memory: "64Mi"
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  ports:
  - port: 80
    targetPort: 8000
    protocol: TCP
    name: http
  selector:
    app: coffee
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cafe
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cafe
  template:
    metadata:
      labels:
        app: cafe
    spec:
      containers:
      - name: cafe
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "200m"
            memory: "64Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: cafe-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: cafe
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-coffeebar
  annotations:
    haproxy.router.openshift.io/rewrite-target: /
    haproxy.router.openshift.io/timeout: 86400s
    haproxy.router.openshift.io/cookie-same-site: Strict
spec:
  rules:
  - host: my-ocp-cafe.apps-crc.testing
    http:
      paths:
      - path: /coffee/
        pathType: ImplementationSpecific
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
#user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid       /tmp/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    client_body_temp_path /tmp/client_temp;
    proxy_temp_path       /tmp/proxy_temp_path;
    fastcgi_temp_path     /tmp/fastcgi_temp;
    uwsgi_temp_path       /tmp/uwsgi_temp;
    scgi_temp_path        /tmp/scgi_temp;

    server {
        listen 8000 default_server;
        listen [::]:8000 default_server;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Proxy-BaseUri "/coffee";
            proxy_pass http://backend;
        }
    }

    upstream backend {
        zone backend 256k;
        server 127.0.0.1:8080;
    }

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;
}