@tsaarni
Last active April 20, 2023 14:03

Step-by-step development tutorial: making a code change to Contour and seeing live results

This tutorial is a step-by-step guide to making a small code change to Contour. It shows how to run Contour locally on your laptop and have it control Envoy(s) running in a Kind cluster. It allows for a very fast feedback cycle and easy debugging.

Preparation

Create a Kind cluster, for example by running:

make setup-kind-cluster
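
To double-check that the cluster was created, Kind can list the clusters it manages (the exact cluster name depends on the Makefile target):

kind get clusters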

Deploy the latest release the usual way:

kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
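
Before continuing, it is worth confirming that the components have started. In particular, the contour-certgen job should have completed, since the secret it generates is fetched in a later step:

kubectl -n projectcontour get pods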

Since the goal is to run Contour locally on the host, we can scale the Contour deployment inside the cluster down to 0 replicas.

kubectl -n projectcontour scale deployment --replicas=0 contour
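
To confirm that the in-cluster Contour is no longer running, the deployment should now report 0 ready replicas:

kubectl -n projectcontour get deployment contour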

Envoy looks up a host called contour, which normally resolves to the Contour pod IP address. We would like Envoy to use the host IP address instead. Here is one way to achieve that: create a Service without the selector that would normally associate it with the Contour pod, and then manually create an Endpoints object that points to the Kind network gateway IP address.

cat <<EOF | sed "s/REPLACE_ADDRESS_HERE/$(docker network inspect kind | jq -r '.[0].IPAM.Config[0].Gateway')/" | kubectl apply -f -
kind: Service
apiVersion: v1
metadata:
  name: contour
  namespace: projectcontour
spec:
  type: ClusterIP
  ports:
  - port: 8001
    targetPort: 8001
---
kind: Endpoints
apiVersion: v1
metadata:
  name: contour
  namespace: projectcontour
subsets:
 - addresses:
     - ip: REPLACE_ADDRESS_HERE
   ports:
     - port: 8001
EOF
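
To verify the workaround, check that the manually created Service and Endpoints exist and that the Endpoints carries the Kind network gateway address printed by the docker network inspect expression above:

kubectl -n projectcontour get service contour
kubectl -n projectcontour get endpoints contour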

Envoy will now attempt to reach Contour running on the host.

Communication between Contour and Envoy uses TLS: we need a CA certificate to validate Envoy's client certificate, plus a server certificate and private key for Contour's TLS server. These were created earlier by the contour-certgen job, so we can fetch them from the cluster.

kubectl -n projectcontour get secret contourcert -o jsonpath='{..ca\.crt}' | base64 -d > ca.crt
kubectl -n projectcontour get secret contourcert -o jsonpath='{..tls\.crt}' | base64 -d > tls.crt
kubectl -n projectcontour get secret contourcert -o jsonpath='{..tls\.key}' | base64 -d > tls.key
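
As a sanity check, the fetched server certificate can be inspected with openssl (any reasonably recent OpenSSL supports these flags):

openssl x509 -in tls.crt -noout -subject -dates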

Running Contour

Contour can be run directly with go run.

go run github.com/projectcontour/contour/cmd/contour serve --xds-address=0.0.0.0 --xds-port=8001 --envoy-service-http-port=8080 --envoy-service-https-port=8443 --contour-cafile=ca.crt --contour-cert-file=tls.crt --contour-key-file=tls.key --debug

Alternatively, Contour can also be launched in a debugger, for example from VS Code.
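
As a command-line alternative to VS Code, Contour can also be started under the Delve debugger. This sketch assumes dlv is installed and that you are in the Contour repository root, and it reuses the same flags as above:

dlv debug ./cmd/contour -- serve --xds-address=0.0.0.0 --xds-port=8001 --envoy-service-http-port=8080 --envoy-service-https-port=8443 --contour-cafile=ca.crt --contour-cert-file=tls.crt --contour-key-file=tls.key --debug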

Shortly, a few debug log lines appear saying "handling v3 xDS resource request". This means that Envoy has connected to Contour.
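
If no such log lines appear, check that the Envoy pods are up. Assuming the quickstart's app: envoy label, they can be listed with:

kubectl -n projectcontour get pods -l app=envoy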

Deploy echoserver and configure an HTTPProxy that exposes the service externally:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: echoserver
  template:
    metadata:
      labels:
        app.kubernetes.io/name: echoserver
    spec:
      containers:
      - name: echoserver
        image: gcr.io/k8s-staging-ingressconformance/echoserver:v20221109-7ee2f3e
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http-api
          containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
  - name: http
    port: 80
    targetPort: http-api
  selector:
    app.kubernetes.io/name: echoserver
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: echoserver
spec:
  virtualhost:
    fqdn: local.projectcontour.io
  routes:
    - services:
        - name: echoserver
          port: 80
EOF

Echoserver is a simple application that can be used to test Contour. It accepts any request and returns a simple JSON response. To learn more about its features, see its source code.

Check that the HTTPProxy status has become valid:

kubectl get httpproxy echoserver
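
For a closer look, the underlying status fields can be printed directly. This assumes the currentStatus and description fields used by current Contour releases:

kubectl get httpproxy echoserver -o jsonpath='{.status.currentStatus}{" "}{.status.description}{"\n"}'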

Test that the echoserver can be reached:

curl http://local.projectcontour.io:9080
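
Since echoserver returns a JSON body, the response can also be pretty-printed with jq (already used earlier), using the same port as above:

curl -s http://local.projectcontour.io:9080 | jq .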

Change the code

Let's make a small modification to the code that changes the string Envoy sends back as the HTTP server header in all responses. But first, make an HTTP request once again with curl -v to see the response headers: Envoy responds with the header server: envoy.
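
For example, the server header can be picked out of the verbose output like this (same port as above; curl -v prints response headers to stderr):

curl -sv -o /dev/null http://local.projectcontour.io:9080 2>&1 | grep -i '^< server'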

Change the file internal/envoy/v3/listener.go in the following way:

--- a/internal/envoy/v3/listener.go
+++ b/internal/envoy/v3/listener.go
@@ -485,6 +485,7 @@ func (b *httpConnectionManagerBuilder) Get() *envoy_listener_v3.Filter {
                StreamIdleTimeout:   envoy.Timeout(b.streamIdleTimeout),
                DrainTimeout:        envoy.Timeout(b.connectionShutdownGracePeriod),
                DelayedCloseTimeout: envoy.Timeout(b.delayedCloseTimeout),
+               ServerName:          "foobar",
        }

        // Max connection duration is infinite/disabled by default in Envoy, so if the timeout setting

Interrupt Contour and go run it again. Contour will send a new configuration to Envoy, and when running curl -v again, the server header is now foobar. For a description of the ServerName configuration field, see the Envoy documentation.
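
Running the same header check as before should now show the new value:

# expected output: < server: foobar
curl -sv -o /dev/null http://local.projectcontour.io:9080 2>&1 | grep -i '^< server'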

This concludes the tutorial. Happy hacking!
