@sevein
Last active February 15, 2018 18:20
Split contour and envoy into separate pods

🚧 Don't use this deployment, it is not safe! 🚧


This is an example created for projectcontour/contour#238.

It uses sevein/contour:test because projectcontour/contour#238 had not yet been merged when this example was built.

It includes:

  • config.yaml:

    • Cluster type STRICT_DNS, address contour.heptio-contour.svc.cluster.local
  • contour.yaml:

    • Envoy and Contour running separately. TLS secret data is shared inline.
dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [contour]
      grpc_services:
      - envoy_grpc:
          cluster_name: contour
  cds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [contour]
      grpc_services:
      - envoy_grpc:
          cluster_name: contour
static_resources:
  clusters:
  - name: contour
    connect_timeout: { seconds: 5 }
    type: STRICT_DNS
    hosts:
    - socket_address:
        address: contour.heptio-contour.svc.cluster.local
        port_value: 8001
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
admin:
  access_log_path: /dev/null
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 9001
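The init container in the manifests below runs `contour bootstrap` to generate this file. As a rough illustration of what that step produces, here is a sketch in Python (not Contour's actual Go implementation; `make_bootstrap` is a hypothetical helper): the xDS address and port are simply templated into the structure shown above. JSON is used for output since any JSON document is also valid YAML.

```python
import json

def make_bootstrap(xds_address, xds_port):
    """Build a minimal Envoy v2 bootstrap dict pointing at a Contour xDS
    server. Hypothetical helper mirroring the output of `contour bootstrap`."""
    api_config_source = {
        "api_type": "GRPC",
        "cluster_names": ["contour"],
        "grpc_services": [{"envoy_grpc": {"cluster_name": "contour"}}],
    }
    return {
        "dynamic_resources": {
            # Both listeners (LDS) and clusters (CDS) come from the same
            # gRPC management server, the static "contour" cluster below.
            "lds_config": {"api_config_source": api_config_source},
            "cds_config": {"api_config_source": api_config_source},
        },
        "static_resources": {
            "clusters": [{
                "name": "contour",
                "connect_timeout": {"seconds": 5},
                # STRICT_DNS makes Envoy keep re-resolving the service name,
                # so it follows the contour pods behind the Service.
                "type": "STRICT_DNS",
                "hosts": [{"socket_address": {
                    "address": xds_address, "port_value": xds_port}}],
                "lb_policy": "ROUND_ROBIN",
                "http2_protocol_options": {},  # gRPC requires HTTP/2
            }],
        },
        "admin": {
            "access_log_path": "/dev/null",
            "address": {"socket_address": {"address": "127.0.0.1",
                                           "port_value": 9001}},
        },
    }

cfg = make_bootstrap("contour.heptio-contour.svc.cluster.local", 8001)
print(json.dumps(cfg, indent=2))
```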
#
# Deployment example where envoy and contour are split into separate pods.
#
# DaemonSet: envoy-front-proxy
# - Container: envoy
# - InitContainer: contour bootstrap
#   - Uses strict DNS: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/service_discovery.html#arch-overview-service-discovery-types
#   - Command: `contour bootstrap --xds-address contour.heptio-contour.svc.cluster.local --xds-port 8001`
#
# DaemonSet: contour
# - Container: `contour serve --incluster --xds-address 0.0.0.0 --xds-port 8001`
#
---
apiVersion: v1
kind: Namespace
metadata:
  name: heptio-contour
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: contour
  namespace: heptio-contour
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: envoy-front-proxy
  name: envoy-front-proxy
  namespace: heptio-contour
spec:
  selector:
    matchLabels:
      app: envoy-front-proxy
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: envoy-front-proxy
    spec:
      containers:
      - image: docker.io/envoyproxy/envoy-alpine:latest
        name: envoy
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 8443
          name: https
        command: ["envoy"]
        args: ["-c", "/config/contour.yaml", "--service-cluster", "cluster0", "--service-node", "node0", "-l", "info", "--v2-config-only"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      initContainers:
      - image: sevein/contour:test
        imagePullPolicy: Always
        name: envoy-initconfig
        command: ["contour"]
        args: ["bootstrap", "--xds-address", "contour.heptio-contour.svc.cluster.local", "--xds-port", "8001", "/config/contour.yaml"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      volumes:
      - name: contour-config
        emptyDir: {}
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 30
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: contour
  name: contour
  namespace: heptio-contour
spec:
  selector:
    matchLabels:
      app: contour
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: contour
    spec:
      containers:
      - image: sevein/contour:test
        imagePullPolicy: Always
        name: contour
        command: ["contour"]
        args: ["serve", "--incluster", "--xds-address", "0.0.0.0", "--xds-port", "8001"]
        ports:
        - containerPort: 8001
          name: http
      dnsPolicy: ClusterFirst
      serviceAccountName: contour
      terminationGracePeriodSeconds: 30
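Assuming the full manifest above is saved as `contour-split.yaml` (a hypothetical filename), the two DaemonSets can be applied and checked roughly like this. The names match the manifests above; `kubectl exec daemonset/...` requires a reasonably recent kubectl, otherwise pick a pod by name.

```shell
# Apply the split deployment and wait for both DaemonSets to roll out.
kubectl apply -f contour-split.yaml
kubectl -n heptio-contour rollout status daemonset/contour
kubectl -n heptio-contour rollout status daemonset/envoy-front-proxy

# The init container should have written the Envoy bootstrap config
# into the shared emptyDir volume before the envoy container started.
kubectl -n heptio-contour exec daemonset/envoy-front-proxy -c envoy -- \
    cat /config/contour.yaml
```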
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: contour
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: contour
subjects:
- kind: ServiceAccount
  name: contour
  namespace: heptio-contour
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: contour
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
apiVersion: v1
kind: Service
metadata:
  name: envoy-front-proxy
  namespace: heptio-contour
  annotations:
    # This annotation puts the AWS ELB into "TCP" mode so that it does not
    # do HTTP negotiation for HTTPS connections at the ELB edge.
    # The downside of this is that the remote IP address of all connections
    # will appear to be the internal address of the ELB. See
    # docs/proxy-proto.md for information about enabling the PROXY protocol
    # on the ELB to recover the original remote IP address.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 8080
  - port: 443
    name: https
    protocol: TCP
    targetPort: 8443
  selector:
    app: envoy-front-proxy
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: heptio-contour
spec:
  ports:
  - port: 8001
    name: http
    protocol: TCP
    targetPort: 8001
  selector:
    app: contour
  type: LoadBalancer
@davecheney commented:
There is one gotcha with this config: anyone who knows the name of the contour API server can connect to it and get the TLS secret for any deployed cluster.
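One partial mitigation is to restrict which pods may reach the contour xDS port with a NetworkPolicy. This is only a sketch: it assumes a CNI plugin that actually enforces NetworkPolicy, and it adds no authentication; note also that the contour Service above is `type: LoadBalancer`, so the port may be reachable from outside the cluster regardless.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: contour-xds
  namespace: heptio-contour
spec:
  podSelector:
    matchLabels:
      app: contour
  ingress:
  # Only the envoy-front-proxy pods may connect to the xDS port.
  - from:
    - podSelector:
        matchLabels:
          app: envoy-front-proxy
    ports:
    - protocol: TCP
      port: 8001
```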