AWS Metadata Proxy

Background: https://access.redhat.com/solutions/4498111

The OpenShift template below deploys an HAProxy pod on the host network that forwards requests to the AWS metadata service (169.254.169.254), making it reachable from pods on the cluster network.

Deploy

oc new-project awsproxytest
oc adm policy add-scc-to-user hostnetwork -z awsproxy
oc new-app https://gist.githubusercontent.com/miminar/c58fffdd762e773af83b15bcb1a4cc8c/raw/b1b6bab1359321a7e97f682716dc64773c7428fb/tmpl-awsproxy.yaml
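Once created, you can verify that the proxy is up (a quick sanity check, assuming the project name and labels from the template below):

oc get pods -l app=awsproxy -n awsproxytest
oc get svc awsproxy -n awsproxytest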

NOTE: Ports 8088 and 8043 must be available on the nodes. If one or both are occupied, you can override them like this:

oc new-app HTTP_PORT=20080 HTTPS_PORT=20443 https://gist.githubusercontent.com/miminar/c58fffdd762e773af83b15bcb1a4cc8c/raw/b1b6bab1359321a7e97f682716dc64773c7428fb/tmpl-awsproxy.yaml
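If you are unsure whether the ports are free on a node, one way to check is from a node debug shell (a sketch; substitute a real node name for <node-name>):

oc debug node/<node-name> -- chroot /host ss -tlnp | grep -E ':(8088|8043)'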

NOTE: To make this more robust, you can modify the template to instantiate a DaemonSet instead of a Deployment, making the proxy available on all the compute nodes; a sketch of the change follows.
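A minimal sketch of that change, reusing the pod template from the Deployment in the template below (only the object header changes; a DaemonSet has no replicas field):

- apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: awsproxy
    labels:
      app: awsproxy
  spec:
    selector:
      matchLabels:
        app: awsproxy
    template:
      # ... identical to the pod template of the Deployment below ...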

Test

Deploy another pod on the cluster network with the curl binary and query the endpoints.

oc run fedora --image=fedora:latest /bin/sleep infinity
oc rollout status dc/fedora
# this will not work as expected
oc rsh dc/fedora curl -v http://169.254.169.254/latest/meta-data/iam/security-credentials
# this should work - mind the project name (awsproxytest)
oc rsh dc/fedora curl -v http://awsproxy.awsproxytest.svc.cluster.local/latest/meta-data/iam/security-credentials
oc delete dc/fedora
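Optionally, before the cleanup step, you can fetch the temporary credentials for the returned role the same way (a sketch; <role-name> stands for the value printed by the previous query, e.g. gdir-tqz52-worker-role in the output below):

oc rsh dc/fedora curl -s http://awsproxy.awsproxytest.svc.cluster.local/latest/meta-data/iam/security-credentials/<role-name>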

Example output:

$ oc run fedora --image=fedora:latest /bin/sleep infinity
kubectl run --generator=deploymentconfig/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deploymentconfig.apps.openshift.io/fedora created                                                                     

$ oc rollout status dc/fedora
Waiting for rollout to finish: 0 of 1 updated replicas are available...
Waiting for latest deployment config spec to be observed by the controller loop...
replication controller "fedora-1" successfully rolled out             
$ oc rsh dc/fedora curl -v http://169.254.169.254/latest/meta-data/iam/security-credentials
*   Trying 169.254.169.254:80...
* TCP_NODELAY set
* connect to 169.254.169.254 port 80 failed: Connection refused
* Failed to connect to 169.254.169.254 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 169.254.169.254 port 80: Connection refused
command terminated with exit code 7

$ oc rsh dc/fedora curl -v http://awsproxy.awsproxytest.svc.cluster.local/latest/meta-data/iam/security-credentials
*   Trying 172.30.68.28:80...
* TCP_NODELAY set
* Connected to awsproxy.awsproxytest.svc.cluster.local (172.30.68.28) port 80 (#0)
> GET /latest/meta-data/iam/security-credentials HTTP/1.1
> Host: awsproxy.awsproxytest.svc.cluster.local
> User-Agent: curl/7.66.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Accept-Ranges: bytes
< Content-Length: 22
< Content-Type: text/plain
< Date: Tue, 17 Mar 2020 14:41:03 GMT
< Last-Modified: Tue, 17 Mar 2020 14:05:33 GMT
< Server: EC2ws
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
< 
* Connection #0 to host awsproxy.awsproxytest.svc.cluster.local left intact
gdir-tqz52-worker-role
tmpl-awsproxy.yaml

kind: Template
apiVersion: v1
objects:
- kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: awsproxy
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: awsproxy
    labels:
      app: awsproxy
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: awsproxy
    template:
      metadata:
        labels:
          app: awsproxy
      spec:
        hostNetwork: true
        containers:
        - name: awsproxy
          image: gcr.io/google-containers/haproxy:${HAPROXY_IMAGE_TAG}
          ports:
          - name: awshttpproxy
            containerPort: ${{HTTP_PORT}}
          - name: awshttpsproxy
            containerPort: ${{HTTPS_PORT}}
          readinessProbe:
            tcpSocket:
              port: ${{HTTP_PORT}}
          livenessProbe:
            tcpSocket:
              port: ${{HTTP_PORT}}
          volumeMounts:
          - mountPath: /etc/haproxy
            name: haproxy-config
            readOnly: true
        serviceAccountName: awsproxy
        volumes:
        - configMap:
            name: awsproxy-haproxy-config
          name: haproxy-config
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: awsproxy-haproxy-config
  data:
    haproxy.cfg: |-
      # Default haproxy config file.
      global
          daemon
          stats socket /tmp/haproxy
          server-state-file global
          server-state-base /var/state/haproxy/
      defaults
          mode http
          option dontlognull
          option dontlog-normal
          timeout connect 5000
          timeout client 50000
          timeout server 50000
      frontend httpfrontend
          # Frontend bound on all network interfaces on the HTTP port
          bind *:${HTTP_PORT}
          mode http
          use_backend awshttpbackend
      backend awshttpbackend
          mode http
          server aws 169.254.169.254:80 check
      frontend httpsfrontend
          # Frontend bound on all network interfaces on the HTTPS port
          bind *:${HTTPS_PORT}
          mode tcp
          use_backend awshttpsbackend
      backend awshttpsbackend
          mode tcp
          server aws 169.254.169.254:443 check
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: awsproxy
    name: awsproxy
  spec:
    ports:
    - name: awshttpport
      port: 80
      protocol: TCP
      targetPort: ${{HTTP_PORT}}
    - name: awshttpsport
      port: 443
      protocol: TCP
      targetPort: ${{HTTPS_PORT}}
    selector:
      app: awsproxy
    sessionAffinity: None
    type: ClusterIP
parameters:
- name: HAPROXY_IMAGE_TAG
  required: true
  value: "0.4"
- name: HTTP_PORT
  required: true
  value: "8088"
- name: HTTPS_PORT
  required: true
  value: "8043"