
@iMartyn
Created November 21, 2017 09:20

Because people don't seem to be getting why this is a bug, I'm going to try to show it here with clear examples:

Take, for example, a Node.js microservice whose pods carry the label app: data-app and listen on port 3000. This is exposed as a Service in the cluster as a plain-HTTP endpoint, and should be reachable inside the cluster as http://coolservice(.namespace.svc.cluster.local)/. It should also be exposed externally as https://mycoolservice.example.com/. It should not be exposed inside the cluster as http://coolservice:443/: while that is, as far as I know, not RFC-breaking, it is extremely bad practice and extremely hard for users to reason about.

Exhibit A: listens externally on port 80 as SSL!

apiVersion: v1
kind: Service
metadata:
  name: coolservice
  annotations:
    dns.alpha.kubernetes.io/external: |
      mycoolservice.example.com, otherdnsname.example.com
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:654564658:certificate/654ba45-0000-0000-000000000000
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: data-app
  type: LoadBalancer
status:
  loadBalancer: {}

Note that adding the annotation service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443" does not make this do the right thing.
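For reference, the attempted variant would look like this (annotation fragment only; the rest of the manifest is unchanged from Exhibit A):

```yaml
metadata:
  annotations:
    # Attempted fix mentioned above: restrict TLS termination to a
    # 443 listener. Per the observed behaviour, it still does not
    # produce the desired HTTPS-on-443 / HTTP-on-80 split.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
```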

Exhibit B: listens internally on 443 (obviously)

apiVersion: v1
kind: Service
metadata:
  name: coolservice
  annotations:
    dns.alpha.kubernetes.io/external: |
      mycoolservice.example.com, otherdnsname.example.com
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:654564658:certificate/654ba45-0000-0000-000000000000
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  ports:
  - name: http
    port: 443
    protocol: TCP
    targetPort: 3000
  selector:
    app: data-app
  type: LoadBalancer
status:
  loadBalancer: {}

Exhibit C: the closest thing we can get to work, but still wrong:

apiVersion: v1
kind: Service
metadata:
  name: coolservice
  annotations:
    dns.alpha.kubernetes.io/external: |
      mycoolservice.example.com, otherdnsname.example.com
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:654564658:certificate/654ba45-0000-0000-000000000000
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 3000
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: data-app
  type: LoadBalancer
status:
  loadBalancer: {}

This means that the service internally exposes port 443. Fine, it can be reached at http://coolservice/, but if someone sees it listening on 443 internally, they would reasonably assume they can speak HTTPS to it on port 443, which they cannot.
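To illustrate why that assumption bites, here is a small local sketch (plain Python, no cluster needed; the server name and port are stand-ins): a plain-HTTP listener answers normal HTTP requests on whatever port it is given, but a client that assumes the port speaks TLS cannot complete a handshake against it.

```python
import http.server
import socket
import ssl
import threading
import urllib.request

class OkHandler(http.server.BaseHTTPRequestHandler):
    """Stands in for the plain-HTTP pod behind the Service."""
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

# Serve plain HTTP on an arbitrary local port (imagine it is 443).
server = http.server.HTTPServer(("127.0.0.1", 0), OkHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Plain HTTP works fine, whatever the port number happens to be...
plain_body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()

# ...but a client that sees "443" and reasonably assumes TLS never
# gets a handshake, because the backend only speaks plain HTTP.
handshake_failed = False
ctx = ssl.create_default_context()
try:
    with socket.create_connection(("127.0.0.1", port), timeout=5) as raw:
        # "coolservice" here is just the hypothetical in-cluster name.
        with ctx.wrap_socket(raw, server_hostname="coolservice"):
            pass
except OSError:  # ssl.SSLError, or a timeout waiting for a ServerHello
    handshake_failed = True

server.shutdown()
print(plain_body, handshake_failed)
```

The same mismatch is exactly what a user hits when they point an HTTPS client at port 443 of the Exhibit C service.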
