Setup

Aliases

kk - kubectl --namespace=kube-system
k - kubectl
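
These are just plain bash aliases, e.g. in ~/.bashrc:

alias k='kubectl'
alias kk='kubectl --namespace=kube-system'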

Set up 2 clusters; for the sake of convenience we'll set up one GCE and one GKE. On GCE we're running the normal e2e cluster.

$ getclusters
gke_kubernetesdev_us-central1-f_ingress-us
kubernetesdev_e2e-test-beeps
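
getclusters and usecluster are personal helpers; assuming they just wrap kubectl's context handling, the plain equivalents would be:

$ kubectl config get-contexts
$ kubectl config use-context gke_kubernetesdev_us-central1-f_ingress-us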

UID Assignment

Check the ingress uid on the gke cluster:

$ kk get configmaps -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    uid: ff1107f83ed600c0
  kind: ConfigMap
  metadata:
    creationTimestamp: 2016-08-09T23:58:49Z
    name: ingress-uid
    namespace: kube-system
    resourceVersion: "100"
    selfLink: /api/v1/namespaces/kube-system/configmaps/ingress-uid
    uid: 373b703b-5e8d-11e6-a138-42010af00048
kind: List
metadata: {}

Assign the uid to the e2e cluster:

$ usecluster kubernetesdev_e2e-test-beeps
$ k get ing
$ kk edit configmaps ingress-uid
... and assign ff1107f83ed600c0 to the uid field
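
If you'd rather not edit interactively, a patch with the same uid should work too:

$ kk patch configmap ingress-uid -p '{"data":{"uid":"ff1107f83ed600c0"}}'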

You will always have to assign the same uid to the 2 clusters to get them to share cloud resources, but you will not always have to do the next step, specifically not after (kubernetes-retired/contrib#1363) goes in. That PR watches the ingress uid config map for changes for exactly this scenario. This is also why I picked a GCE cluster: it's easier to restart the master.

Restart the controller on the e2e cluster:

$ gcloud compute ssh e2e-test-beeps-master
$ docker ps | grep glbc
a73160aa5a23        bprashanth/glbc:0.7.0                                                              
$ docker kill a73160aa5a23
$ docker ps | grep glbc
c7e1dd7ad036        bprashanth/glbc:0.7.0
$ tail /var/log/glbc.log
I0810 00:22:35.561301       5 main.go:173] Starting GLBC image: glbc:0.7.1, cluster name 
I0810 00:22:36.640790       5 main.go:276] Using saved cluster uid "ff1107f83ed600c0"

Ingress 1

Create the first https ingress. First confirm the e2e cluster's configmap now has the shared uid:

$ kk get configmaps ingress-uid -o yaml
apiVersion: v1
data:
  uid: ff1107f83ed600c0
kind: ConfigMap
metadata:
  creationTimestamp: 2016-08-09T02:42:57Z
  name: ingress-uid
  namespace: kube-system
  resourceVersion: "148034"
  selfLink: /api/v1/namespaces/kube-system/configmaps/ingress-uid
  uid: fb0237be-5dda-11e6-9056-42010af00002
$ cat ~/rtmp/ingress/tls-app.yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLekNDQWhPZ0F3SUJBZ0lKQU95Nk1BZ3BYWkNYTUEwR0NTcUdTSWIzRFFFQkN3VUFNQ3d4RkRBU0JnTlYKQkFNTUMyVjRZVzF3YkdVdVkyOXRNUlF3RWdZRFZRUUtEQXRsZUdGdGNHeGxMbU52YlRBZUZ3MHhOakExTVRjeApPREF3TURWYUZ3MHhOekExTVRjeE9EQXdNRFZhTUN3eEZEQVNCZ05WQkFNTUMyVjRZVzF3YkdVdVkyOXRNUlF3CkVnWURWUVFLREF0bGVHRnRjR3hsTG1OdmJUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0MKZ2dFQkFLMkxGZzFnR1dMM2MyZUdhUDRIek9ncTNWSW1wcFc0VVZEOU1mUWxyU2RmWDdDRjJ1M05taXRHNnFkawpiakQ5RXFSTkpibElSZTBMcE1aL2E4Zjlvems3M2VuT0huM0Jzd1A5RTJlSjhmODhCejk1MGZPM3dXcW5qanMzCk01NWxXMkFWZ0pvVWVTT3JSalZDakp1TzhJWHFBbDdQMlZtamlvUGdFaHV0NU9tVThaS21BRTNhcGlJR3dJZm8KenZYNjAwV0ZtdGhkQ3IrMFBMU3ZQR29jay9ySDcvbWJvbGNLVHRkdm41bGE2aUY1enVpZXRWbVA2M0wzekVZUAp0UVNoMnNRSGxVbllEZnl4a1ppU2UrQmE5ZW8wRjBlNzc0MlZhQUkzUERDTDhNZ1Z5VVkybEdXZXhLbFN5TFgzCkpGWDM5NjlXSGZ3ejNjYXhybG4wUEFpNmFqVUNBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGR0tEbU5VMWJJaGEKMWFaTDVtYkRCV2pvWTROMU1COEdBMVVkSXdRWU1CYUFGR0tEbU5VMWJJaGExYVpMNW1iREJXam9ZNE4xTUF3RwpBMVVkRXdRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUo3WDM1UXp1andEQXJHTUorOXFuRzZECkhkYUNBQTVVb2dZdWpTYmdNa3NFU1VzZzJnMDdvRG41T2N6SFF4SmQ5eWtYVzVkRmQvelpaTzZHRUlPZnFUQ0cKOUpUQ29hcEJxZnh3eHRhaTl0dUdOamYwSXpVL2NoT3JYamowS1Y1Y2paVmRZd3F3QVVUa0VEaVE4dlF3YjVFZQprTHVXNXgwQlFXT1YwdU1wengwYU1PSkgxdmdGOWJPZGpPbyt1UkpBME95SWszYmRFcmt5MWg2QmNkcUpPUTA1CkRNLzgySEdqMCtpNGRnOGptQnlnRmpmYTk3YkczenVOTm1UVkhPK3hxbHJyZUdPQ3VmRi9CWUFFc1ZyODdlWnMKd2M1UFpJamRvekNSRlNCem9YLzlSMGtQQWI3Vms4bGpJUE9yeUYzeXR3MERiQnpKZWRMSWFyWE5QYWV3QUpNPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2QUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktZd2dnU2lBZ0VBQW9JQkFRQ3RpeFlOWUJsaTkzTm4KaG1qK0I4em9LdDFTSnFhVnVGRlEvVEgwSmEwblgxK3doZHJ0elpvclJ1cW5aRzR3L1JLa1RTVzVTRVh0QzZURwpmMnZIL2FNNU85M3B6aDU5d2JNRC9STm5pZkgvUEFjL2VkSHp0OEZxcDQ0N056T2VaVnRnRllDYUZIa2pxMFkxClFveWJqdkNGNmdKZXo5bFpvNHFENEJJYnJlVHBsUEdTcGdCTjJxWWlCc0NINk03MSt0TkZoWnJZWFFxL3REeTAKcnp4cUhKUDZ4Ky81bTZKWENrN1hiNStaV3VvaGVjN29uclZaait0eTk4eEdEN1VFb2RyRUI1VkoyQTM4c1pHWQprbnZnV3ZYcU5CZEh1KytObFdnQ056d3dpL0RJRmNsR05wUmxuc1NwVXNpMTl5UlY5L2V2VmgzOE05M0dzYTVaCjlEd0l1bW8xQWdNQkFBRUNnZ0VBSFc5MThoYld0MzZaU0huMzNQNmR0dE51YnJ5M2pMV1N0Vlg4M3hoMDRqUy8KR2tYWitIUGpMbXY4NlIrVHdTTnJ3Z3FEMTRWMnR0byt2SnhvUDZlNXc3OXZ5SFI1bjRMM1JqbnF6S2tOTHVtVApvU1NjZytZckhGZ0hPK3dGQ1Z6UHZ1Qm15N3VsUUhPUW1RQU1zV1h4VGdWL0dXM1B3L0NGVWhEemdWWmhlV3pPCmV3WTlyRFd1QXp6S1NkTWE0Rk5maWpWRllWcDE3RzUwZktPVzNaTk1yOWZjS01CSkdpdU84U1hUT0lGU0ppUFAKY1UzVVpiREJLejZOMXI5dzF1VVEralVVbDBBZ2NSOHR4Umx4YTBUUzNIUGN0TnIvK1BzYUg0ZFd5TzN1Y3RCUAo5K2lxVWh5dlBHelBUYzFuTXN4Wk9VREwreXJiNlNuVHA3L3BiYzROZ1FLQmdRRFZBOHd0K2ZITjZrS1ViV0YzCmNlOC9CMjNvakIzLytlSDJrTExONkVkSXJ2TFNPTlgzRFMvK3hxbkFlOFEzaDZadmpSSGdGKytHZkM3U2w5bS8KMGZTTWovU0VKY3BWdzBEdjB5ckU1ZzFBV25BbFpvT0E0ZWQ0N0lnb3ZsLys3ZjdGd3lRMm9lMkZJSmtzVENBSApMR1lVUUdZRFU4SkhTQXMweWwramo2NFVPUUtCZ1FEUWtEZE9pbDM0S1lQdURBaXBWTHh6bUlrRFRYS05zelJjCkxaQ1NPUUpKMTVDcjRSaUlyK2ZydXkwaEJQcjBZdmd2RDdUZHBuaUliaFlONnJRcnhXRWdLUkNiZnlTcUdmR2YKN0IwS1BNTWN6RkU2dXRBNTR2andseFA4VVZ4U1lENlBudUNTQmptOCthbVdwRkhpMkp0MzVKNjc5Y0kyc0tHUwoyMzh5WFd5ZDNRS0JnQUdMUElDY3ppYmE2czZlbUZWQVN5YWV6Q29pVWRsWUcwNHBNRktUdTJpSWRCUVgrMTBHCkNISUZTSmV2amZXRkV5eTl6Z0pjeWd5a2U4WmsrVndOam9NeVMraGxTYmtqYUNZVTFydUVtMVg3RWRNRGtqSnQKOExxTXBGUC9SVHpZeHI3eU1pSC9QSFI1andLbUxwayt0aUt4Y012WFlKSVpzSk1hWUdVVUZvUHBBb0dBU2JEcgpHY0VoK3JFUWdHZVlGOXhzeVpzM3JnY0xWcTNlN2tMYk5nOFdrK2lxb1ZCalRzaDRkWDRwTCtXR2xocngvZzdhCnBRWlF5RU85WHlWeWk1U3VBS01Cenk5WlVSRGhvdFBXWHV1aE5PZXNPOGdPRXFYenQyNXFEVmpoK2VrdnNhYzkKU2RzUlE0Z2pONnJQbEF0Y3d6dndLaEZua2ROUEE0aXlwS1VGMzdFQ2dZQURuVmt6MnFyMDRxNWtzaGRNcmI4RgpBUVJhSlhMcXBrZThvQUdsQ0pNMmVCZ1FZdjlEMWUxaXdSZFlrS0VUWmtIcXVaRDNVeVhLK1JPRU5uWjJHdkcwCmFJYlhHOTY4ZFhpZit6SzF3NmxkZWRCdGZIa1BTYTdCQ0ZCWURyaUc1NC9uTjZiWUFpem1NY2ZlWExlS0pPRG8KTHhTb1Iwek5NemZNVHFwYnhjdHZJUT09Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
kind: Secret
metadata:
  name: tls-secret
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-https
  labels:
    app: echoheaders-https
spec:
  type: NodePort
  ports:
  - port: 80
    # This port needs to be available on all nodes in the cluster
    nodePort: 30301
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders-https
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: echoheaders-https
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: echoheaders-https
    spec:
      containers:
      - name: echoheaders-https
        image: gcr.io/google_containers/echoserver:1.3
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  tls:
  # This assumes tls-secret exists.
  # To generate it run the make in this directory.
  - secretName: tls-secret
  backend:
    serviceName: echoheaders-https
    servicePort: 80
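
If you don't have the Makefile mentioned in the Ingress comment handy, a self-signed pair like the one above can be generated and base64'd roughly like this (GNU base64; the CN is arbitrary since we curl with -k):

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout tls.key -out tls.crt -subj '/CN=example.com/O=example.com'
$ base64 -w0 tls.crt    # value for the secret's tls.crt field
$ base64 -w0 tls.key    # value for the secret's tls.key field

Create everything:

$ k create -f ~/rtmp/ingress/tls-app.yaml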

Wait ~10m for the load balancer to provision:

$ k get po
NAME                      READY     STATUS             RESTARTS   AGE
echoheaders-https-408hq   1/1       Running            0          39s
echoheaders-https-lgopc   1/1       Running            0          39s
$ k logs echoheaders-https-408hq --follow
10.180.3.1 - - [10/Aug/2016:00:31:40 +0000] "GET / HTTP/1.1" 200 395 "-" "GoogleHC/1.0"
10.240.0.4 - - [10/Aug/2016:00:31:40 +0000] "GET / HTTP/1.1" 200 395 "-" "GoogleHC/1.0"
10.240.0.4 - - [10/Aug/2016:00:31:40 +0000] "GET / HTTP/1.1" 200 395 "-" "GoogleHC/1.0"
10.240.0.4 - - [10/Aug/2016:00:31:40 +0000] "GET / HTTP/1.1" 200 395 "-" "GoogleHC/1.0"

$ k describe ing test
Name:			test
Namespace:		default
Address:		130.211.5.194
Default backend:	echoheaders-https:80 (10.180.1.6:8080,10.180.3.7:8080)
TLS:
  tls-secret terminates 
Rules:
  Host	Path	Backends
  ----	----	--------
  *	* 	echoheaders-https:80 (10.180.1.6:8080,10.180.3.7:8080)
Annotations:
  static-ip:			k8s-fw-default-test--ff1107f83ed600c0
  target-proxy:			k8s-tp-default-test--ff1107f83ed600c0
  url-map:			k8s-um-default-test--ff1107f83ed600c0
  backends:			{"k8s-be-30301--ff1107f83ed600c0":"HEALTHY"}
  forwarding-rule:		k8s-fw-default-test--ff1107f83ed600c0
  https-forwarding-rule:	k8s-fws-default-test--ff1107f83ed600c0
  https-target-proxy:		k8s-tps-default-test--ff1107f83ed600c0
Events:
  FirstSeen	LastSeen	Count	From				SubobjectPath	Type		Reason	Message
  ---------	--------	-----	----				-------------	--------	------	-------
  5m		5m		1	{loadbalancer-controller }			Normal		ADD	default/test
  3m		3m		1	{loadbalancer-controller }			Normal		CREATE	ip: 130.211.5.194

$ curl https://130.211.5.194 -k
CLIENT VALUES:
client_address=10.180.3.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://130.211.5.194:8080/

SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001

HEADERS RECEIVED:
accept=*/*
connection=Keep-Alive
host=130.211.5.194
user-agent=curl/7.35.0
via=1.1 google
x-cloud-trace-context=980f144c1b67df172bfb0b8d2e513a2e/7292388594410828040
x-forwarded-for=104.132.1.91, 130.211.5.194
x-forwarded-proto=https

Check that the IP allocated to the LB is static:

$ gcloud compute addresses list  --global | grep -i 130.211.5.194
k8s-fw-default-test--ff1107f83ed600c0          130.211.5.194    IN_USE

Ingress 2

Switch over to the second cluster:

$ usecluster gke_kubernetesdev_us-central1-f_ingress-us
$ kk get configmaps -o template ingress-uid --template '{{.data}}'
map[uid:ff1107f83ed600c0]

Create the same ingress with a key modification:

metadata:
  name: test
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "k8s-fw-default-test--ff1107f83ed600c0"

where the annotation value is the name of the global static IP allocated for the first ingress.
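
The full ing.yaml is then just the earlier Ingress with that annotation added:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "k8s-fw-default-test--ff1107f83ed600c0"
spec:
  tls:
  - secretName: tls-secret
  backend:
    serviceName: echoheaders-https
    servicePort: 80

Create this ingress: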

$ k create -f ing.yaml
$ k get ing
NAME      HOSTS     ADDRESS         PORTS     AGE
test      *         130.211.5.194   80, 443   1m

As I do this I realize it would've been a better idea to invert the order and create the GKE ingress first, since I can debug the e2e cluster by sshing into its master. Meh.

$ k describe ing test 
Name:			test
Namespace:		default
Address:		130.211.5.194
Default backend:	echoheaders-https:80 (10.152.1.3:8080,10.152.2.4:8080)
TLS:
  tls-secret terminates 
Rules:
  Host	Path	Backends
  ----	----	--------
  *	* 	echoheaders-https:80 (10.152.1.3:8080,10.152.2.4:8080)
Annotations:
  https-target-proxy:		k8s-tps-default-test--ff1107f83ed600c0
  target-proxy:			k8s-tp-default-test--ff1107f83ed600c0
  url-map:			k8s-um-default-test--ff1107f83ed600c0
  backends:			{"k8s-be-30301--ff1107f83ed600c0":"Unknown"}
  forwarding-rule:		k8s-fw-default-test--ff1107f83ed600c0
  https-forwarding-rule:	k8s-fws-default-test--ff1107f83ed600c0
Events:
  FirstSeen	LastSeen	Count	From				SubobjectPath	Type		Reason	Message
  ---------	--------	-----	----				-------------	--------	------	-------
  3m		3m		1	{loadbalancer-controller }			Normal		ADD	default/test
  2m		2m		1	{loadbalancer-controller }			Normal		CREATE	ip: 130.211.5.194
  2m		2m		1	{loadbalancer-controller }			Warning		GCE	googleapi: Error 400: The resource 'projects/kubernetesdev/global/firewalls/k8s-fw-l7--ff1107f83ed600c0' is not ready, resourceNotReady

I guess the firewall event is spurious, or something; it could use some debugging, but the count is 1 and I don't see anything wrong with the firewall.
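
For the record, the rule it's complaining about can be inspected directly:

$ gcloud compute firewall-rules describe k8s-fw-l7--ff1107f83ed600c0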

$ k get po
NAME                      READY     STATUS    RESTARTS   AGE
echoheaders-https-0u0ue   1/1       Running   0          2m
echoheaders-https-f4w8z   1/1       Running   0          2m

$ k logs echoheaders-https-0u0ue --follow
10.152.2.1 - - [10/Aug/2016:00:49:43 +0000] "GET / HTTP/1.1" 200 395 "-" "GoogleHC/1.0"
10.240.0.5 - - [10/Aug/2016:00:49:43 +0000] "GET / HTTP/1.1" 200 395 "-" "GoogleHC/1.0"

Moment of truth:

$ k get ing 
NAME      HOSTS     ADDRESS         PORTS     AGE
test      *         130.211.5.194   80, 443   5m

$ curl https://130.211.5.194 -k
CLIENT VALUES:
client_address=10.240.0.5
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://130.211.5.194:8080/

Success! (client_address=10.240.0.5 matches the node IP seen in this cluster's pod logs above, so the request was presumably served through the second cluster's instance group.)

Issues

Well, actually, there are the following major issues (listed below the output). For context, note the shared backend service now spans an instance group from each cluster:

$ gcloud compute backend-services describe k8s-be-30301--ff1107f83ed600c0
affinityCookieTtlSec: 0
backends:
- balancingMode: UTILIZATION
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/kubernetesdev/zones/asia-east1-b/instanceGroups/k8s-ig--ff1107f83ed600c0
  maxUtilization: 0.8
- balancingMode: UTILIZATION
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/kubernetesdev/zones/us-central1-b/instanceGroups/k8s-ig--ff1107f83ed600c0
  maxUtilization: 0.8
creationTimestamp: '2016-08-09T17:27:28.477-07:00'
  • The default backends don't share the same node port across clusters (kk get svc default-http-backend), so each ingress controller will try to GC the other's. We either need to give them the same port, or prevent GC by piping a flag (to turn this off: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/backends/backends.go#L83). See the sketch after this list for comparing the ports.
  • Cleanup. Deleting an ingress in one cluster will result in that controller trying to delete the ingress globally, which will fail. The controller should keep retrying; I don't know if it'll just work out in the end.
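
A quick sketch for comparing those default-backend node ports across the two clusters, using the context names from the top of this page:

$ for c in gke_kubernetesdev_us-central1-f_ingress-us kubernetesdev_e2e-test-beeps; do
    kubectl --context="$c" --namespace=kube-system get svc default-http-backend \
      -o template --template '{{(index .spec.ports 0).nodePort}}'; echo
  done

If the two numbers differ, each controller will try to GC the other's backend service.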