Created October 1, 2015 17:37
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoheadersx
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30301
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders
---
apiVersion: v1
kind: Service
metadata:
  name: echoheadersdefault
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30302
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders
---
apiVersion: v1
kind: Service
metadata:
  name: echoheadersy
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30284
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: echoheaders
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echoheaders
    spec:
      containers:
      - name: echoheaders
        image: bprashanth/echoserver:0.0
        ports:
        - containerPort: 8080
```
GCE Load Balancer Controller
A controller that orchestrates the life-cycle of GCE L7 Load Balancers based on Kubernetes Ingress.
Disclaimer:
Overview
GCE has a single resource representing an L7 loadbalancer. To achieve L7 loadbalancing through kubernetes, we employ a resource called Ingress. Each Ingress creates the following GCE resource graph:

Global Forwarding Rule -> TargetHttpProxy -> Url Map -> Backend Service -> Instance Group

The L7 controller manages the lifecycle of each component in the graph. If an edge is disconnected, it fixes it. Each Ingress translates to a new GCE L7; the Instance Group and Backend Services are shared across L7s. This allows fanout, whereby you can acquire a single public IP from GCE and use it to route traffic to various backend services based on the url path and hostname.
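As a sketch of such a fanout, an Ingress along these lines could route two url paths on one IP to the echoheaders services defined above, with a third service as the default backend. The `apiVersion` and the hostname are illustrative assumptions; check what your cluster supports:

```yaml
# Hypothetical fanout Ingress: one public IP, two paths, three services.
apiVersion: extensions/v1beta1   # assumed version, verify against your cluster
kind: Ingress
metadata:
  name: echomap                  # illustrative name
spec:
  backend:                       # default backend for unmatched traffic
    serviceName: echoheadersdefault
    servicePort: 80
  rules:
  - host: foo.bar.com            # illustrative hostname
    http:
      paths:
      - path: /foo
        backend:
          serviceName: echoheadersx
          servicePort: 80
      - path: /bar
        backend:
          serviceName: echoheadersy
          servicePort: 80
```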
Implementation details
The controller manages cloud resources through a notion of pools. Each pool is the representation of the last known state of a logical cloud resource. Pools are periodically synced with the desired state, as reflected by the kubernetes api. When you create a new Ingress, the following happens:
Periodically, each pool checks that it has a valid connection to the next hop in the resource graph above. For example, the backend pool checks that each backend is connected to the instance group and that the node ports match; the instance group checks that all the kubernetes nodes are part of the instance group; and so on. Since Backends are a limited resource, they're shared (everything is limited by your quota, but this applies doubly to backend services). This means you can set up N Ingresses exposing M services through different paths, and the controller will only create M backends. When all the Ingresses are deleted, the backend pool garbage-collects the backends.
Creation
Before you can start creating Ingresses you will need a loadbalancer/Ingress controller. We can use the rc.yaml in this directory:

```console
$ kubectl create -f rc.yaml
replicationcontroller "gcelb" created
$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
gcelb-xxa53   1/1       Running   0          12s
```
A couple of things to note about this controller:
The loadbalancer controller will watch for Services, Nodes and Ingress. Nodes already exist (the nodes in your cluster). We need to create the other 2. You can do so using the ingress-app.yaml in this directory.
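Assuming the file is named ingress-app.yaml as described, creating both resources is a single command:

```console
$ kubectl create -f ingress-app.yaml
```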
A couple of things to note about the Ingress:
You can tail the logs of the controller to observe its progress:
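For example, using the controller pod name from the earlier output (yours will differ):

```console
$ kubectl logs -f gcelb-xxa53
```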
When it's done, it will update the status of the Ingress with the ip of the L7 it created:
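You can watch for the address to appear with something like:

```console
$ kubectl get ing
```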
Go to your GCE console and confirm that the following resources have been created through the HTTPLoadbalancing panel:
The HTTPLoadBalancing panel will also show you whether your backends have responded to the health checks; wait till they do. This can take a few minutes. If you see `Health status will display here once configuration is complete.`, the L7 is still bootstrapping; wait till you have `Healthy instances: X`. Even though the GCE L7 is driven by our controller, which notices the kubernetes health checks of a pod, we still need to wait for the first GCE L7 health check to complete. Once your backends are up and healthy you can curl them through the loadbalancer. You can also edit `/etc/hosts` instead of using `--resolve`.
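As a hedged example of hitting the loadbalancer once it's healthy (the `$LB_IP` placeholder stands for the address from the Ingress status; the hostname matches the rule in your Ingress):

```console
$ curl --resolve foo.bar.com:80:$LB_IP http://foo.bar.com/foo
```

Alternatively, add a line like `$LB_IP foo.bar.com` to /etc/hosts and curl the hostname directly.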
Updates
Say you don't want a default backend and you'd like to allow all traffic hitting your loadbalancer at /foo to reach your echoheaders backend service, not just the traffic for foo.bar.com. You can modify the Ingress Spec:
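A sketch of such a modified spec: dropping the `host` makes the /foo rule match all hostnames, and omitting the default backend leaves unmatched traffic to the controller's default. The `apiVersion` and service name here are assumptions based on the services defined earlier:

```yaml
apiVersion: extensions/v1beta1   # assumed version
kind: Ingress
metadata:
  name: echomap                  # illustrative name
spec:
  rules:
  - http:                        # no host field: rule applies to all hostnames
      paths:
      - path: /foo
        backend:
          serviceName: echoheadersx   # assumed target service
          servicePort: 80
```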
and replace the existing Ingress:
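Assuming the modified spec is saved back to ingress-app.yaml, the replace is:

```console
$ kubectl replace -f ingress-app.yaml
```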
A couple of things to note about this particular update:
A note on resilience
The loadbalancer controller executes a control loop: it uses the kubernetes resources as a spec for the desired state and the GCE cloud resources as the observed state, and drives the observed toward the desired. This means you can go to the GCE UI and disconnect links in the graph, and the controller will fix them for you. An easy link to break is the url map itself, but you can also disconnect a target proxy from the url map, or remove an instance from the instance group (note this is different from deleting the instance; the loadbalancer controller will not recreate it if you do so). Modify one of the url links in the map to point to another service and wait till the controller syncs (this happens as frequently as you tell it to, via the --resync-period flag, which defaults to 30s). Note that the GCE api itself won't allow you to delete resources that have dependencies, but it will let you break links in ways that black-hole traffic, and that's exactly what the controller will stop you from doing. The same goes for the kubernetes side of things: the api server will validate against obviously bad updates, but if you, say, relink an Ingress so it points to the wrong backends, the controller will blindly follow.
Deletion
Most production loadbalancers live as long as the nodes in the cluster and are torn down when the nodes are destroyed. That said, there are plenty of use cases for deleting an Ingress, deleting a loadbalancer controller, or just purging external loadbalancer resources altogether.
Deleting a loadbalancer controller pod will not affect the loadbalancers themselves; this way your backends won't suffer a loss of availability if the scheduler pre-empts your controller pod. Deleting a single loadbalancer is as easy as deleting an Ingress via kubectl:
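For example, if your Ingress was named echomap (substitute the name shown by `kubectl get ing`):

```console
$ kubectl delete ing echomap
```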
GCE BackendServices are ref-counted and deleted by the controller as you delete Kubernetes Ingresses. If you want to delete everything in the cloud when the loadbalancer controller pod dies, start it with the --quit-on-sigterm flag. When a pod is killed it's first sent a SIGTERM, followed by a grace period (set to 10 minutes for loadbalancer controllers), followed by a SIGKILL. The controller pod uses this time to delete cloud resources. If there is a failure in this stage, just recreate and kill the pod, or send a GET to its /quit endpoint.
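To exercise that teardown path, you can delete the controller pod itself, which sends it the SIGTERM described above (pod name from the earlier output; yours will differ):

```console
$ kubectl delete pod gcelb-xxa53
```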
Troubleshooting:
A typical sign of trouble is repeated retries in the logs:
Wishlist: