@rsmitty
Last active March 25, 2016 14:13

Rolling Updates

Here's how a rolling update works:

  • A Deployment specification is created (a YAML file) in which labels are specified for the Pods that will be created. In this example, assume app:nginx, tier:frontend, version:1.0.

  • A strategy of RollingUpdate is also specified in the Deployment spec.
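The first two bullets could look something like this in deployment.yaml. This is only a sketch: the Deployment name, replica count, and image tag are assumptions for illustration; the labels and strategy come from the example above.

```yaml
# deployment.yaml - sketch only; name, replicas, and image tag are assumed
apiVersion: extensions/v1beta1    # Deployment API group as of early 2016
kind: Deployment
metadata:
  name: nginx-deployment          # assumed name
spec:
  replicas: 3                     # assumed replica count
  strategy:
    type: RollingUpdate           # replace Pods gradually, not all at once
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend
        version: "1.0"
    spec:
      containers:
      - name: nginx
        image: nginx:1.0          # image tag assumed for illustration
        ports:
        - containerPort: 80
```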

  • The Deployment is created with kubectl create -f deployment.yaml

  • Under the hood, a ReplicaSet is created (the successor to the older Replication Controller) and the v1.0 Pods are deployed with the specified number of replicas.

  • A Service spec is created to expose the Pods that have now been deployed, specifying things like "expose this on port 80". The kicker is the label selector: you define a selector one level above the version label, in this example app:nginx, tier:frontend, so it matches Pods of any version.

  • The Service is created with kubectl create -f service.yaml.
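The Service from the two bullets above might look like this in service.yaml. Again a sketch: the Service name is an assumption; the port and the version-free selector come from the example.

```yaml
# service.yaml - sketch; the Service name is assumed
apiVersion: v1
kind: Service
metadata:
  name: nginx-service             # assumed name
spec:
  ports:
  - port: 80                      # "expose this on port 80"
  selector:                       # note: no version label here,
    app: nginx                    # so Pods of every version are matched
    tier: frontend
```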

  • Now say we want to update our nginx to v2.0, so deployment.yaml is updated to point at a new Docker image, and the labels are changed to app:nginx, tier:frontend, version:2.0.
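Only the Pod template section of deployment.yaml needs to change for the v2.0 rollout. A sketch of the updated fragment (image tag assumed):

```yaml
# fragment of the updated deployment.yaml (sketch)
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend
        version: "2.0"            # bumped from "1.0"
    spec:
      containers:
      - name: nginx
        image: nginx:2.0          # new Docker image; tag assumed
```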

  • We deploy the updated yaml with kubectl apply -f deployment.yaml.

  • As long as the Deployment's name and other identifying metadata are unchanged (they should be), the rolling update begins.

  • Under the hood, a new ReplicaSet is created with a distinct generated name; it is scaled up as the old one is scaled down.

  • One by one, a new v2.0 Pod is created and an old v1.0 Pod is torn down once the new one is ready.

  • The Service never changes: it selects only on app:nginx, tier:frontend, and has thus served both versions for the entirety of the rollout.
