Lab: CI/CD with blue green deployment

In this lab we are going to deploy two custom-made applications on OpenShift:

  • demo-color-api - serves as the public-facing app

  • demo-color-backend - serves data to demo-color-api

We will show you how to deploy these applications on OpenShift and how to use Jenkins to achieve continuous integration and continuous delivery for them.

Finally, we will show you how to achieve deployment stability via Blue/Green deployment. Blue/Green deployment is a technique that addresses the challenges of continuous deployment, such as minimizing downtime during the rollout of a new version. It also enables you to quickly roll back any changes, as it is based on the idea of having two identical environments called "blue" and "green" and switching between them.

Exercise: Deploy demo-color-backend

Prepare the OpenShift cluster

Run minishift start to get your cluster up and running.

Run minishift console to get access to the OpenShift console and copy the oc login …​ snippet.

oc login ...

Create project

oc new-project <NAME>

We will deploy the demo-color-backend application on our OpenShift cluster. You can find its code and deployment configuration on GitHub: https://github.com/prgcont/demo-color-backend.

As we mentioned above, we will need to deploy this service to two environments. For each of them we need to deploy a DeploymentConfig, a Service and a Route.

To deploy a blue one, run:

oc process -f https://raw.githubusercontent.com/prgcont/demo-color-backend/master/template.app.yaml COLOR=blue | oc apply -f -

To deploy a green one, run:

oc process -f https://raw.githubusercontent.com/prgcont/demo-color-backend/master/template.app.yaml COLOR=green | oc apply -f -
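If you want to review the objects the template expands to before applying them, oc process can print them instead of piping to oc apply:

oc process -f https://raw.githubusercontent.com/prgcont/demo-color-backend/master/template.app.yaml COLOR=blue -o yaml | less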

Finally, we need to deploy our pipeline, which is responsible for deploying our application and switching the master route. To do so, run:

oc process -f https://raw.githubusercontent.com/prgcont/demo-color-backend/master/template.pipeline.yaml | oc apply -f -
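The pipeline is a BuildConfig with a JenkinsPipeline strategy, so you can check that it was created with the commands below; in minishift, a Jenkins instance is typically provisioned automatically once such a BuildConfig exists, and its pod should show up in the project.

oc get bc
# the Jenkins pod should appear once it has been provisioned
oc get pods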

You can see what's happening in the cluster by invoking:

oc get all

Alternatively, you can use OpenShift web console to examine deployed resources.

Tasks:

  • Use cat to compare the template definition with the objects defined in the corresponding OpenShift project (an example follows the task list)

  • Log in to Jenkins and look at the defined pipeline
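One way to do the comparison from the first task; the object name below is an assumption based on the COLOR=blue parameter used above:

# the template as stored in the repository
curl -s https://raw.githubusercontent.com/prgcont/demo-color-backend/master/template.app.yaml | less
# one of the objects it created (the -blue suffix is an assumption)
oc get dc demo-color-backend-blue -o yaml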

Exercise: Deploy demo-color-api

This service serves as our public-facing application, which we will access during the workshop. It uses demo-color-backend via its master route.

As with the backend service, the code, deployment configuration and pipeline definition can be found on GitHub: https://github.com/vpavlin/demo-color-api

Again, we will deploy blue and green apps, starting with blue:

oc process -f https://raw.githubusercontent.com/prgcont/demo-color-api/master/template.app.yaml COLOR=blue | oc apply -f -

And a green one:

oc process -f https://raw.githubusercontent.com/prgcont/demo-color-api/master/template.app.yaml COLOR=green | oc apply -f -

These templates will also deploy a master route, which will allow us to switch between deployments.

We also need to configure our application. A URL is expected in the configmap.yaml file. The value should match the master route host of https://github.com/prgcont/demo-color-backend (as generated by OpenShift). The value needs to be in the form http://$HOST:$PORT. You can omit $PORT if it equals 80.
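A quick way to look up that host is to query the backend's master route; the route name demo-color-backend is an assumption, so check oc get routes if yours differs:

# prints only the host of the (assumed) master route
oc get route demo-color-backend -o jsonpath='{.spec.host}'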

First we’ll need to upload the config to OpenShift:

oc apply -f https://raw.githubusercontent.com/prgcont/demo-color-api/master/configmap.yaml

Then you can edit the ConfigMap directly in the OpenShift console (Resources > Config Maps > demo-color-api, click Edit from the top-right menu) or by running:

oc edit cm demo-color-api
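If you are not sure which key holds the URL, dump the ConfigMap first and edit accordingly:

oc get cm demo-color-api -o yaml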

Finally, we will again deploy the pipeline:

oc process -f https://raw.githubusercontent.com/prgcont/demo-color-api/master/template.pipeline.yaml | oc apply -f -

You can see what's happening in the cluster by invoking:

oc get all

Alternatively, you can use OpenShift web console to examine deployed resources.

Tasks:

  • Access the demo-color-api application via the blue and green routes.

  • Determine which deployment (blue or green) is accessible via the master route (the snippet below may help).
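For the second task, listing the routes and inspecting where the master route points is one way to find out; the route name demo-color-api is an assumption:

oc get routes
# the service the master route currently points to reveals the active color
oc get route demo-color-api -o jsonpath='{.spec.to.name}'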

Exercise: Blue green deployment

Log in to the OpenShift console, go to Builds → Pipelines and start the demo-color-backend pipeline via Start Pipeline.

Once the build is finished, you should see your new deployment.

Tasks:

  • Do multiple demo-color-backend and demo-color-api deployment switches

  • Explain how the blue and green deployments of demo-color-backend changed demo-color-api

  • Try not to proceed with your deployment. What happens?

  • Try to patch the master route manually (an example command follows the task list)
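For the last task, a minimal sketch of a manual switch, assuming the master route and the services follow the demo-color-backend / demo-color-backend-green naming used above:

# point the (assumed) master route at the green service
oc patch route demo-color-backend -p '{"spec":{"to":{"name":"demo-color-backend-green"}}}'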

Exercise: Canary Deployment

Canary deployments serve a purpose when you want to test new features on a limited number of users. In this exercise we will enhance our blue/green deployment pipeline to use a canary deployment.

To do this, we will edit our backend BuildConfig:

oc edit bc/demo-color-backend-pipeline

and update the pipeline to look like this:

def appName=""
def project=""
def tag="blue"
def altTag="green"
def verbose="true"
node ('master') {
  stage('Initialize') {
    appName=sh(script:'echo $JOB_BASE_NAME | sed "s/[^-]*-\\(.*\\)-[^-]*/\\1/"', returnStdout: true).trim()
    project=env.PROJECT_NAME

    active=sh(script: "oc get route ${appName} -n ${project} -o jsonpath='{ .spec.to.name }' | sed 's/.*-\\([^-]*\\)/\\1/'", returnStdout: true).trim()
    if (active == tag) {
      tag = altTag
      altTag = active
    }
  }

  stage('Build') {
    openshiftBuild(buildConfig: appName, showBuildLogs: "true")
  }

  stage('Deploy') {
    openshiftTag(sourceStream: appName, sourceTag: 'latest', destinationStream: appName, destinationTag: tag)
    openshiftVerifyDeployment(deploymentConfig: "${appName}-${tag}")
  }

  stage('Canary') {
    sh "oc set -n ${project} route-backends ${appName} ${appName}-${tag}=20 ${appName}-${altTag}=80"
  }

  stage('Verify') {
    def activeRoute = sh(script: "oc get route ${appName}-${tag} -n ${project} -o jsonpath='{ .spec.host }'", returnStdout: true).trim()
    try {
       input message: "Test deployment: http://${activeRoute}. Approve?", id: "approval"
    } catch (error) {
        sh "oc set -n ${project} route-backends ${appName} ${appName}-${tag}=0 ${appName}-${altTag}=100"
        currentBuild.result = 'ABORTED'
        error('Aborted')
      }
  }

  stage('Promote') {
    sh "oc set -n ${project} route-backends ${appName} ${appName}-${tag}=100 ${appName}-${altTag}=0"
  }

}
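While the Canary stage is waiting for approval, you can check how the traffic is split between the two services; the route name demo-color-backend is an assumption:

# shows the route's primary and alternate backends together with their weights
oc describe route demo-color-backend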

Tasks:

  • Enhance the pipeline to contain multiple canary steps: first 20% of users, then 40%, then a full switch

  • Edit the route ratio manually via the command line and the web console (see the example after the task list)
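As a starting point for the second task, the ratio can be changed from the command line in the same way the pipeline does it; the route and service names are assumed to follow the blue/green pattern above:

oc set route-backends demo-color-backend demo-color-backend-blue=40 demo-color-backend-green=60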

Exercise: Load Balancing and Session Affinity

Session affinity can be very important in blue/green or canary deployment scenarios. The OpenShift router (we will be speaking about HAProxy, as it is the default option) can balance load based on the following strategies:

  • roundrobin: Each endpoint is used in turn, according to its weight. This is the smoothest and fairest algorithm when the server’s processing time remains equally distributed.

  • leastconn: The endpoint with the lowest number of connections receives the request. Round-robin is performed when multiple endpoints have the same lowest number of connections. Use this algorithm when very long sessions are expected, such as LDAP, SQL, TSE, or others. Not intended to be used with protocols that typically use short sessions such as HTTP.

  • source: The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This ensures that the same client IP address will always reach the same server as long as no server goes down or up. If the hash result changes due to the number of running servers changing, many clients will be directed to different servers. This algorithm is generally used with passthrough routes.

We will now change our router to distribute requests via the roundrobin scheme:

oc annotate route --overwrite demo-color-backend haproxy.router.openshift.io/balance=roundrobin

After that, start the pipeline and try to access the service periodically via curl to see the results:

curl http://${IP}/api/v1/color

You should see different output for different curl calls.
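${IP} stands for the route host. Assuming you are testing the backend route, you can fill it in and loop over a few requests like this:

IP=$(oc get route demo-color-backend -o jsonpath='{.spec.host}')   # route name is an assumption
for i in $(seq 5); do curl "http://${IP}/api/v1/color"; echo; done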

Then we can use curl to capture a cookie for a particular endpoint:

curl -c cookie http://${IP}/api/v1/color

And we can reuse it to reach the same endpoint every time:

curl -b cookie http://${IP}/api/v1/color

Tasks:

  • Change load balancing back to source IP and show that even without cookies you’ll get the same endpoint every time (see the command below).
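The annotation used above accepts source as a value as well, so the switch back is a one-liner:

oc annotate route --overwrite demo-color-backend haproxy.router.openshift.io/balance=source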
