Starting Up On OpenShift with Laravel and ReactJS www.realworld.io demo apps

Introduction

This is the code that goes with the presentation at https://docs.google.com/presentation/d/1kQhNGoVdoXhnsfwnsgmEBYMftVYR1Vx8QqdCqWSZXiI

These instructions describe how to set up what I demoed in that presentation. It is a realistic backend API and frontend SPA. Due to time constraints my presentation demo will start with a working setup and the section "Some 'real world' things to demo in the presentation". All the steps to set everything up are below.

The demo code we will deploy is part of the www.realworld.io project where different people write alternative interoperable frontends or backends. Below I have chosen ReactJS and Laravel but you could use any of the demo apps that use different languages and frameworks.

Prerequisite Instructions

Here we fork the demo code repos (so that we can set up GitHub webhooks to CI build the code) and create an OpenShift account to run it on.

Step 1 of 3: Sign in to GitHub and fork these two repos:

  1. https://github.com/simbo1905/react-redux-realworld-example-app
  2. https://github.com/simbo1905/laravel-realworld-example-app

Step 2 of 3: Sign up for a free "Starter" account at www.openshift.com

  1. Go to http://www.openshift.com
  2. Click the top right button "SIGN UP FOR FREE".
  3. Click the blue link "Sign up for OpenShift Online" below the sign in button.
  4. Ignore the username/password. Scroll down and click the GitHub logo under "or sign in with" (or another ID provider, YMMV).
  5. Fill in a valid email address and other mandatory "Account Information" and agree to the Ts&Cs.
  6. You will need to open the email they send and click on the link to confirm your email address.
  7. When you log in you should see two blue buttons, one for "Starter" and one for "Pro". You probably want the free "Starter".
  8. Click the robot check then "Confirm Subscription".
  9. You should get a screen saying that you are queued for provisioning.
  10. Refresh the page after a couple of minutes to see if you get the blue "Open Web Console" button.

Provisioning took less than three minutes when I ran through this. YMMV.

Step 3 of 3: Add the "oc" command-line tool to your path:

  1. Open the "Web Console", ignore the busy page, click on the question mark in a circle icon at the very top right and select "Command Line Tools".
  2. Click the blue "Download oc:" link for your operating system. For macOS it's a 42M tar.
  3. Extract the file and put it in the folder you want to run it from e.g., "~/openshift".
  4. Add the folder where you put it to your PATH in a terminal e.g. export PATH=$PATH:~/openshift
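
You can check that the tool is found on your PATH with:

# prints the client version if the binary is found
oc version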

Creating a unique project on the cluster

Here we log in on the command line and create a unique project using your initials.

Open a terminal where you have "oc" on the PATH. Open the "Web Console" in your browser, ignore the busy page, click on the question mark in a circle icon at the very top right and select "Command Line Tools". Then click on the "copy" button at the top of the first text box, which contains an "oc login" command. Paste it into the terminal and run it. If things are working it should say that "You are not a member of any projects".
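
The copied command will look something like the following (the server URL and token here are placeholders, not real values; use the exact command copied from your own console):

# example shape only: copy the real command from "Command Line Tools"
oc login https://api.starter-us-east-1.openshift.com --token=<your-session-token>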

Now comes the real work:

# openshift projects must be unique on a cluster. so lets try your initials as a name prefix
export YOUR_INITIALS=<some-three-char>

# here we hope that your initials are unique enough on the cluster to create a unique project name
export PROJECT=${YOUR_INITIALS}-staging

# if you don't have a unique name you will be told "already exists". try adding a digit to the end of the previous command and repeat
oc new-project $PROJECT

Build & Run The "RealWorld.io" ReactJS Frontend Demo

Here we deploy your clone of the ReactJS SPA demo that you set up in the prerequisites section above.

In the same shell as above cd to a folder where you can clone and edit the ReactJS code:

# set an env var pointing to your first forked repo (i.e. edit this to have your username):
export YOUR_REPO_JS=https://github.com/<your-username>/react-redux-realworld-example-app

# in a folder where you want to checkout the code
git clone $YOUR_REPO_JS realworld-react

# go into the checked out folder
pushd realworld-react

# create an openshift branch to deploy from
git checkout -b openshift

# push the branch to GitHub so that OpenShift can deploy from it. This may force a login. 
git push --set-upstream origin openshift

# create everything with
NAME=frontend ./create-openshift.sh

You can watch the build kick off, followed by a deploy, on the Web Console under "Monitoring". It took about 2 minutes 6 seconds for me. Under Monitoring you can see each container launch and expand the view to see its logs.
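You can also follow the build from the terminal rather than the console:

# stream the logs of the latest build for the frontend build config
oc logs -f bc/frontend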

Now let's set up a GitHub webhook to kick off a build when we push to the openshift branch. Copy down the URL output by:

oc describe bc frontend | grep "URL.*webhooks.*github" | awk '{print $2}'

That should be a very long URL. In GitHub go to Settings, select Webhooks, and add a webhook with the URL output above as the "Payload URL" and a "Content type" of "application/json". Now when you push to the openshift branch of your repo it will kick off a build which, if successful, will do a deploy. You can now release code by merging into the openshift branch.

Build & Run The "RealWorld.io" Laravel Backend Demo

Here we deploy your clone of the Laravel API demo that you set up in the prerequisites section above. This demo is more complex than the frontend demo as it needs a database and secret credentials to connect to it.

In the same shell as above cd to a folder where you can clone and edit the Laravel code:

# set an env var pointing to your second forked repo (i.e. edit this to have your username):
export YOUR_REPO_PHP=https://github.com/<your-username>/laravel-realworld-example-app.git

# in a folder where you want to checkout the code
git clone $YOUR_REPO_PHP realworld-laravel

# go into the checked out folder
pushd realworld-laravel

# create an openshift branch to deploy from
git checkout -b openshift

# push the branch to GitHub so that OpenShift can deploy from it. This may force a login. 
git push --set-upstream origin openshift

Before we create the backend application we need:

  1. A database.
  2. A Kubernetes secret that holds the database connection details.

I don't recommend using the default database templates that come with OpenShift except for experimentation, so I haven't scripted this step. For a real project I would suggest either an AWS RDS instance or a www.compose.com deployment in the same region as your OpenShift instance. For this walkthrough you can set up a database using the OpenShift Web Console. Use the "Add To Project" dropdown on the top right, "Browse Catalog", then filter by database and run the MySQL template. Set the "Memory Limit" of the template to "256Mi". Copy to the clipboard the connection details that the template shows before you hit "Close"; they look something like:

     Username: userIC0
     Password: xxxxx
Database Name: sampledb
Connection URL: mysql://mysql:3306/ <- IGNORE THIS! Use the .svc DNS entry described below!

When you close the template use the left hand menu to open "Applications" then "Services" and click on the "mysql" service. Under the details window it should show the "IP" as a private address range and a "Hostname" of something like "mysql.sjm-staging.svc". This is a private DNS entry within the cluster that resolves to the private IP. We can ignore the "Connection URL" we were shown by the database setup wizard and instead use the hostname format "mysql.$PROJECT.svc" for Laravel to connect to MySQL. Edit the .env.example file putting in the database "Username", "Password", "Database Name" and the "Hostname" from the above steps, then:

# create a secret with the name backend using the contents of .env.example
NAME=backend ./create-env-secret.sh .env.example
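
For reference, the database lines of the edited .env.example might look something like this (these are Laravel's standard DB_* variable names; the exact keys in the repo's file may differ, and the values below are the sample ones from above):

# sample values; replace with the output of your own MySQL template run
DB_CONNECTION=mysql
DB_HOST=mysql.<your-project>.svc
DB_PORT=3306
DB_DATABASE=sampledb
DB_USERNAME=userIC0
DB_PASSWORD=xxxxx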

With the database credentials loaded into the backend secret we can now create our application using the same name:

# create our laravel build and deploy that will mount the secret as env vars to the pods
NAME=backend ./create-openshift.sh

For bonus marks set up a GitHub webhook with a payload URL matching the output of:

oc describe bc backend | grep "URL.*webhooks.*github" | awk '{print $2}'

We need a database schema for the code to run. With Laravel you do this by running php artisan migrate. We can run that as a fire-and-forget command in one of the PHP containers with:

oc exec $(oc get pod --selector='name=backend' | awk 'NR==2{print $1}') php artisan migrate

Our PHP backend should now come up at the URL output by:

oc describe route backend | grep Host | awk '{print $3}'

Finally we need to actually tell the frontend ReactJS application to use the Laravel backend API that we just deployed. We also need to tell the backend Laravel API to allow CORS requests from the frontend. We can use oc describe route to list the URL of each app and use the output to set an environment variable on the other:

oc set env dc frontend API_ROOT=http://$(oc describe route backend | grep Host | awk '{print $3}')/api
oc set env dc backend CORS_ALLOWED_ORIGINS=$(oc describe route frontend | grep Host | awk '{print $3}')
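
As a quick sanity check you can hit the backend's /api/tags endpoint (the same path the readiness probe uses later) and confirm it returns JSON:

# reuse the route lookup to curl the tags endpoint of the backend
curl http://$(oc describe route backend | grep Host | awk '{print $3}')/api/tags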

And you should be done!

Some "real world" things to demo in the presentation

Scale processes

The ReactJS app is a memory hog, so with the free tier we only just have enough memory to run one MySQL pod, one backend pod and one ReactJS pod. On the Overview page we can scale down the ReactJS app to zero pods then scale up the backend to two using the arrows on the right hand side of the blue circles.

Don't forget to reset them back to one pod each to perform the following steps!
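
The same scaling can be done from the command line if you prefer, using the deployment config names created above:

# scale the frontend down to zero pods and the backend up to two
oc scale dc/frontend --replicas=0
oc scale dc/backend --replicas=2

# and reset both back to one pod each afterwards
oc scale dc/frontend --replicas=1
oc scale dc/backend --replicas=1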

Open a remote shell into a pod running on the cluster

We can see a list of running backend pods with:

oc get pod --selector='name=backend'

Which outputs something like:

  $ oc get pod --selector='name=backend'
  NAME               READY     STATUS    RESTARTS   AGE
  backend-48-vqjdp   1/1       Running   0          1d
  backend-48-zkhdf   1/1       Running   0          1d

So we can use awk to print the first column of the second row from a subshell and pass that to oc rsh to log in to the pod:

oc rsh  $( oc get pod --selector='name=backend' | awk 'NR==2{print $1}')

That can be helpful to run php artisan tinker in the pod to get a PHP REPL with the application loaded. For example try the following to count the number of users and to load the attributes of a user from the database:

App\User::count();
App\User::where('username', 'simbo1905')->first();

Run a single command in a pod running on the cluster

We can use the same lookup of a backend pod to run the command php artisan migrate as a fire-and-forget oc exec:

oc exec $( oc get pod --selector='name=backend' | awk 'NR==2{print $1}') php artisan migrate

Forward a local port to the private database service

You need to run a MySQL tool connecting to localhost to see this working:

oc port-forward $(oc get pods --selector name=mysql | awk 'NR==2{print $1}') 3306:3306
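
With the port-forward running in one terminal you can connect from another using the credentials from the database setup (substitute your own username, password and database name):

# connect through the forwarded port; you will be prompted for the password
mysql -h 127.0.0.1 -P 3306 -u userIC0 -p sampledb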

Simulate a deadlock on the backend to see it get restarted

If we log in to the web console and look at the "Monitoring" section there is a stream of events on the right sorted newest first. We can also run a background command to see the event stream:

oc get events -w

Now let's stop all the processes in the backend pod to simulate a global deadlock. In the events you should see the health checks fail, then the pod killed and replaced automatically in less than a minute. If we refresh a browser looking at the frontend we should see it time out on data during the simulated deadlock then recover when Kubernetes replaces the unhealthy process:

oc rsh  $( oc get pod --selector='name=backend' | awk 'NR==2{print $1}')
ps -efw | grep httpd |awk '{print $2}' | xargs kill -STOP

When I tried this it took less than a minute for the site to recover. With a real business you should scale to multiple instances so that a problem with one process locking up and getting restarted will only affect a minority of users.

Simulate a misconfigured deployment of the backend app

The "readiness check" will be polled to ensure that a new pod is ready before it swapped into the load balanced pool. We can set this to poll a url that checks that the application can reach the database and any other configured resource. If we are using a rolling deployment, and the ready check doesn't see the new process working, the deployment will be cancelled. This means that we won't break our site!

As memory is limited the template uses a "Recreate" deployment policy that stops the current pod before starting the new one so as not to go over memory quota. So to see the readiness check working first:

  1. Scale down the frontend to zero processes to free up some memory using the triangle arrows to the right of the blue circle on the Overview page.
  2. Go to "Applications -> Deployments -> Backend" and use the right hand Actions button to "Edit YAML" and change the type: Recreate strategy to be type: Rolling. Then click the "Deploy" button and go back to the Overview to see a successful rolling deployment.
  3. Now break the readiness check to simulate a misconfigured rolling deployment. Go to "Applications -> Deployments -> Backend" and use the right hand Actions button to "Edit Health Checks" and set the "Readiness Probe -> Path" to be "/api/oops". This should cause a rolling deployment to happen, but if you look at the events under "Monitoring" it will show that the new pod never becomes ready and the deployment will eventually be aborted.

Don't forget to change the ready check back to /api/tags, change the deployment back to Recreate and scale the frontend back to one!
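
If you prefer the command line, the same probe change and its reversal can be made with oc set probe (a sketch; the port 8080 here is an assumption about the PHP image, so check your deployment config):

# break the readiness probe to simulate the misconfiguration (8080 is an assumed port)
oc set probe dc/backend --readiness --get-url=http://:8080/api/oops

# and restore it afterwards
oc set probe dc/backend --readiness --get-url=http://:8080/api/tags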

Look at the git log commit comment that a running pod was built from

The s2i build that makes the runnable application image bakes the git commit SHA into an environment variable OPENSHIFT_BUILD_COMMIT. So we can look up a pod, then run env in that pod to grep out the OPENSHIFT_BUILD_COMMIT, then locally run git log and grep out what the commit comment was with:

git log | grep -A 4 $(oc exec $(oc get pods --selector name=backend | awk 'NR==2{print $1}') env | grep -i OPENSHIFT_BUILD_COMMIT | awk 'BEGIN {FS = "=" }{print $2}')

This command is helpful when you are scripting promotions between staging and live projects to confirm on the command line after promotion that the expected git commit of code is running in the live environment.

Note that if you are locally on the wrong branch you won't find the SHA in the local git log. This is helpful as it is telling you that you don't have the correct git branch checked out locally!

Clone the staging environment to create a live one

NOTE The OpenShift Starter free tier probably doesn't have enough memory to run a full copy of the environment. You will need a Pro subscription.

We can dump out the configuration objects from our project and load them into a new one. This allows us to create a "live" project as a copy of objects in our "staging" project. Rather than copy the build objects that create our application container images we can move the images between projects. To clone the project created at the top of this gist we would need to create a new database and load the fresh database connection details into a fresh secret. That is scriptable, but for brevity in this example I will only clone the ReactJS app:

# openshift projects must be unique on a cluster. so lets try your initials as a name prefix
export YOUR_INITIALS=<some-three-char>

# capture the name of your current staging project we will copy
export STAGING_PROJECT=$(oc project --short)

# here we hope that your initials are unique enough on the cluster to create a unique live project name
export LIVE_PROJECT=${YOUR_INITIALS}-live

# if you don't have a unique name you will be told "already exists". try adding a digit to the end of the previous command and repeat
oc new-project $LIVE_PROJECT

# we want the live project service accounts to be in the image-puller role for the staging project so they can pull built and tested container images
oc policy add-role-to-group system:image-puller system:serviceaccounts:$LIVE_PROJECT -n $STAGING_PROJECT

# now the actual copy. here we only export the frontend objects and we replace references from the old project to the new
oc export is/frontend svc/frontend dc/frontend -n $STAGING_PROJECT -o yaml | \
  sed "s/$STAGING_PROJECT/$LIVE_PROJECT/g" | oc create -n $LIVE_PROJECT -f - 

# we need a fresh public access route for the copied objects
oc expose svc/frontend -n $LIVE_PROJECT

# finally the copied object is configured to use an API running in the same project that hasn't been copied. so we can reconfigure the frontend to use a public API
oc set env dc/frontend -n $LIVE_PROJECT API_ROOT=https://conduit.productionready.io/api

The application won't deploy as we don't yet have a built and tested application image promoted from the staging environment. Let's do that next!

Promote images between staging and live

If you just ran the previous section you should have two environment variables defined $STAGING_PROJECT and $LIVE_PROJECT. With those we can tag images in the staging project as the latest image in the live project. That will trigger a deployment in the live project:

# create a tag for the image we are promoting. you should create your own meaningful tag
TAG=$(date +"%Y-%m-%d_%H-%M-%S") 
# tag the latest built frontend image in staging
oc tag $STAGING_PROJECT/frontend:latest $STAGING_PROJECT/frontend:$TAG
# tag that image as the "latest" in live (which is a special tag that is watched and deployed)
oc tag $STAGING_PROJECT/frontend:$TAG $LIVE_PROJECT/frontend:latest
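
You can confirm the promotion landed and watch the triggered deployment with:

# confirm the image stream tag arrived in live
oc get is/frontend -n $LIVE_PROJECT

# watch the rollout in the live project
oc rollout status dc/frontend -n $LIVE_PROJECT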

Rolling back

To roll back you need to have had more than one deployment in live. For demo purposes it is easier to roll back between multiple deployments in staging rather than promote several of them up to live just to roll back.

Note The documentation suggests that you can simply use oc rollback backend to roll back. The catch is that a rollback creates a new deployment. This means that if the last release deployed twice due to a transient failed deployment, running that command will just redeploy the same code. So the instructions below roll back to an explicit version number, where you can see the age of all the available deployments using oc describe dc frontend to figure out exactly which one you want to roll back to.

oc project $STAGING_PROJECT

First let's check the current commit SHA in a running pod so that we can later see the rollback:

git log | grep -A 4 $(oc exec $(oc get pods --selector name=frontend | awk 'NR==2{print $1}') env | grep -i OPENSHIFT_BUILD_COMMIT | awk 'BEGIN {FS = "=" }{print $2}')

Now let's list the deployments we can roll back to:

oc describe dc frontend | grep Deployment

That should output more than one deployment so that we can roll back to the previous one. Make a note of the oldest (smallest) deployment number and roll back to it. Here I assume it's #1:

oc rollback frontend --to-version=1

You can watch it deploy then check the commit comment again to see that the running pod is running older code:

git log | grep -A 4 $(oc exec $(oc get pods --selector name=frontend | awk 'NR==2{print $1}') env | grep -i OPENSHIFT_BUILD_COMMIT | awk 'BEGIN {FS = "=" }{print $2}')

Rollbacks turn off the deployment triggers so that you don't accidentally upgrade to a broken version. Once you have a working version to push out you can reset the deployment triggers with:

oc set triggers dc/frontend --auto

A/B testing

Let's set up a second experimental frontend to A/B test in our ReactJS working folder:

git checkout -b experimental
# make a trivial change to change the commit hash so that we can see it change during A/B testing
echo "" >> README.md
git commit -am "trivial change"
git push

Now deploy it as a second frontend:

NAME=frontend2 ./create-openshift.sh

Now, following the video at https://blog.openshift.com/running-ab-tests-openshift-demo/, manually create a new route frontab which will have some generated name such as frontab-sjm-staging1.b9ad.pro-us-east-1.openshiftapps.com. Then turn off sticky load balancing so that it's easy to see traffic from your laptop flipping between services:

oc annotate route frontab  haproxy.router.openshift.io/balance=roundrobin
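
You can split the traffic by editing the route in the console as shown in the video, or from the command line (a sketch assuming the two services are named frontend and frontend2):

# send half the traffic on the frontab route to each frontend service
oc set route-backends frontab frontend=50 frontend2=50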

Edit the created route as shown in the video (or use the command above) to split the traffic 50/50, then run this curl against the route you created, hitting the '/commit' endpoint, which will show you the commit hash. You should see it output different hashes:

for i in {1..200}; do curl http://frontab-sjm-staging1.b9ad.pro-us-east-1.openshiftapps.com/commit; echo "" ; done