# Getting Started at TES

At TES our approach to building technology is focused on enabling individuals and small teams to deliver small, iterative change to our products quickly, without fear.

Like any simple statement, this one hides a lot of complexity, and the twists and turns that have been taken trying to get us here. We're not there yet, but with your help we will get closer day by day.

First things first, we need to get you to the point that you can commit code and deploy to production.

## Pre-requisites

Before you get started, it is assumed that you have managed to find yourself a Mac or Linux laptop and can get to a terminal.

## NVM and Node

We use nvm to manage node versions, as without it your laptop will quickly become a mess of strangely installed versions of node in even stranger locations.

Simply visit: nvm and follow the installation instructions. Don't forget to add the environment configuration to your shell configuration (typically ~/.bash_profile) to ensure that it persists across restarts.
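If you're not sure what that configuration looks like, the nvm installer typically appends a couple of lines like these to your profile (the exact paths may differ depending on how you installed it):

```
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"  # loads nvm into each new shell
```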

Once installed, install node via:

```
nvm install v0.10.36
nvm alias default v0.10.36
```
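Open a new terminal and confirm the default version is active:

```
node --version   # should print v0.10.36
```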

## TES NPM Registry

We have our own private registry at TES, which allows us to publish modules that can't be made public. It is a proxy for the public registry, so you can still install any public module after this has been configured.

```
npm config set registry http://npm.tescloud.com
```

If you ever travel and/or want to work off the VPN, you may want to install your own local proxy (see Sinopia) on your laptop and have it proxy to the registry above.
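As a rough sketch (assuming Sinopia's defaults, including its standard port 4873; you would then add http://npm.tescloud.com as an uplink in its config.yaml):

```
npm install -g sinopia                          # the local caching proxy
sinopia &                                       # starts on http://localhost:4873 by default
npm config set registry http://localhost:4873   # point npm at the local proxy
```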

## Boot2Docker / Docker

Assuming you are on a Mac, you need to get Docker installed. Docker is a very lightweight container-based virtualisation tool that we use both in development and as the deployment method to our 'platform servers' in the live environments.

Macs can't natively run Docker, so we use the Boot2Docker project (a lightweight wrapper around VirtualBox) to provide this capability for us on our Macs. You can get it here: boot2docker

If you are using Linux, just install Docker via your favourite package manager, e.g.:

```
apt-get install docker
```
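Note that the package name varies by distribution; on Debian/Ubuntu, for example, it is typically:

```
sudo apt-get install docker.io
```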

### Initialise Boot2docker

If you are on a Mac, you need to initialise boot2docker before it will work:

```
boot2docker init
```

### Add port forwarding rules to the boot2docker VM

Finally, you should add the following generic port forwarding rules. What these do is allow any services running within the boot2docker virtual machine to be accessible on localhost, effectively making the experience of developing on a Mac within boot2docker equivalent to running Docker directly on Linux (as we do in our live environments):

```
boot2docker suspend
VBoxManage modifyvm boot2docker-vm --natpf1 "docker,tcp,127.0.0.1,2375,,2375"
VBoxManage modifyvm boot2docker-vm --natpf1 "ssh,tcp,127.0.0.1,2022,,22"
VBoxManage modifyvm boot2docker-vm --natpf1 "z-nginx,tcp,127.0.0.1,8080,,8080"
VBoxManage modifyvm boot2docker-vm --natpf1 "z-nginx-service,tcp,127.0.0.1,8081,,8081"
VBoxManage modifyvm boot2docker-vm --natpf1 "z-mongo,tcp,127.0.0.1,27017,,27017"
VBoxManage modifyvm boot2docker-vm --natpf1 "z-mongo-admin,tcp,127.0.0.1,28017,,28017"
VBoxManage modifyvm boot2docker-vm --natpf1 "z-rabbit,tcp,127.0.0.1,5672,,5672"
VBoxManage modifyvm boot2docker-vm --natpf1 "z-rabbit-admin,tcp,127.0.0.1,15672,,15672"
VBoxManage modifyvm boot2docker-vm --natpf1 "z-redis,tcp,127.0.0.1,6379,,6379"
VBoxManage modifyvm boot2docker-vm --natpf1 "z-mysql,tcp,127.0.0.1,3306,,3306"
boot2docker resume
```
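Once the rules are in place, any service listening inside the VM is reachable on localhost. As a quick sanity check (assuming, say, the z-mongo container is already running inside the VM):

```
nc -z localhost 27017 && echo "mongo port reachable"
```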

You can verify that this is all working correctly (at any time):

```
docker ps
```

If the above step fails, ensure that you have exported the DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY settings as prompted by boot2docker. To see the prompt again type:

```
boot2docker stop
boot2docker start
```

Ensure these are added to your ~/.bash_profile in the same way as nvm, to ensure they persist across terminal sessions.
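For reference, the exported settings usually look something like the lines below; the IP address and certificate path are examples only and will differ on your machine (use the values boot2docker prints, or run `eval "$(boot2docker shellinit)"`):

```
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/you/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
```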

## Introducing Bosco and His Friends

Bosco will be a close and constant companion at TES. Named after BA Baracus from the A Team, he ain't no fool and will help you with your projects and workflow at TES.

His two best friends are gulp (who helps him with building projects) and pm2 (who helps him with running node projects). You need to get them installed.

```
npm install bosco gulp pm2 -g
```

## Setting Up Bosco

Bosco works by connecting to your github account, and synchronising the projects within teams in github with workspaces on your laptop.

To get started:

```
bosco setup
```

It will ask initially for:

| Configuration | Description |
| --- | --- |
| Github name | Your github username |
| Github Auth Key | A key that gives read access to the repositories in the organization (you can set this up here: https://github.com/blog/1509-personal-api-tokens). |

These are then saved in a configuration file locally on disk (by default ~/.bosco/bosco.json), so all subsequent commands use them.
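If you ever need to check or change those settings later, the file is just JSON on disk:

```
cat ~/.bosco/bosco.json
```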

After you have added these details it will connect to Github, retrieve the list of teams that you belong to and ask you to set up your first workspace.

Select a team, then provide a path to a folder (defaults to . - the current directory). This will link that team to this folder, so that when you run commands within it, Bosco will know what team you are in.

If you aren't in the correct workspace folder, just press ctrl-c to quit, create a new folder, go into that folder and run:

```
bosco team setup
```

This will allow you to repeat the above process.
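For example (the folder name here is purely illustrative):

```
mkdir -p ~/workspace/resources-team   # hypothetical workspace folder
cd ~/workspace/resources-team
bosco team setup
```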

## Getting your team moving

To get started with your team, run the following command from the root of the workspace:

```
bosco morning
```

tl;dr This command is your 'I want everything to be completely in sync with my team mates' command.

It will actually run 4 commands sequentially:

```
bosco clone
bosco pull
bosco install
bosco activity
```

In order, this:

  1. Clones any repositories that are in the Github team but not in your local workspace.
  2. Pulls any changes in repositories that were in your workspace but may have changed remotely, including pulling any Docker images that may have been updated.
  3. Runs npm install in each project to ensure that dependencies are all up to date.
  4. Gives you an overview of what happened with any of your projects since you last ran bosco morning.

## Running your application

Now that your local workspace is in sync with your team, you can run up your application.

To run everything:

```
bosco run
```

This will use Docker to run any dependent infrastructure pieces (e.g. mongodb or rabbitmq), and pm2 to run up the node services in the background.

To stop everything:

```
bosco stop
```

To see what is running:

```
bosco ps
```

To tail the logs of a specific service:

```
pm2 logs service-name
```

To stop just one service:

```
bosco stop -r service-name
```

To restart just one service:

```
bosco restart -r service-name
```

To watch just one service (reload on change):

```
bosco start -w service-name
```
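Putting it together, a typical start-of-day flow (just a sketch using the commands above) looks like:

```
bosco morning   # sync code, Docker images and npm modules with the team
bosco run       # start infrastructure containers and node services
bosco ps        # check everything came up
# ...do some work...
bosco stop      # shut it all down again
```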

## Accessing your application

Apps within TES are mostly Node apps that run concurrently on different ports. This does mean that we have a fun game of finding a new port every time we create a new app, and remembering the port if we want to access a service directly. You'll love it.

To find out the port of your application, look in the default configuration.

You will find:

    "server": {
        "host": "0.0.0.0",
        "port": 5461,
        "workers": 2
    }
 } ```

This means after it is running you can check its status:

[http://local.tescloud.com:5461/status](http://local.tescloud.com:5461/status)
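Or from a terminal, assuming the service is running and using the example port above:

```
curl http://local.tescloud.com:5461/status
```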

By default, we use nginx to route specific url paths to specific services.  This means that the project ```infra-nginx-gateway``` will be included in almost all github teams.

Nginx is made available on port ```8080```, so you can access it via:

[http://local.tescloud.com:8080/teaching-resources-hub](http://local.tescloud.com:8080/teaching-resources-hub)

## Page Composer

In addition to the common use of Nginx to manage the routing of specific urls to services, we use a project called Page Composer [service-page-composer](https://github.com/tes/service-page-composer) to do ... unsurprisingly ... page composition.

This means that almost all teams will have the project above added to their team in github.  By default it runs on port 5000, so like nginx, you can access page composer directly via:

[http://local.tescloud.com:5000/teaching-resources-hub](http://local.tescloud.com:5000/teaching-resources-hub)

### Composition

As soon as you start breaking large applications into a number of smaller services and applications, you quickly encounter the problem that you need to join them back together to create any meaningful sort of product for a Teacher.

I wrote a blog post on this that explains it reasonably simply, along with a follow-on video of the talk I did at both Full Stack and Nodeconf:

[https://medium.com/@clifcunn/nodeconf-eu-29dd3ed500ec](https://medium.com/@clifcunn/nodeconf-eu-29dd3ed500ec)

At its heart, page composer is simply a caching reverse proxy that will parse the HTML that passes through it for specific directives, and fetch content as per those directives from other services.

It has two types of directive:




## Static Assets

Assuming you have managed to get your application running, and can see the page via nginx, you will probably notice that the page doesn't look right.

INSERT SCREENSHOT HERE

The reason for this is that we need to run an additional Bosco command to get the static assets served up locally.

```
bosco cdn
```


This command will run across all of the services that you have in your team, look for all those with a ```bosco-service.json``` and serve up all of the static assets for you as if you were running Cloudfront locally.

If you wait for it to complete, and then refresh the page above for your application, you will see that it probably now has all the style it should have in the real world.

For example:

{ "service": { "name": "resource-api", "dependsOn": [ "infra-rabbitmq" ] }, "tags": [ "upload", "dashboard", "summary" ], "build": { "command": "node_modules/.bin/browserify src/browser-client.js -o dist/browser-client.js", "watch": { "command": "node_modules/.bin/watchify src/browser-client.js -o dist/browser-client.js --debug -v", "ready": "written" } }, "assets": { "basePath": "dist", "js": { "resource-api-client": [ "browser-client.js" ] } } }


This is the configuration file for a resource service.  It contains all the usual Bosco configuration, plus two new sections.

### bosco-service.json: Build

This section appears if your project has a build step for front end assets (e.g. gulp, grunt or browserify).  Bosco will run this command before serving up the assets defined in the next section.

The watch part of this configuration is what is called if you elect to type:

```
bosco cdn -w resource-api
```


This will invoke the watch command, instead of the build command, for any of the services that match the -w parameter, and so enable live reload of any changes.

The ```ready``` parameter is a string of text that will appear at the very end of the watch command's output, which Bosco looks for to know that the command has finished rebuilding the assets.

### bosco-service.json: Assets

The assets section is where you define the bundles of javascript or css that will be served by Bosco in cdn mode, and later pushed up to S3 so that they can be served on the live site.

The ```basePath``` is the folder within which all of the subsequent paths are contained (it avoids duplication).

After this comes a set of keys, one for each type of asset (e.g. js or css). Each contains a key with the name of the bundle you want to create, whose value is an array of file names that will be grouped together into that bundle.

Javascript files are minified via Uglify2; CSS is concatenated and CleanCSS applied.
