@PaulStovell
Last active October 7, 2017 16:59
Octopus + Docker in a nutshell

Currently, Octopus deploys NuGet packages to destination machines, and handles all the orchestration around the deployment.

From an application packaging point of view, a NuGet package is similar to a Docker container image:

  • To build a NuGet package, we take your code, compile it, bundle the results into a package, and give it a version stamp.
  • To build a Docker image, we take a Dockerfile and run docker build, which creates an image with a version stamp.

From a deployment orchestration point of view, the two are also very similar:

  • We want to build a NuGet package once, then deploy it many times (test, staging, prod, across many machines). The NuGet package contains everything we need to run the app (except the OS/runtimes) so we can have confidence that what we test is what we put in prod. After testing in test, we wouldn't want to recompile our code from scratch to deploy to prod.
  • We want to build a Docker image once, then deploy it many times (test, staging, prod, across many hosts). The image contains everything to run the package, including the OS, so we are guaranteed that what we tested is what is going to prod. After testing in test, we wouldn't want to build a new image to deploy to prod.

So Docker support in Octopus would mean:

  1. In addition to "Deploy a NuGet package" steps, we have "Deploy a Docker image" steps
  2. In addition to NuGet feeds that provide packages, we have Docker registries
  3. When you create a release, as well as choosing the version of NuGet packages, you can choose the version of Docker images
  4. As you promote the release through test/staging/prod, as well as using the same NuGet packages, you'll use the same Docker images
  5. Docker hosts become machines that you manage in the Octopus environments tab

Some things we need to figure out:

  1. Image distribution. Do we expect all Docker hosts to pull images directly from the registry, or will Octopus pull images and transfer them directly to the hosts?
  2. Configuration. Can we assume all configuration is either built into the image or provided externally (when docker run is called), or will we need to modify images slightly?
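To make the second question concrete, the two external-configuration options might look like the following sketch. This is not runnable without a Docker daemon, and the image name `myapp`, the tag, and the `API_HOST` variable are all hypothetical placeholders, not anything from Octopus itself:

```shell
# Option A: configuration provided externally at container start.
# A deployment tool would substitute the value per environment before running this.
docker run -d -e API_HOST="http://test-api.internal" myapp:1.0

# Option B: configuration baked into a thin environment-specific layer:
# build a derived image that adds only the config file, then run that image.
docker build -t myapp:1.0-test -f Dockerfile.config .
docker run -d myapp:1.0-test
```

Option A keeps the image identical across environments; Option B produces one image per environment, which weakens the "build once, deploy many times" guarantee described above.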
@michaellandi

Also, just a follow-up to your post:

I'd definitely prefer to see configuration done in Octopus. Maybe this plays into replacing variables inside a docker-compose configuration file for that environment. One of the strengths of Octopus, IMO, is its configuration management. If our Docker configurations are stored in compose files, they have to be stored and managed somewhere else. Having the configuration for both standalone applications and containers in one place would be an awesome feature!
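A minimal sketch of that variable-replacement idea, using sed as a stand-in for whatever mechanism Octopus would actually use. The template file name, the `#{ReleaseTag}`/`#{HostPort}` placeholders, and the values are hypothetical, chosen to echo Octopus's `#{Variable}` syntax:

```shell
# Write a compose template with #{Variable} placeholders (illustration only).
printf 'services:\n    onboarding-frontend:\n        image: onboarding:#{ReleaseTag}\n        ports:\n            - "#{HostPort}:80"\n' > docker-compose.template.yml

# Substitute environment-specific values, as Octopus variable replacement would.
sed -e 's/#{ReleaseTag}/prod-1.0/' -e 's/#{HostPort}/8000/' \
    docker-compose.template.yml > docker-compose.yml

cat docker-compose.yml
```

The same template could then be rendered once per environment, with only the variable set changing between test, staging, and prod.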

@seertenedos

To add to this, we are moving to the cloud and looking to use Docker with Mono and CoreCLR. We are a big Octopus user, so we really don't want to move away from it if we can help it.

@bchenSyd

bchenSyd commented Oct 21, 2016

I have started doing CI/CD using Bamboo and Octopus recently and have gained some knowledge of how things work. My understanding is that an application is always comprised of three elements: binaries, configuration, and data. So when we talk about CI/CD, it's always about how we build and deploy those three elements. I will talk through them in a Bamboo/Octopus setting under a Docker context.

  • Binaries are environment agnostic and are handled by Bamboo/Jenkins. Yes, we may create different binaries for different branches, but that has nothing to do with environments; we do it because we want to deliver different features. Under a traditional setup, the build result, or artifact, is normally uploaded to the Octopus repository as a NuGet package, zip file, or tarball.
  • Configuration is a purely environmental concept and is handled by Octopus. Octopus should replace the default settings with settings specific to an environment. Once configuration customisation is done, it pushes the updated configuration along with the binaries to the target environment, and its job is done.
  • Data is the persistence layer and shouldn't be re-created during the CI/CD process. However, the data schema may change, and it's also Octopus's job to update the target server's schema.

Now docker.

As this article says, a Docker container image is the equivalent of a NuGet package (it's actually a tarball), so we should treat it the same way as a NuGet package. When Bamboo finishes building, we should push the artifacts somewhere. Since we are using Docker, we should call docker push registry/image-name:tag at the end of the build job. I don't think Octopus provides a Docker registry service right now, so we either push to a public Docker registry (Docker Hub) or to a private one. Also note that the container image created here is environment agnostic.
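The tail of such a build job might look like the following sketch. It is not runnable without a Docker daemon, and the image name, registry host (borrowed from the example below), and `BUILD_NUMBER` variable are placeholders rather than a real pipeline:

```shell
# At the end of the Bamboo/Jenkins build job: build the image once,
# tag it with the build version, and push it to the registry.
docker build -t onboarding:1.0.${BUILD_NUMBER} .
docker tag onboarding:1.0.${BUILD_NUMBER} syd-linux-01:443/onboarding:1.0.${BUILD_NUMBER}
docker push syd-linux-01:443/onboarding:1.0.${BUILD_NUMBER}
```

From that point on, every environment deploys the same pushed tag, which is what keeps the image environment agnostic.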

When we start a deployment, we should update the Docker container with environment-specific settings, the same as we do for NuGet packages. Unfortunately, due to Docker's design, we can't mutate a container image's contents; we can only build another layer on top of the existing image and override files in it (thanks to the union file system), which is easy to do via a docker-compose.yml.

Here I'm using docker-compose.yml to build another layer, containing only settings, on top of the existing container image.

docker-compose.yml

```yaml
version: '2'
services:
    onboarding-frontend:
        container_name: onboarding-prod-1.0
        labels:
            com.bambora.version: "prod-1.0"
            com.bambora.releaseNote: "onboarding production 1.0"
        build: .
        image: onboarding:prod-1.0
        ports:
            - "8000:80"
```

Dockerfile

```dockerfile
# syd-linux-01:443/onboarding is pushed by Bamboo during build
FROM syd-linux-01:443/onboarding
# now we need to replace the configuration with an environment-specific one
RUN rm -rf /public/config_env.json
COPY ./config_env.json /public/
# keep the bootstrap command the same
CMD ["nginx", "-g", "daemon off;"]
```


config_env.json

```json
{
  "API_HOST": "http://my-environment-specific-api-host-addr"
}
```

I can then simply ship these three files to a server and run:

```shell
docker-compose up
```

All done! (I have verified the above and it works on my local machine.)
