@PaulStovell
Last active October 7, 2017 16:59
Octopus + Docker in a nutshell

Currently, Octopus deploys NuGet packages to destination machines, and handles all the orchestration around the deployment.

From an application packaging point of view, a NuGet package is similar to a Docker container image:

  • To build a NuGet package, we take your code, compile it, bundle the results into a package, and give it a version stamp.
  • To build a Docker image, we take a Dockerfile and run docker build, which creates an image with a version stamp.
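The parallel between the two build steps can be sketched on the command line (package and image names here are made up for illustration):

```shell
# Build a NuGet package from a .nuspec and stamp it with a version.
nuget pack MyApp.nuspec -Version 1.2.3    # -> MyApp.1.2.3.nupkg

# Build a Docker image from a Dockerfile and stamp it with a version tag.
docker build -t myapp:1.2.3 .             # -> image myapp:1.2.3
```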

From a deployment orchestration point of view, the two are also very similar:

  • We want to build a NuGet package once, then deploy it many times (test, staging, prod, across many machines). The NuGet package contains everything we need to run the app (except the OS/runtimes) so we can have confidence that what we test is what we put in prod. After testing in test, we wouldn't want to recompile our code from scratch to deploy to prod.
  • We want to build a Docker image once, then deploy it many times (test, staging, prod, across many hosts). The image contains everything needed to run the app, including the OS, so we are guaranteed that what we tested is what goes to prod. After testing in test, we wouldn't want to build a new image to deploy to prod.
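In Docker terms, "build once, deploy many times" looks roughly like this (registry address and image name are illustrative):

```shell
# Build and publish the image once, at CI time.
docker build -t registry.example.com/myapp:1.2.3 .
docker push registry.example.com/myapp:1.2.3

# Each environment then runs the exact same image; only runtime settings differ.
docker run -d -e ENVIRONMENT=test registry.example.com/myapp:1.2.3
docker run -d -e ENVIRONMENT=prod registry.example.com/myapp:1.2.3
```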

So Docker support in Octopus would mean:

  1. In addition to "Deploy a NuGet package" steps, we have "Deploy a Docker image" steps
  2. In addition to NuGet feeds that provide packages, we have Docker registries
  3. When you create a release, as well as choosing the version of NuGet packages, you can choose the version of Docker images
  4. As you promote the release through test/staging/prod, as well as using the same NuGet packages, you'll use the same Docker images
  5. Docker hosts become machines that you manage in the Octopus environments tab

Some things we need to figure out:

  1. Image distribution. Do we expect all Docker hosts to pull images directly from the registry, or will Octopus pull images and transfer them directly to the hosts?
  2. Configuration. Can we assume all configuration is built into the image or provided externally (when docker run is called), or will we need to modify images slightly?
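For question 1, the two distribution options map to existing Docker commands: either each host pulls directly from the registry, or a central server exports the image and ships a tarball to each host (hostnames are illustrative):

```shell
# Option A: each Docker host pulls the image straight from the registry.
docker pull registry.example.com/myapp:1.2.3

# Option B: a central server (e.g. Octopus) pulls once, saves the image as a
# tarball, transfers it, and each host loads it without touching the registry.
docker save registry.example.com/myapp:1.2.3 -o myapp-1.2.3.tar
scp myapp-1.2.3.tar host1:/tmp/
ssh host1 docker load -i /tmp/myapp-1.2.3.tar
```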
@serbrech

It's clear how this aligns with NuGet, and that shows how easy it will be to use Octopus and do business as usual, or as before, with NuGet.
However, I am a bit skeptical about the need for something like Octopus once you enter the container world.
When we deploy NuGet packages, we need to unzip them, put them in a folder, archive them, and so on.
But all of this goes away with containers, because the container host manages it. When talking about clusters, Mesos or Swarm manages what runs where. Tagging of hosts, roles, and host affinities are also things that cluster management software (Swarm or Mesos) handles.
How is Octopus going to integrate with this? I think this is important to figure out.
I don't think it is enough to map container <--> NuGet. If it stops there, the user is missing out on some of the real value of containers.

@jstangroome

I like your general approach.

Typically per-environment configuration is omitted from Docker images and instead provided at run-time through one of, or a combination of, environment variables, mapped volumes, and linked containers. I think Octopus should enable this approach.
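The three run-time configuration mechanisms mentioned above each correspond to a `docker run` flag (image names and paths are illustrative; `--link` is the legacy linking syntax of the era):

```shell
# Environment variables injected at run time.
docker run -d -e API_HOST=http://api.test.example.com myapp:1.2.3

# A mapped volume supplying an environment-specific config file.
docker run -d -v /etc/myapp/test/config.json:/app/config.json myapp:1.2.3

# A linked container exposing a dependency to the app by name.
docker run -d --name db postgres
docker run -d --link db:db myapp:1.2.3
```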

Last time I looked at the Docker Registry protocol it was reasonably straightforward, and it may be sensible for the Octopus server to act as the default registry and let the Docker hosts pull the images that way... especially when images inherit from other images or only the last layers change.

As @serbrech mentioned, Mesos, Kubernetes, etc abstract the Docker host concept and I'm not sure how a Tentacle would fit here. Having the Octopus server act as a registry that a Mesos cluster could pull from would help. Perhaps Octopus could interact with a Mesos cluster or Docker Swarm as just another Deployment Target Type.

Octopus may not support all Docker cluster implementations but given that Mesos supports Windows it would probably be a good candidate.

@PaulStovell
Author

Yeah, just like we can deploy a NuGet package to Tentacles, or Azure Web Apps, I expect we'll start with deploying Docker images to Docker hosts, but then also to things like Mesos.

@marianoc84
Copy link

Hi all, so how did the story end? Can I, with the current Octopus version, use Docker images like NuGet packages for deployment? Or should I go towards something like Kubernetes?

@CumpsD
Copy link

CumpsD commented Jun 30, 2016

Would love to know what the plan is for this too. Since we are moving more and more to using Docker images for everything, the need for Octopus is decreasing.

@michaellandi
Copy link

michaellandi commented Aug 11, 2016

Any update on this? Now that .NET Core has been released I expect more projects will be moving into the world of Docker. I feel like there would be a natural upgrade path for people making this jump if there were Docker support in Octopus. Without it, people are going to be forced either to leave Octopus for other Docker deployment/configuration tools, or to hold off moving to Docker.

@michaellandi

Also, just a follow up to your post:

I'd definitely prefer to see configuration done in Octopus. Maybe this plays into replacing variables inside a docker-compose configuration file for that environment. One of the strengths of Octopus, IMO, is its configuration management. If our Docker configurations are stored in compose files, they have to be stored and managed somewhere else. Having the configuration for both standalone applications and containers all in one place would be an awesome feature!
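One way variable replacement could plug into Compose is via the `${VAR}` substitution that docker-compose performs from the shell environment; a sketch (service and variable names are made up, and the deployment tool is assumed to export the values):

```shell
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: myapp:1.2.3
    environment:
      - API_HOST=${API_HOST}
EOF

# The deployment tool exports per-environment values before invoking compose.
export API_HOST=http://api.staging.example.com
docker-compose config    # prints the file with ${API_HOST} substituted
```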

@seertenedos
Copy link

To add to this, we are moving to the cloud and looking to use Docker with Mono and CoreCLR. We are a big Octopus user, so we really don't want to move away from it if we can help it.

@bchenSyd
Copy link

bchenSyd commented Oct 21, 2016

I have started doing CI/CD using Bamboo and Octopus recently and have gained some knowledge of how things work. My understanding is that an application always comprises three elements: binaries, configuration, and data. So when we talk about CI/CD, it's always about how we build/deploy those three elements. I will talk through them in a Bamboo/Octopus setting under a Docker context.

  • Binaries are environment agnostic and are handled by Bamboo/Jenkins. Yes, we may create different binaries for different branches, but that has nothing to do with environments; we do it because we want to deliver different features. Under traditional settings, the build result, or artifacts, is normally uploaded to the Octopus repository as a NuGet package, zip file, or tarball.
  • Configuration is a purely environmental concept and is handled by Octopus. Octopus should replace the default settings with settings specific to an environment. Once configuration customisation is done, it pushes the updated configuration along with the binaries to the target environment, and its job is done.
  • Data is the persistence layer and shouldn't be re-created during the CI/CD process. However, the data schema may change, and it's also Octopus's job to update the target server's data schema.

Now docker.

As this article says, a Docker container image is the equivalent of a NuGet package (it's actually a tarball), so we should treat it the same way. When Bamboo finishes building, we should push the artifacts somewhere. Since we are using Docker, we should call docker push registry/image-name:tag at the end of the build job. I don't think Octopus provides a Docker registry service now, so we either push to a public Docker registry (Docker Hub) or a private one. Also note that the container image created here is environment agnostic.
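The end-of-build step described here is just a tag and a push; using the registry address from the Dockerfile below, it would look like:

```shell
# Tag the locally built image with the registry address, then push it.
docker build -t onboarding .
docker tag onboarding syd-linux-01:443/onboarding
docker push syd-linux-01:443/onboarding
```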

When we start doing deployment, we should update the Docker container with updated environment settings, the same as we did for NuGet packages. Unfortunately, due to Docker's design, we can't mutate a container image's content; we can only build another layer on top of the existing image and override files in it (thanks to the union file system), which is easy to do via docker-compose.yml.

Here I'm using docker-compose.yml to build another layer, containing only the settings, on top of the existing container image.

docker-compose.yml

version: '2'
services:
    onboarding-frontend:
        container_name: onboarding-prod-1.0
        labels:
            com.bambora.version: "prod-1.0"
            com.bambora.releaseNote: "onboarding production 1.0"
        build: .
        image: onboarding:prod-1.0
        ports:
            - 8000:80

Dockerfile
#syd-linux-01:443/onboarding is pushed by bamboo during build
FROM syd-linux-01:443/onboarding
#now we need to replace the configuration with an environment-specific one
RUN  rm -rf /public/config_env.json
COPY ./config_env.json  /public/
#keep the bootstrap command the same
CMD ["nginx","-g","daemon off;"]


config_env.json
{
  "API_HOST": "http://my-environment-specific-api-host-addr"
}

I can then simply send these three files to a server and run:

docker-compose up

All done! (I have verified that the above code works on my local machine.)
