Currently, Octopus deploys NuGet packages to destination machines, and handles all the orchestration around the deployment.
From an application packaging point of view, a NuGet package is similar to a Docker container image:
- To build a NuGet package, we take your code, compile it, and bundle the results into a package, and give it a version stamp.
- To build a Docker image, we take a Dockerfile and run `docker build`, which creates an image with a version stamp.
From a deployment orchestration point of view, the two are also very similar:
- We want to build a NuGet package once, then deploy it many times (test, staging, prod, across many machines). The NuGet package contains everything we need to run the app (except the OS/runtimes) so we can have confidence that what we test is what we put in prod. After testing in test, we wouldn't want to recompile our code from scratch to deploy to prod.
- We want to build a Docker image once, then deploy it many times (test, staging, prod, across many hosts). The image contains everything needed to run the app, including the OS, so we are guaranteed that what we tested is what goes to prod. After testing in test, we wouldn't want to build a new image to deploy to prod.
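The build-once/deploy-many flow above can be sketched as a single versioned tag that is pushed once and pulled everywhere. The registry, image name, and version number below are assumptions for illustration, not from the article:

```shell
# A hypothetical CI build job: build and stamp the artifact exactly once.
VERSION="1.4.2"
IMAGE="registry.example.com/acme/web:${VERSION}"

# In the build job (requires Docker and a Dockerfile in the working directory):
#   docker build -t "${IMAGE}" .
#   docker push "${IMAGE}"
# Each environment then pulls the identical artifact -- no rebuild:
#   docker pull "${IMAGE}"     # test hosts
#   docker pull "${IMAGE}"     # prod hosts, later, byte-for-byte the same image
echo "${IMAGE}"
```

Because every environment pulls the same tag, promotion never involves a recompile, which is exactly the property we rely on with NuGet packages.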
So Docker support in Octopus would mean:
- In addition to "Deploy a NuGet package" steps, we have "Deploy a Docker image" steps
- In addition to NuGet feeds that provide packages, we have Docker registries
- When you create a release, as well as choosing the version of NuGet packages, you can choose the version of Docker images
- As you promote the release through test/staging/prod, as well as using the same NuGet packages, you'll use the same Docker images
- Docker hosts become machines that you manage in the Octopus environments tab
Some things we need to figure out:
- Image distribution. Do we expect all Docker hosts to pull images directly from the registry, or will Octopus pull images and transfer them directly to the hosts?
- Configuration. Can we assume all configuration is either built into the image or provided externally (when `docker run` is called), or will we need to modify images slightly?
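For the configuration question, one option is to keep the image environment-agnostic and inject settings as environment variables at run time. A minimal sketch, where the image name, port, and variable names are assumptions:

```shell
# Per-environment setting lives outside the image.
CONNECTION_STRING="Server=test-db;Database=app"   # differs per environment
IMAGE="acme/web:1.4.2"                            # identical in every environment

# On a host with Docker installed, the deployment step would be:
#   docker run -d -p 80:5000 -e "CONNECTION_STRING=${CONNECTION_STRING}" "${IMAGE}"
echo "run ${IMAGE} with CONNECTION_STRING=${CONNECTION_STRING}"
```

Under this model the image never needs modification; Octopus would only vary the `-e` values per environment.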
I have recently started doing CI/CD using Bamboo and Octopus and have gained some understanding of how things work. In my view, an application always comprises three elements: binaries, configuration, and data. So when we talk about CI/CD, it's really about how we build and deploy those three elements. I will talk through them in a Bamboo/Octopus setting, in a Docker context.
Now, Docker.
As this article says, a Docker container image is the equivalent of a NuGet package (it's actually a tarball), so we should treat it the same way. When Bamboo finishes building, we should push the artifacts somewhere. Since we are using Docker, that means calling
docker push registry/image-name:tag
at the end of the build job. I don't think Octopus provides a Docker registry service today, so we would push either to a public registry (Docker Hub) or to a private one. Also note that the container image created here is environment-agnostic. When we deploy, we need to apply environment-specific settings to the container, just as we did for NuGet packages. Unfortunately, by Docker's design we can't mutate an image's contents; we can only build another layer on top of the existing image and override files in it (thanks to the union file system), which is easy to do via a docker-compose.yml.
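The docker-compose.yml idea can be sketched as follows: compose runs the unmodified image and supplies per-environment overrides on top of it. The service name, image tag, port, and variable below are assumptions for illustration:

```shell
# Generate a per-environment compose file; the image itself is never changed.
cat > docker-compose.yml <<'EOF'
web:
  image: acme/web:1.4.2                               # same image the CI job pushed
  ports:
    - "80:5000"
  environment:
    - CONNECTION_STRING=Server=prod-db;Database=app   # per-environment override
EOF
cat docker-compose.yml
# On the target host: docker-compose up -d
```

Only the `environment` section differs between test, staging, and prod; the `image` line stays identical, preserving the build-once guarantee.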
I can then simply send these 3 files to a web server and run a docker-compose command. All done! (I have verified the above code, and it works on my local machine.)