This document describes how to use `docker build` for multiple use cases. The two use cases covered are:
Use Case | Description | Command |
---|---|---|
Remote Source | Build your app image in the most reproducible way possible. This case should be painless for users. Assumes nothing is available in the local Docker build context; instead the context is fetched from a URL. This is the build to use when creating a tagged image for a repository. | `docker build https://github.com/<user>/<project>.git#<tag>` |
Local Source | Build your app image assuming the local Docker build context includes the source files you need. This can be convenient during development, but can be problematic in CI environments that make different assumptions about the build context. Leverages the Docker COPY instruction and can take advantage of the Docker build cache. | `docker build .` |
See Also: DockerComposeStrategy
Docker multi-stage builds are used to define a builder stage and a separate runner stage. The builder stage relies on the build context, which should be a URL to a tagged source version for production or a local directory for development.
ARG BUILD_IMAGE=gradle:7.4-jdk17
ARG RUN_IMAGE=quay.io/wildfly/wildfly:26.0.1.Final
################## Stage 0
FROM ${BUILD_IMAGE} as builder
ARG CUSTOM_CRT_URL
USER root
WORKDIR /
RUN if [ -z "${CUSTOM_CRT_URL}" ] ; then echo "No custom cert needed"; else \
       wget -O /usr/local/share/ca-certificates/customcert.crt ${CUSTOM_CRT_URL} \
       && update-ca-certificates \
       && keytool -import -alias custom -file /usr/local/share/ca-certificates/customcert.crt -cacerts -storepass changeit -noprompt \
       && echo "--cert=/etc/ssl/certs/ca-certificates.crt" > /tmp/cert_arg \
       ; fi
COPY . /app
# Note: a variable exported in one RUN does not persist to later RUN instructions, so the optional cert argument is read from a file instead
RUN cd /app && gradle build -x test --no-watch-fs $(cat /tmp/cert_arg 2>/dev/null)
################## Stage 1
FROM ${RUN_IMAGE} as runner
ARG RUN_USER=jboss
USER root
COPY --from=builder /app/docker-entrypoint.sh /docker-entrypoint.sh
COPY --from=builder /app/build/libs /opt/jboss/wildfly/standalone/deployments
RUN chown -R ${RUN_USER}:0 ${JBOSS_HOME} \
&& chmod -R g+rw ${JBOSS_HOME}
USER ${RUN_USER}
ENTRYPOINT ["/docker-entrypoint.sh"]
See Also: Working Example
In this scenario nothing is provided by the host (other than Docker and Internet access) and the Docker build context is fetched via the URL argument to `docker build`. With this strategy the app is programmatically checked out of source control (GitHub), all required OS dependencies are fetched (for example via wget, curl, apt-get, or yum), and then the build is executed (for example with Gradle, Make, or python setup). For maximum repeatability the source checkout should use a tagged version.
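As a sketch, the remote build command can be assembled like this (the repository URL and tag below are hypothetical placeholders; substitute your own):

```shell
# Hypothetical repository and tag; substitute your own values.
REPO_URL="https://github.com/example/myapp.git"
TAG="v1.2.3"

# The fragment after '#' selects the git ref Docker checks out as the build context.
CONTEXT="${REPO_URL}#${TAG}"

# Print the command to run; tagging the image to match the source tag aids traceability.
echo docker build "${CONTEXT}" -t "myapp:${TAG}"
```

Build arguments such as `CUSTOM_CRT_URL` from the Dockerfile above can be appended with `--build-arg CUSTOM_CRT_URL=<url>`.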
In this scenario more assumptions are made about the host, specifically about the Docker build context provided. In particular, the source code and related files are assumed to be available locally to COPY into the image. The build is still run inside the container, however, alleviating users from having to install build tools on the host machine and providing solid reproducibility. There can still be issues with cross-platform line endings and file system permissions. Interference from a build executed on the host machine is possible as well (stale build artifacts from the host could inadvertently be copied into the image). The main advantage of this scenario is testing local changes without having to create a tagged version and push to GitHub. It often requires the use of a .dockerignore file to selectively COPY files from the local directory.
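A minimal .dockerignore sketch for a Gradle project like the one above (entries are illustrative; adjust to your layout). Excluding the host's `build/` directory guards against the stale-artifact problem mentioned above:

```
# Illustrative .dockerignore for a Gradle project
.git
.gradle
build/
.idea/
*.iml
```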
Multi-stage build Dockerfiles are great, but sometimes you actually do need a separately built and cached image. Optimizing build time is often important when you need to re-run the build frequently and caching cannot be done in CI. In this case a base image (perhaps built from a file named Dockerfile-base) that contains all of the OS-level package installation and is cached on DockerHub or equivalent can speed up the build, leaving only the direct application source code in the build.
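A minimal sketch of this split, with hypothetical image and package names:

```dockerfile
# Dockerfile-base (hypothetical): OS-level packages only; build and push once, e.g.
#   docker build -f Dockerfile-base -t example/myapp-base:1.0 .
FROM gradle:7.4-jdk17
USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends wget \
    && rm -rf /var/lib/apt/lists/*
```

The application Dockerfile then starts from the cached image (`FROM example/myapp-base:1.0`), so only the source COPY and application build layers re-run when source changes.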
Often integration tests themselves need to run in a Docker container because they rely on test tools such as pytest or JUnit. In this scenario a separate Dockerfile (perhaps named Dockerfile-test) OR a separate build target (--target) can be useful.
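For the --target approach, a test stage could be appended to the multi-stage Dockerfile above; this sketch assumes the Gradle project layout already shown:

```dockerfile
# Hypothetical test stage; run only this stage with:
#   docker build --target tester .
FROM builder AS tester
WORKDIR /app
# Run the tests that the builder stage skipped with -x test
RUN gradle test --no-watch-fs
```

Passing `--target tester` stops the build at that stage, so the runner image is not produced during a test run.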
Two popular platforms that support Docker image building are GitHub and DockerHub.
GitHub Actions - See: https://github.com/marketplace/actions/build-and-push-docker-images
DockerHub Automated Builds (requires a DockerHub team account) - See: https://docs.docker.com/docker-hub/builds/
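A sketch of a GitHub Actions workflow using the build-and-push action linked above; the image name, tag trigger, and secret names are hypothetical and must match your own setup:

```yaml
# .github/workflows/docker.yml (illustrative)
name: Publish image
on:
  push:
    tags: ['v*']
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v3
        with:
          # build-push-action defaults to the Git context, matching the Remote Source use case
          push: true
          tags: example/myapp:${{ github.ref_name }}
```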
You can store images in various repositories. DockerHub works reasonably well since it is the default registry for Docker Desktop. One downside is that the git repo README is only synced if you have a DockerHub team account (otherwise you have to manually copy and paste, punt on keeping the README synced between GitHub and DockerHub, or use a GitHub Action - see JeffersonLab/myquery#4). GitHub offers a container image registry as part of its generic artifact repository, but it requires users to have a GitHub account for access, so it is less appealing (actually quite detrimental to the quick start with compose use case).