This page is not a complete implementation; it's more a starting point to get you running, understand the basics, and then improve upon it.
You had build context issues yesterday on stream; you figured out the proper route with a `.dockerignore` file, although you couldn't get it working. You would have a `.dockerignore` at the root of your monorepo looking something like this:
# Ignore everything
*
# Add lock file
!yarn.lock
# Add owned packages package.json
# Yes, Docker is amazing and has no issues Kappa
# https://github.com/moby/moby/issues/30018
!packages/account_server/package.json
!packages/game_server/package.json
!packages/matchmaker/package.json
!packages/overseer/package.json
# Add owned code
# Yes, Docker is still amazing and still has no issues Kappa
# https://github.com/moby/moby/issues/30018
!packages/account_server/src/*
!packages/game_server/src/*
!packages/matchmaker/src/*
!packages/overseer/src/*
With this kind of configuration, when using a build command like `docker build -t test ../.. -f Dockerfile` from the `overseer` directory, you won't have the same issues and sending the build context will be nearly instantaneous. For example, with this exact file from the Bot Land monorepo, I get:
Sending build context to Docker daemon 1.462MB
Note: this `.dockerignore` file can of course later be generated to avoid the manual repetition.
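As a minimal sketch of such a generator, assuming every owned package lives under `packages/<name>` with its code in `src/` (mirroring the file above; adjust to your layout):

```sh
#!/bin/sh
# Regenerate the whitelist-style .dockerignore shown above.
{
  echo '# Ignore everything'
  echo '*'
  echo '# Add lock file'
  echo '!yarn.lock'
  echo '# Add owned packages package.json and code'
  for pkg in packages/*/; do
    echo "!${pkg}package.json"
    echo "!${pkg}src/*"
  done
} > .dockerignore
```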
If you're having build context issues, you can debug them and see exactly what's sent to the Docker daemon with a `Dockerfile` like this one:
FROM alpine
WORKDIR /context
# Copy the entire build context into the image so we can inspect it.
ADD . .
CMD ["/bin/sh"]
Then run it to check exactly what's in your build context. You can also replace the `CMD` with `["find"]`, although this would list everything on stdout.
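For example, assuming you saved this debug file as `Dockerfile.context` next to your regular one (file name and image tag are just placeholders), you could inspect the context like this:

```sh
# Build an image containing the full build context.
docker build -t context-debug ../.. -f Dockerfile.context
# Override the default /bin/sh with find to dump the file list.
docker run --rm context-debug find /context
```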
This is a basic example to be considered only as a starting point. I tried to comment most steps so there is a reasoning behind each of them.
FROM node:8.12-alpine AS build
WORKDIR /app
# Build for production.
ENV NODE_ENV=production
# Copy only package.json & lockfile.
# We only copy these 2 files right now which means that 3rd party packages can
# be cached if these 2 files are not modified.
# If we were to copy our application code too, as our code is modified way more
# often than dependencies, the cache would be invalidated every time.
COPY yarn.lock .
COPY packages/overseer/package.json .
# Install all dependencies.
RUN yarn --frozen-lockfile
# Copy the application code.
COPY packages/overseer/src src
# Build the application.
RUN yarn build
FROM node:8.12-alpine
WORKDIR /app
# Build for production.
ENV NODE_ENV=production
# Copy only package.json & lockfile.
# Check explanations from build stage for more details.
COPY yarn.lock .
COPY packages/overseer/package.json .
# Install production only dependencies.
RUN yarn --frozen-lockfile --prod
# Clean the yarn cache.
# The `--no-cache` is at the RFC stage.
# https://github.com/yarnpkg/rfcs/pull/53
RUN yarn cache clean
# Copy our application code from the build stage.
COPY --from=build /app/dist dist
# Delete lockfile.
RUN rm yarn.lock
# Start the application.
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#entrypoint
ENTRYPOINT ["yarn"]
CMD ["start"]
Like I said, this is a starting point; tons of things are not handled or could be optimized:
- Using a yarn cache from the host locally, saving the yarn cache on Circle CI, and sharing this cache between stages (see the sketch after this list).
- Some packages don't have a build step: there's no `dist` or `lib` folder, and they run directly from `src`. It's annoying as you have to handle each app manually and differently rather than pretty much duplicating some `Dockerfile`. One potential admHack for packages without this step could be to just add an npm `build` script like `"cp -R src dist"` ^^
- Bot Land npm registry authentication is not in the example.
- No ports are exposed in the example (the sketch after this list shows an `EXPOSE`).
- No specific user is used to run the application in the example (same sketch).
- I don't handle the equivalent of the `initProcessEnabled` property that you'll use in your task definition to avoid the PID 1 signal-handling issue; the local equivalent is `--init` from `docker run`.
- Etc.
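To make a few of these bullets concrete, here is a hedged sketch of what the second stage could look like with a BuildKit yarn cache mount, an exposed port, and a dedicated user. The port number is a placeholder, the cache path should be verified with `yarn cache dir`, and cache mounts require BuildKit to be enabled:

```Dockerfile
# syntax=docker/dockerfile:1
FROM node:8.12-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY yarn.lock .
COPY packages/overseer/package.json .
# BuildKit cache mount: the yarn cache survives between builds without
# ending up in an image layer (path from `yarn cache dir`, adjust if needed).
RUN --mount=type=cache,target=/usr/local/share/.cache/yarn \
    yarn --frozen-lockfile --prod
# Same build stage as in the Dockerfile above.
COPY --from=build /app/dist dist
# Document the port the application listens on (placeholder value).
EXPOSE 8080
# Run as the unprivileged `node` user shipped with the official images.
USER node
# At runtime, `docker run --init` (or `initProcessEnabled` in a task
# definition) gives you a proper PID 1 for signal handling.
ENTRYPOINT ["yarn"]
CMD ["start"]
```

With the cache mounted, the `yarn cache clean` step also becomes unnecessary, since the cache never lands in a layer in the first place.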