This Dockerfile is intended for SvelteKit applications that use adapter-node, so it assumes that you have already installed and configured the adapter.
```dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json .
RUN npm ci
COPY . .
RUN npm run build
RUN npm prune --production

FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/build build/
COPY --from=builder /app/node_modules node_modules/
COPY package.json .
EXPOSE 3000
ENV NODE_ENV=production
CMD [ "node", "build" ]
```
And here's a matching `.dockerignore` file:

```
Dockerfile
.dockerignore
.git
.gitignore
.gitattributes
README.md
.npmrc
.prettierrc
.eslintrc.cjs
.graphqlrc
.editorconfig
.svelte-kit
.vscode
node_modules
build
package
**/.env
```
Let's break down this Dockerfile and explain its various parts:
Note that we're doing what's called a multistage build, hence the two `FROM` statements and the `AS builder` part in the first one. This is highly recommended for applications that have a build step (such as SvelteKit apps), as it drastically reduces the size of the final image. To learn more, see the official Docker documentation on multistage builds.
```dockerfile
FROM node:18-alpine AS builder
```
We are setting `node:18-alpine` as the base image of our first stage, but you can of course change this, or use another version, if you see fit. The `AS builder` part simply assigns a custom name to this first stage, so that we can reference it more easily in the second stage.

The `alpine` variant is based on Alpine Linux, a minimal Linux distribution used mainly in containers because of its small size.

Even though the size of the base image used in the first stage ultimately makes no difference to the final image size (since only the second stage remains in the end), it's normally a good idea to use the same base image in the build stage as in the final stage (or a derivative of it), so that you can be sure the build artifacts created in the first stage will be compatible with the base image used in the final stage.

Note that there are some tradeoffs that come with choosing Alpine that you should probably be aware of. Alpine uses musl and BusyBox instead of the de facto standard glibc and GNU Core Utilities. If you don't know what these are, that's fine; it just means that some programs might not run, or might run differently, on Alpine than they do on bigger distributions like Ubuntu and Debian, but Node.js applications should be fine for the most part. For an overview of the differences between Docker images for various distros (e.g. Alpine, Debian, Ubuntu), check out this article.
If you don't want to use Alpine, you can remove the `-alpine` suffix from the image name, leaving only `node:18`, which defaults to the Debian-based image. If you decide to do this, remember to do the same for the next stage as well.
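For instance, the Debian-based variants of the two stage declarations would look like this (the rest of the Dockerfile stays the same):

```dockerfile
# Debian-based images: larger, but with broader compatibility (glibc, GNU coreutils)
FROM node:18 AS builder
# ...build steps unchanged...

FROM node:18
# ...runtime steps unchanged...
```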
```dockerfile
WORKDIR /app
```
This instruction is pretty self-explanatory. We're simply setting the working directory to `/app`, so that we don't have to repeat it for each subsequent `COPY` instruction.
```dockerfile
COPY package*.json .
```
We copy the `package.json` and `package-lock.json` files into the working directory. The wildcard character `*` here is just for convenience, to avoid explicitly naming both files, as in `COPY package.json package-lock.json .`.
```dockerfile
RUN npm ci
```
We run the `npm ci` command to install all the dependencies, based on the newly copied `package-*.json` file(s). As you can see, we're using `npm ci` as opposed to the regular `npm install`, since the former is better suited to production environments. For more information, see npm ci in the official npm documentation.

Note that we can't do `npm ci --production` (which skips the dev dependencies), because the SvelteKit package itself is a dev dependency and we need it to build the app.
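As an aside, recent npm versions (v8 and later) deprecate the `--production` flag in favor of `--omit=dev`; if you're on a newer npm, the equivalent commands would be:

```shell
# Install only production dependencies (equivalent to the old --production flag):
npm ci --omit=dev

# Remove dev dependencies after building (equivalent to npm prune --production):
npm prune --omit=dev
```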
```dockerfile
COPY . .
```
This instruction copies the rest of the source files into the working directory. The reason we copied the `package-*.json` file(s) and installed the dependencies first is that we want to take advantage of Docker's layer caching mechanism, which we couldn't do if we copied everything in one go. See this. This is a very common practice when writing Dockerfiles for Node applications and you'll see it pretty much everywhere.
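To make the caching benefit concrete, here is a sketch of the two orderings (the second is what the Dockerfile above does):

```dockerfile
# Without the split: any source file change invalidates the cache,
# so npm ci is re-run on every build
COPY . .
RUN npm ci

# With the split: npm ci is re-run only when package*.json changes
COPY package*.json .
RUN npm ci
COPY . .
```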
```dockerfile
RUN npm run build
RUN npm prune --production
```
We run the `npm run build` command so that SvelteKit generates the `build` directory, containing a standalone Node server that serves our application (assuming, of course, that you're using adapter-node). We subsequently run the `npm prune --production` command to delete all the dev dependencies from the `node_modules` folder, since we no longer need any of them.

Now that we have our `build` directory and its contents ready, we can begin the final stage, which is responsible for running our application.
```dockerfile
FROM node:18-alpine
```
Here, we're using the same Alpine-based image we used in the first (build) stage. If you'd rather have a Debian-based image while keeping the size down, you can use `node:18-slim` (which is bigger than `node:18-alpine`, but smaller than `node:18`). The `slim` variant doesn't contain things like `npm`; it only includes `node` itself. So, if you do need `npm` in your final image, stick to the non-slim variants.
```dockerfile
WORKDIR /app
```
Same as the one in the first stage. Sets the working directory to `/app`.
```dockerfile
COPY --from=builder /app/build build/
COPY --from=builder /app/node_modules node_modules/
COPY package.json .
```
The adapter-node documentation states the following:

> You will need the output directory (build by default), the project's package.json, and the production dependencies in node_modules to run the application.
And so here, we are copying the `node_modules` directory (stripped of all the dev dependencies, since we ran `npm prune --production` in the previous stage), the `package.json` file, and of course the `build` directory, from the previous stage into the working directory.
Note that some Dockerfiles (the one in this article, for instance) make the mistake of copying everything from the first stage over to the second one, including the source files. This is completely redundant, unnecessarily increases the size of the image, and basically defeats the whole point of using multistage builds. Only the artifacts that are strictly necessary for running the application should be copied over to the final stage; everything else should be left out.
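As an illustration, compare the wasteful pattern with the targeted one used in our Dockerfile:

```dockerfile
# Wasteful: drags source files, configs and caches into the final image
COPY --from=builder /app .

# Targeted: only what the server actually needs at runtime
COPY --from=builder /app/build build/
COPY --from=builder /app/node_modules node_modules/
COPY package.json .
```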
```dockerfile
EXPOSE 3000
```
According to the adapter-node documentation, the default port that the application will run on is 3000, so that's the port we're exposing via this instruction. You can also change the port by assigning a different number to the `PORT` environment variable, but there's no need to do that in this case.
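For instance, assuming the image is tagged `my-sveltekit-app` (a made-up name for illustration), you could run it like this:

```shell
# Map host port 3000 to the container's default port 3000:
docker run -p 3000:3000 my-sveltekit-app

# Or make the server listen on a different port via the PORT variable:
docker run -e PORT=8080 -p 8080:8080 my-sveltekit-app
```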
```dockerfile
ENV NODE_ENV=production
```
Here we set the `NODE_ENV` variable to `production`, to let Node.js and any other code we're about to run know that this is a production environment.
```dockerfile
CMD [ "node", "build" ]
```
We finally run the `node build` command (equivalent to `node build/index.js`) to start the server.
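Putting it all together, building and running the image locally might look like this (the `my-sveltekit-app` tag is just an example name):

```shell
docker build -t my-sveltekit-app .
docker run -p 3000:3000 my-sveltekit-app
# The app should now be reachable at http://localhost:3000
```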
Also, here's a great YouTube video on this subject, building the same Dockerfile step-by-step: Containerize SvelteKit NodeJs with Docker
You are correct, you do need a process to build first. However, I don't ever want to build inside a container unless it's part of a build toolchain. In my case, I have a publish.sh script that runs npm run build, then packages up the build files and sends them off to my deployment server. The deployment server will then run the small Dockerfile I include.
This is the reason I do things this way:
I build locally in WSL on my machine, which isn't the same environment as my server. The separation of build and dependency cleanup ensures that the apps will always get deployed.
Now, what I could be doing is building with CI/CD tooling, including the dev dependency cleanup, then creating a Docker image and pushing it to a registry, where the image could get deployed as a container. But that requires a dependency on an outside toolchain, i.e. GitHub Actions. I don't want that if I'm deploying directly from my local machine as a one-man team.
I hope that helps. In a nutshell, you are correct: you need to build prior to the dev dependency cleanup. And of course, you can do both in the Dockerfile as separate build steps. My needs just differ from most.
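For reference, a minimal sketch of what such a publish.sh might look like (the host name, paths and archive name here are all made up for illustration, and assume a newer npm that supports `--omit=dev`):

```shell
#!/bin/sh
set -e

# Build locally and strip dev dependencies
npm run build
npm prune --omit=dev

# Package only what the server needs to run the app
tar czf deploy.tar.gz build node_modules package.json Dockerfile

# Ship it to the deployment server (hypothetical host/path)
scp deploy.tar.gz deploy@example.com:/srv/app/
```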
--- Another option I have ---
I could also just build the image locally and push it to my container registry, but doing so adds file-size overhead to my pushes, and I would still be required to trigger a deployment, perhaps with a webhook to my CapRover server. Given how iffy my internet connection can be sometimes, never knowing if I might be using my phone's hotspot, the smallest possible file size for deployment is best for me. It also ensures that the least amount of work is required of the CapRover server to deploy under this scenario.
--- Further ---
I am building a whole new native Docker Swarm deployment tool called Capitano that will address everything, but that's a topic for another place and time, not here or now.