In case you missed the first part of this series, [check this link here] to learn more about the command line interface that Docker ships with. We'll be using those commands in this section. If you're already familiar with the Docker CLI, feel free to skip part 1 and jump right in.
So, now that we're familiar with using Docker in the terminal, let's use what we've learned to Dockerize our application. I'll be using the out-of-the-box app that the Vue CLI provides. (This is just because I love how simple their CLI is and I love working with Vue. These steps can easily be applied to whatever frontend application you're building.)
I'll be spinning up a quick sample application. Feel free to follow along.
```shell
vue create docker-demo
```
- Choose "Manually select features"

In part 3, I'll be discussing E2E testing with Cypress, so if you want to learn about that, this is the only configuration option that matters for the remainder of the series. If you're curious, though, I've chosen the following options:

- vue-router, vuex, babel, eslint, unit-mocha, e2e-cypress
Once everything is installed, `cd` into your new project folder, open your preferred IDE, and let's dig in.
Tag: `v0.1.0`
Before we dive straight in, we need to consider what is important for local development. For me, I want to make sure all my developers are using the same dependencies, and I don't want to worry about which version of Node they have installed. At the same time, I want to retain the convenience of HMR (Hot Module Replacement) so that developers don't need to constantly refresh the application to see their changes reflected.
So, let's keep these requirements in mind as we progress so we don't lose sight of what provides us value.
In part 1, we used `docker run` with an image name. In this case, however, we want to make a custom image from scratch so we can define our dependencies, ports, volumes, and much more.
Allow me to introduce the `Dockerfile`. This is the file that outlines the steps to build your custom image. Create a new file in the root of your project named `Dockerfile`.
Below, I'll show you what the 'MVP' would look like. We'll talk about what each line means and does, then we'll look at how we can improve on it.
```dockerfile
FROM node:9.11.1
WORKDIR /usr/src/app
COPY . /usr/src/app/
RUN npm install
ENV PORT=8080
CMD ["npm", "run", "serve"]
```
Tag: `v0.1.1`
- `FROM` specifies the base image on which to build your custom image. Since we're running a Node application, let's choose one of the official Node images. `FROM node:9.11.1` means our application image starts from the Node v9.11.1 image.
- `WORKDIR` sets the working directory inside the image.
- `COPY` copies new files, directories, or remote files into the container/image. `COPY . /usr/src/app/` copies our entire workspace into the container/image.
- `RUN` executes a command in a new layer on top of the current image and commits the result. `RUN npm install` executes our install script and saves the result as a new layer.
- `ENV` sets environment variables. `ENV PORT=8080` sets the environment variable `PORT` for later use.
- `CMD` provides the default command to run when your container is created/started. `CMD ["npm", "run", "serve"]` sets this as the default command when we start our container.
In this section, we'll be building longer and longer `docker` commands to run in the terminal. These can be a pain to type over and over. Instead, let's use npm scripts to make it easier to run and re-run these commands as we prototype our solution.
In your `package.json`, let's add a new command to the `scripts` section:

```json
"build:dev": "docker build -t martindevnow/docker_demo_dev:latest ."
```
`-t` is the tag flag. This will be the tag you use in your `docker run` command to reference this image.
Now, run `npm run build:dev` and at the end of the process, you should see:

```
Successfully tagged martindevnow/docker_demo_dev:latest
```
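If you want to confirm the image exists locally, you can list your images (a quick sanity check; the repository name below assumes the tag from our build script):

```shell
# List locally stored images for our new repository name
docker images martindevnow/docker_demo_dev
```

You should see the `latest` tag listed along with an image ID and size.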
Tag: `v0.1.2`
Guess what! We just made our first custom image! Let's run it and make sure everything is working. In your terminal, execute `docker run martindevnow/docker_demo_dev:latest`.
If you're using Vue, like I am, you should see the following output:

```
App running at:
- Local: http://localhost:8080/

It seems you are running Vue CLI inside a container.
Access the dev server via http://localhost:<your container's external mapped port>/
```
Let's try accessing http://localhost:8080 in our browser.
Hmmm... it's not working...

What's wrong?
Well, the output Vue gave us should be a hint. There's something we didn't really cover in part one: flags. We need to add the `-p` flag to map a port on our host machine to our container's port. When we visited localhost:8080 above, that was our host machine's port 8080. Our host machine has no way of knowing that we want to forward traffic on that port to our container.
Let's change our `run` command to use this flag, and while we're at it, let's add it to our `package.json` as a script to make our lives easier. Add the following to your `package.json`:

```json
"start:dev": "docker run -p 8080:8080 martindevnow/docker_demo_dev:latest"
```
`-p <host-port>:<container-port>` tells Docker that traffic to the host machine (i.e. via localhost) on port `<host-port>` should be directed to the container at the `<container-port>` you define.
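Note that the host and container ports don't need to match. As a sketch (assuming the same image tag as above), mapping host port 3000 to the container's port 8080 would make the app reachable at http://localhost:3000 instead:

```shell
# Host port 3000 -> container port 8080
docker run -p 3000:8080 martindevnow/docker_demo_dev:latest
```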
Now, when we visit localhost:8080, we can see our Vue CLI app running! Yay!
Tag: `v0.1.3`
Let's go into our app and change something on the landing page. I'm just going to add an `<h1>` tag to announce my satisfaction working with Docker. Save that file, go back to your browser... and... no change...
Maybe if we refresh... ?
Nope...
Sigh...
If you look back at our `Dockerfile`, we copied the files from our host machine into our image. So, each time we run our image, it uses the state the files were in at the time we built the image. That means, if we want to see our changes, we would need to run `npm run build:dev` and `npm run start:dev` all over again... for EVERY CODE CHANGE!
Clearly this cannot stand. We need a way to tell Docker to use the files on our host machine inside our container, so that any change we make on the host is reflected in the running container.
Let's go back to our `package.json` and add a mount. Our new command should look like this:

```json
"start:dev": "docker run -p 8080:8080 --mount type=bind,src=`pwd`,dst=/usr/src/app martindevnow/docker_demo_dev:latest"
```
This also means we can remove the `COPY` line from our `Dockerfile`. Let's update that. But wait... now we're running `npm install` in our `Dockerfile`, but there's no `package.json` to define our dependencies... Does this mean we have to run `npm install` on our host machine? Heck no!
One thing we want to take advantage of is how Docker layers images. If you watch as Docker builds your image, you can see a hash for each layer as it completes. More importantly, Docker also caches layers. If Docker can see that nothing has changed in a layer since a previous build (and all previous layers are also identical), it will use a cached version of that layer, saving you and your developers precious time!
But how can we leverage this? Think of it as ordering steps from least likely to change to most likely to change: the steps least likely to change should come earliest in your Dockerfile to take the most advantage of this caching.
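As a sketch of this principle (not our final Dockerfile): copying the whole workspace before installing means any source change invalidates the cached install layer, while copying only the dependency manifests first lets `npm install` stay cached between builds.

```dockerfile
# Cache-unfriendly: any source change invalidates every layer below it
# COPY . /usr/src/app/
# RUN npm install

# Cache-friendly: the install layer is rebuilt only when
# package.json or package-lock.json actually change
COPY package*.json /tmp/
RUN cd /tmp && npm install
```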
So, let's copy our `package.json` and `package-lock.json` into our Docker image first. These are less likely to change compared to our actual components, HTML, etc. Let's also set our environment variables right off the bat.
This is what our updated `Dockerfile` should look like:
```dockerfile
# Base Image
FROM node:9.11.1

# Used by Node and Webpack
ENV NODE_ENV=development

# Specify container port
ENV PORT=8080

# Copy package.json to a temporary location
COPY package*.json /tmp/

# Install dependencies here
RUN cd /tmp && CI=true npm install

# Set up our app directory
WORKDIR /usr/src/app

# Copy our dependencies to our app dir
RUN cp -a /tmp/node_modules /usr/src/app/

# Expose the container port
EXPOSE 8080

# Run our app
CMD ["npm", "run", "serve"]
```
Tag: `v0.2.0`
You won't see much of a change the first time you build, but as you rebuild, you'll notice it's faster. We cache the `npm install` step away from our working directory, then copy the resulting `node_modules` in, which also replaces any system-specific dependencies.
So, let's run `npm run build:dev` to rebuild our image using our updated custom `Dockerfile` and see what changes.
The most important thing to notice is that we're no longer copying our code into the image. In that case, how do we run our app? Well, we need to mount our host's workspace into the Docker container. Instead of doing this in the `Dockerfile` (because we don't want it baked into the image), we do it in our `docker run` command (so it applies to the container).
Let's update our `npm run start:dev` to mount this folder. You should now have the following command in your `package.json`:

```json
"start:dev": "docker run -p 8080:8080 --mount type=bind,src=`pwd`,dst=/usr/src/app -v /usr/src/app/node_modules martindevnow/docker_demo_dev:latest",
```
Since we're only changing our `docker run` command, we don't need to rebuild. Let's just test it out with `npm run start:dev`.
It should be running smoothly for you, but let's put it up against a real test. After you confirm that your container is running, head to localhost:8080 to take a look. Now, in your editor, change a file (some template or something easy to notice in the browser). If everything was done correctly, you should see your browser update that change automatically without even needing to refresh!
Tag: `v0.2.1`
We mounted our `pwd` (present working directory) to the `WORKDIR` in Docker. We also set `node_modules` as a volume so that our local `node_modules` won't overwrite the ones in the container.
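If you ever want to verify what ended up mounted where, `docker inspect` can show the container's mounts (a quick check; the `--filter ancestor=...` lookup assumes the image tag we've been using):

```shell
# Grab the running container's ID by its image, then print its mounts
CONTAINER_ID=$(docker ps -q --filter ancestor=martindevnow/docker_demo_dev:latest)
docker inspect -f '{{ json .Mounts }}' "$CONTAINER_ID"
```

You should see one `bind` mount for your workspace and one anonymous `volume` for `/usr/src/app/node_modules`.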
I want to make a few final changes to some of our npm commands. Currently, if we run `start:dev` repeatedly, we'll end up with more than one container. This isn't necessarily a bad thing, but since we've hardcoded the port, we'll run into collisions, and we'll consume more resources than we need.
In order to easily stop and remove old containers, we should name them so we can refer to them later. Additionally, I want to remove any existing container before starting up a new one.
Take a look at our two updated build and start commands:

```json
"build:dev": "docker build -t martindevnow/${npm_package_name}_dev:latest .",
"start:dev": "docker rm mdn_${npm_package_name}_dev_container || true && docker run --rm -it -p 8080:8080 --mount type=bind,src=`pwd`,dst=/usr/src/app -v /usr/src/app/node_modules --name mdn_${npm_package_name}_dev_container martindevnow/${npm_package_name}_dev:latest"
```
Tag: `v0.2.2`
First of all, in our `build:dev` command, we now use `${npm_package_name}`. This is an environment variable that npm sets when running an npm command in this repo. It is taken from the `name` field of the `package.json`. You'll see this used throughout the remainder of this series.
Secondly, we added a `--name` flag to our `start:dev` command. This lets us reference the container by name later. We start the command with `docker rm mdn_${npm_package_name}_dev_container || true`. Without the `|| true`, if there were no existing container, the npm command would fail and we wouldn't create our container. This is because we used `&&` to chain our commands together; if the first one fails, the second doesn't execute. That's why we added `|| true`.
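You can see this pattern in isolation with plain shell, no Docker required; here `false` stands in for a `docker rm` that fails because no container exists:

```shell
set +e  # don't abort the script on the intentional failure below

# Without `|| true`, the failure short-circuits the `&&` chain,
# so the second command never runs:
false && echo "docker run happens"            # prints nothing

# With `|| true`, the failure is swallowed and the chain continues:
false || true && echo "docker run happens"    # prints "docker run happens"
```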
Finally, we also added the `--rm` and `-it` flags to our `start:dev` command. The `--rm` flag tells Docker to remove the container when it is stopped. The `-it` flag keeps the terminal live and interactive once the container is started.
Let's build a fresh copy and run it. You should see something like:

```
Error: No such container: mdn_docker-demo_dev_container
```

This is perfectly fine; it's our failsafe container removal. Our `|| true &&` allowed us to bypass this potential error and proceed with creating a new dev container.
If you try changing one of your templates, you'll see everything is still functioning as it should.
While we won't go through deploying in this article, we need to plan out what our production Docker image should look like. With Vue, we run different commands to build for development vs. production. The same will be true for our Docker images.
First, let's set up the core of our production Dockerfile. But before we can do that, we need to do something with our current Dockerfile, or else we'll have a name collision. Let's rename our existing `Dockerfile` to `Dockerfile.dev`. Of course, we'll need to slightly update our npm build command: add `-f Dockerfile.dev` right before the `.` at the end. Your command should look like this:

```json
"build:dev": "docker build -t martindevnow/${npm_package_name}_dev:latest -f Dockerfile.dev ."
```
Tag: `v0.2.3`
In our project root, let's create the new `Dockerfile`. At the beginning, it will look very similar to what we had previously for dev.
```dockerfile
FROM node:9.11.1
ENV NODE_ENV=production
COPY package*.json /tmp/
RUN cd /tmp && CI=true npm install
WORKDIR /usr/src/app
COPY . /usr/src/app/
RUN cp -a /tmp/node_modules /usr/src/app/
RUN npm run build
```
So far, it looks very similar to our dev Dockerfile. The main difference is the `RUN npm run build` command at the end. Here, we are building for production, so we run the build script inside the image itself to produce the final assets at build time. This is different from using `CMD` like we did above, which only runs when a container starts.
Let's also add a command to build this to our `package.json`:

```json
"build:prod": "docker build -t martindevnow/${npm_package_name}:latest ."
```
Tag: `v0.3.0`
So, if we build it, we now have an image. And if we run it, we get nothing. At this point in the game, I'd like to introduce Docker's multi-stage builds.
Note: this feature requires Docker v17.05 or greater.
Basically, multi-stage builds allow us to have multiple images in one Dockerfile. The nice part is that the stages are easily targetable from the command line, and later stages can build on top of earlier ones or pull pieces from them.
Let's add a `builder` stage and extend it with a `prodBuild` stage in our latest Dockerfile. Here's our new Dockerfile:
```dockerfile
# Builder
FROM node:9.11.1 as builder
ENV NODE_ENV=production
COPY package*.json /tmp/
RUN cd /tmp && CI=true npm install
WORKDIR /usr/src/app
COPY . /usr/src/app/
RUN cp -a /tmp/node_modules /usr/src/app/
RUN npm run build

# Make production build
FROM node:9.11.1 as prodBuild
RUN npm install -g http-server
WORKDIR /app
COPY --from=builder /usr/src/app/dist .
EXPOSE 80
CMD [ "http-server", "-p", "80", "/app" ]
```
Tag: `v0.3.1`
We can see that we now have two `FROM` lines, and they're both aliased to a name we can refer to. Our first stage is `builder` and the final stage is `prodBuild`. In the `prodBuild` stage, you can see we copy the `dist` folder from our `builder` stage. This keeps our final production image as slim as possible: there are no `node_modules` in our final image, no source code, only the final `dist` folder. We then install a simple `http-server` in our production image to serve the folder containing our dist files.
Even though we've added stages, we don't need to update our npm command. If we don't specify a target (via `--target`), Docker runs through the entire file by default, producing the final stage.
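If you ever do want to stop at an intermediate stage, say, to debug the build, the `--target` flag selects it (a sketch with hypothetical tag names):

```shell
# Build and tag only the `builder` stage
docker build --target builder -t martindevnow/docker-demo:builder .

# No --target: build through to the final stage (prodBuild)
docker build -t martindevnow/docker-demo:latest .
```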
Let's build our image with our `npm run build:prod` command. Once that completes, let's start a container with the following command:

```shell
docker run -it -p 8000:80 martindevnow/docker-demo:latest
```
We can now go to localhost:8000 to see our site running in production mode!
Note: we're not creating an npm command for this. We'll dig deeper into production in the next part of this series.
One thing we haven't thought about yet is how our developers will run their unit tests.
Right now, they'd have to run a `docker exec` command after the dev container is already running. Okay, no big deal; let's add a command for them to make it a little easier.

```json
"start:unit": "docker exec -it mdn_${npm_package_name}_dev_container npm run test:unit",
```
Tag: `v0.3.2`
Hopefully you're starting to see that we have two classifications of npm commands: those meant to be run outside a container (on the host machine) and those meant to be run inside the container (i.e. using `vue-cli-service`).
Now, as long as our developers are actively developing, they can run `npm run start:unit` in a new terminal and it will run the test suite against the current version of the app.
We now have a way to build our development images and run them locally while maintaining the convenience of Hot Module Replacement to keep our development workflow efficient. We also have a command we can easily run to execute our unit tests. And finally, we've set up the core of the production image that we'll deploy to our server in the next part of this series.
In the next part of this series, we'll cover CircleCI and Cypress to build a CI/CD pipeline that supports Docker. We'll also configure a DigitalOcean server to deploy our "production" image.