This workshop markdown file is based on https://docs.runme.dev/. If it does not appear as a notebook in your editor, install VSCode and set up the Runme extension as described at https://docs.runme.dev/getting-started/vscode. It should then look similar to a Jupyter Notebook you would see in a Python tutorial.
In this workshop, we will:
- Build and run an image as a container.
- Deploy an application using multiple containers with a database.
- Demonstrate how Docker Compose simplifies running multiple containers.
```sh
docker --version
```
Clone the getting-started-app repository:

```sh
git clone https://github.com/docker/getting-started-app.git
```
Copy the following content into a file named `Dockerfile` in the `getting-started-app` directory:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000
```
The FROM instruction initializes a new build stage and sets the base image for subsequent instructions. As such, a valid Dockerfile must start with a FROM instruction. The image can be any valid image.
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn't exist, it will be created even if it's not used in any subsequent Dockerfile instruction.
The COPY instruction copies new files or directories from `<src>` and adds them to the filesystem of the image at the path `<dest>`.
The RUN instruction will execute any commands to create a new layer on top of the current image. The added layer is used in the next step in the Dockerfile.
The CMD instruction sets the command to be executed when running a container from an image.
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. The EXPOSE
instruction doesn't actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container about which ports are intended to be published. To publish the port when running the container, use the -p flag on docker run to publish and map one or more ports.
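To make the distinction concrete, you can read the `EXPOSE` line straight back out of the Dockerfile: it is metadata only, and nothing is published until you pass `-p` to `docker run`. A minimal sketch (the temporary path is arbitrary, chosen for illustration):

```shell
# Recreate the workshop Dockerfile in a temporary location.
cat > /tmp/Dockerfile <<'EOF'
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000
EOF

# EXPOSE is documentation: it records the intended port but publishes nothing.
grep '^EXPOSE' /tmp/Dockerfile   # prints: EXPOSE 3000
```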
Build the image:

```sh
cd getting-started-app && docker build -t getting-started .
```
The docker build command uses the Dockerfile to build a new image. You might have noticed that Docker downloaded a lot of "layers". This is because you instructed the builder that you wanted to start from the node:18-alpine image. But, since you didn't have that on your machine, Docker needed to download the image.
After Docker downloaded the image, the instructions from the Dockerfile copied in your application and used yarn to install your application's dependencies. The CMD directive specifies the default command to run when starting a container from this image.
Finally, the -t flag tags your image. Think of this as a human-readable name for the final image. Since you named the image getting-started, you can refer to that image when you run a container.
The . at the end of the docker build command tells Docker that it should look for the Dockerfile in the current directory.
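Because `.` makes the whole directory the build context, Docker's build best practices suggest adding a `.dockerignore` file next to the Dockerfile so local artifacts such as `node_modules` are not sent to the builder. A minimal sketch of such a file (entries here are common choices, not prescribed by this workshop):

```
node_modules
npm-debug.log
.git
```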
Run your container using the docker run command and specify the name of the image you just created:
```sh
docker run -d -p 127.0.0.1:3000:3000 getting-started
```
The -d flag (short for --detach) runs the container in the background. This means that Docker starts your container and returns you to the terminal prompt. You can verify that a container is running by viewing it in Docker Dashboard under Containers, or by running docker ps in the terminal.
The -p flag (short for --publish) creates a port mapping between the host and the container. The -p flag takes a string value in the format of HOST:CONTAINER, where HOST is the address on the host, and CONTAINER is the port on the container. The command publishes the container's port 3000 to 127.0.0.1:3000 (localhost:3000) on the host. Without the port mapping, you wouldn't be able to access the application from the host.
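To make the `HOST:CONTAINER` format concrete, here is a small illustrative sketch that just picks the mapping string apart with plain shell tools; it is not a Docker command:

```shell
# The -p value used above, in the form HOST_ADDR:HOST_PORT:CONTAINER_PORT.
mapping="127.0.0.1:3000:3000"

host_addr=$(echo "$mapping" | cut -d: -f1)       # 127.0.0.1
host_port=$(echo "$mapping" | cut -d: -f2)       # 3000
container_port=$(echo "$mapping" | cut -d: -f3)  # 3000

echo "host ${host_addr}:${host_port} -> container ${container_port}"
```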
After a few seconds, open your web browser to http://localhost:3000. You should see your app.
Add an item or two and see that it works as you expect. You can mark items as complete and remove them. Your frontend is successfully storing items in the backend.
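Before moving on, you may want to stop and remove this container, since the next step publishes the same host port. These are standard Docker CLI commands (they need a running Docker daemon; replace `<container-id>` with the ID shown by `docker ps`):

```sh
docker ps                    # find the container ID
docker stop <container-id>   # stop the running container
docker rm <container-id>     # remove it (docker rm -f does both at once)
```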
The following command uses a bind mount, which mounts a host directory into the container. Your Node process runs inside the container, but you can now edit the code on the host and see the changes reflected in real time without building a new image.
```sh
cd getting-started-app && docker run -d --mount type=bind,src="$(pwd)",target=/app -p 127.0.0.1:3000:3000 getting-started \
  sh -c "yarn install && yarn run dev"
```
Up to this point, you've been working with single-container apps. Now you will add MySQL to the application stack. A question that often arises is: "Where will MySQL run? Install it in the same container, or run it separately?" In general, each container should do one thing and do it well. Here are a few reasons to run the database in a separate container:
- There's a good chance you'd have to scale APIs and front-ends differently than databases.
- Separate containers let you version and update each component in isolation.
- While you may use a container for the database locally, you may want to use a managed service for the database in production. You don't want to ship your database engine with your app then.
- Running multiple processes will require a process manager (the container only starts one process), which adds complexity to container startup/shutdown.
Remember that containers, by default, run in isolation and don't know anything about other processes or containers on the same machine. So, how do you allow one container to talk to another? The answer is networking. If you place the two containers on the same network, they can talk to each other.
There are two ways to put a container on a network:
- Assign the network when starting the container.
- Connect an already running container to a network.
In the following steps, you'll create the network first and then attach the MySQL container at startup.
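For reference, the second approach (connecting a container that is already running) uses `docker network connect`; these are standard Docker CLI commands, with `<container-id>` as a placeholder:

```sh
docker network connect todo-app <container-id>   # attach a running container
docker network inspect todo-app                  # see which containers are attached
```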
Create the network.
```sh
docker network create todo-app
```
Start a MySQL container and attach it to the network. You're also going to define a few environment variables that the database will use to initialize the database.
```sh
docker run -d \
  --network todo-app --network-alias mysql \
  -v todo-mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=todos \
  mysql:8.0
```
You'll notice a volume named todo-mysql-data in the above command that is mounted at /var/lib/mysql, which is where MySQL stores its data. However, you never ran a docker volume create command. Docker recognizes you want to use a named volume and creates one automatically for you.
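If you want to see that automatically created volume, the standard volume subcommands will show it (these need the Docker daemon and the volume from the step above):

```sh
docker volume ls                        # todo-mysql-data should be listed
docker volume inspect todo-mysql-data   # shows the Mountpoint on the host
```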
The todo app supports the setting of a few environment variables to specify MySQL connection settings. They are:
- MYSQL_HOST - the hostname for the running MySQL server
- MYSQL_USER - the username to use for the connection
- MYSQL_PASSWORD - the password to use for the connection
- MYSQL_DB - the database to use once connected
Specify each of the previous environment variables, as well as connect the container to your app network. Make sure that you are in the getting-started-app directory when you run this command.
```sh
cd getting-started-app && docker run -dp 127.0.0.1:3000:3000 \
  -w /app -v "$(pwd):/app" \
  --network todo-app \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=root \
  -e MYSQL_PASSWORD=secret \
  -e MYSQL_DB=todos \
  node:18-alpine \
  sh -c "yarn install && yarn run dev"
```
Now if you run `docker exec -it <mysql-container-id> mysql -p todos` and enter the password `secret` from above, you can list items by running the query `select * from todo_items;`.

`docker exec` executes a command in a running container. The `-it` flags attach an interactive terminal to the container, so after the `mysql` command starts you should land in the MySQL console.
Docker Compose lets you put all your configuration and parameters for multiple containers in one place and run it all with a single command.
Put the following content in a file named `compose.yaml` in the `getting-started-app` directory:

```yaml
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:
```
Run the following command:
```sh
cd getting-started-app && docker compose up -d
```
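A few companion commands you will likely want once the stack is up; all are standard Docker Compose CLI commands, run from the same directory:

```sh
docker compose logs -f         # follow logs from all services
docker compose ps              # list the stack's containers
docker compose down            # tear the stack down (named volumes are kept)
docker compose down --volumes  # tear down and also remove the named volumes
```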
We have covered a lot of ground in this workshop, but there is plenty more to learn about Docker containers than we could fit into the workshop's time. Here are some things you may want to familiarize yourself with to work effectively on your projects:
- Best practices on building Docker images: https://docs.docker.com/build/building/best-practices/
- Differences between Volumes and Bind Mounts: https://docs.docker.com/engine/storage/volumes/
- Differences between `ARG` and `ENV` in a `Dockerfile`: https://docs.docker.com/reference/dockerfile/
- Differences between `CMD` and `ENTRYPOINT`: https://docs.docker.com/reference/dockerfile/
- Multi-stage Docker builds: https://docs.docker.com/build/building/multi-stage/
- What are container registries (Docker Hub is one): https://docs.docker.com/registry/
- Explore the `docker exec` command and attach to a running container: https://docs.docker.com/reference/cli/docker/container/exec/
- Try the `docker logs` command and check the logs of a running container: https://docs.docker.com/reference/cli/docker/container/logs/
The above examples and explanations are mostly taken from various sections in getting started with Docker guide at https://docs.docker.com/get-started/workshop/.