Last active September 13, 2021 21:12
Docker Cheatsheet

N. P. O'Donnell, 2020

Files & Directories

  • Containers are stored in /var/lib/docker/containers
  • Easiest to use su first if snooping around /var/lib/docker
  • Each container has a config.v2.json config file

Working with images

Build an image from a Dockerfile

docker build [-t <tag>] <Dockerfile location>


docker build -t my-webapp:dev .

Dockerfile format

  • Starts with FROM <baseimage:tag> eg. FROM fedora:21
  • FROM scratch means start with no base image
  • Comments begin with #
  • WORKDIR creates and cd's into a dir. Subsequent commands are performed in this dir. Owned by root.
  • RUN runs command during build. eg. RUN apt-get update
  • To add files/dirs use COPY rather than ADD unless you really need ADD
  • CMD means execute a command. eg. CMD ["/bin/bash"]
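
A minimal Dockerfile combining these directives might look like the sketch below (the base image and packages are illustrative, not from the original):

```dockerfile
# Start from a small base image
FROM alpine:3.18

# Creates /app and cd's into it; later commands run here
WORKDIR /app

# RUN executes at build time
RUN apk add --no-cache curl

# Prefer COPY over ADD for plain files and dirs
COPY . .

# CMD is the default command when a container starts
CMD ["/bin/sh"]
```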

Once the image is built, a container can be created based on the image. Think of an image as a program or blueprint, and a container as a process or instance. Containers have state, images do not.

Creating Containers

Create a container

docker create my-webapp

Create a container (specific version)

docker create my-webapp:dev

Create interactive Ubuntu container (stays running)

docker create -it ubuntu # -i = interactive, -t = allocate a pseudo TTY

Starting/Stopping Containers

Start a container by name

docker start thirsty_pare

Start a container by ID

docker start 701d51


List available images

docker images

List all containers (running and stopped)

docker ps -a

Attach to a container

docker attach <name or ID>

List table of processes running in a container

docker top <name or ID>

Get stats on memory/CPU/IO etc.

docker stats <name or ID>

Getting a shell in a running container

docker exec -it <name or ID> /bin/sh

List volumes

docker inspect -f '{{ .Mounts }}' <name or ID>

Deleting Things

Delete an image

docker rmi <name:[version]>

Delete a container

docker rm <name or ID>

Delete everything

docker system prune -a

Exec vs Shell forms of CMD

There are two ways to specify your CMD in a Dockerfile:

  1. Shell -- eg. CMD node index.js
  2. Exec -- eg. CMD ["node", "index.js"]

Using the shell form will cause the image's /bin/sh to be invoked, with the CMD as arguments. Crucially, this means PID 1 will be the shell itself, and it will not forward signals.

The exec form is a JSON array containing the command and its arguments. It is recommended because one fewer process runs and, more importantly, signals are forwarded to the command.
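
The two forms side by side (node and index.js stand in for any command):

```dockerfile
# Shell form: becomes /bin/sh -c "node index.js"; the shell is PID 1
CMD node index.js

# Exec form: node itself is PID 1 and receives signals directly
CMD ["node", "index.js"]
```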

Init Process

Containers can be given an init process (PID 1) if the --init option is passed to either docker create or docker run. This causes the init program to run as PID 1. The program specified in the CMD directive, or custom command, runs as a child of the init process.

The init process Docker runs to achieve this is tini. (tini github)

Tini mainly solves 2 problems:

  • Reaping zombies in the container - without tini, zombies from badly-written apps will remain in the process table, causing a resource leak. tini reaps zombies, clearing them from the process table.

  • Signal handling - without tini, if a signal such as SIGINT is sent to the app, the app will ignore the signal unless it has signal handling code. This is because the app has the special "1" process ID, which is treated differently by the kernel. With tini, the app gets a PID > 1, tini forwards the signals it receives to the app, and the app behaves normally when it receives a signal - for example it will exit on SIGINT. This means Ctrl-C will "just work".

Adding tini

Tini can be added by either:

  1. Passing --init to docker create or docker run
  2. Setting ENTRYPOINT to ["/sbin/tini", "--"] in the Dockerfile
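
A sketch of the second approach, assuming tini is installed into the image via the distribution's package manager (Alpine here, as an illustration):

```dockerfile
FROM alpine:3.18

# Install tini; Alpine places it at /sbin/tini
RUN apk add --no-cache tini

# tini runs as PID 1; "--" separates tini's options from the child command
ENTRYPOINT ["/sbin/tini", "--"]

# The app runs as a child of tini and gets a PID > 1
CMD ["/bin/sh"]
```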

More discussion on tini here


Port Forwarding

To expose a port listening inside the container to the outside world, add -p <expose port>:<container port> to docker create or docker run. For example, to expose port 3000 in the container on port 80 of the host:

docker ... -p 80:3000

Multi-Stage Builds

Stages allow multiple bases to be used in the same Dockerfile. One problem this solves is image size: build/test tools that are no longer needed once the image is built can be left out of the final image. Multi-stage builds allow the app to be built against a base with all the dev tooling (for example the golang compiler for a Go project); once built, the image is re-based on a bare-bones OS (such as alpine) and the artifacts from the previous stage are copied in. This also improves security by reducing the attack surface.

Multi-stage Dockerfiles can also be used to make development and test builds which are based off of the production build.

Good discussion on multi-stage builds here.

Naming Stages

The AS keyword can be used to name a stage in a multi-stage dockerfile, then the stage can be referenced as the base for another stage. This is useful for creating development or test builds which are based on the production build.


FROM alpine AS prod
CMD ["ls", "/"]
FROM prod AS dev
CMD ["ls", "-la", "/"]

In this example there are two stages: prod and dev. The commands to build the prod and dev stages respectively are:

docker build -t stages:prod --target prod .


docker build -t stages:dev --target dev .

To run them:

$ docker run -it stages:prod
bin    etc    lib    mnt    proc   run    srv    tmp    var
dev    home   media  opt    root   sbin   sys    usr


$ docker run -it stages:dev
total 64
drwxr-xr-x    1 root     root          4096 Jul 20 21:48 .
drwxr-xr-x    1 root     root          4096 Jul 20 21:48 ..
-rwxr-xr-x    1 root     root             0 Jul 20 21:48 .dockerenv
drwxr-xr-x    2 root     root          4096 May 29 14:20 bin

Copying Artifacts from the Previous Stage

If the project uses a compiled language such as C++, Rust, or Go, there is usually no need to have the compiler and build tools present at run time. A common pattern is to build the binaries in a build stage, replace the entire build OS with a run OS, then copy only the binaries and artifacts from the previous (build) stage, leaving behind the compiler and build tools.

The binaries, including any dynamic-linked libraries, can be copied using a COPY directive with a --from=<stage> parameter such as:

COPY --from=0 /go/src/ .

See here for more details on the COPY directive.
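
A sketch for a Go project using a named build stage (paths, stage name, and flags are illustrative):

```dockerfile
# Build stage: full Go toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# Static build so the binary runs on a bare-bones base
RUN CGO_ENABLED=0 go build -o /out/app .

# Run stage: minimal OS, no compiler or build tools
FROM alpine:3.18
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]
```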


Docker Swarm

Docker swarm can be used to control multiple docker hosts from one "manager" host.

First a manager host must be initialized. This will cause the docker host the command is run on to become a manager of that swarm:

docker swarm init

This command will print a docker swarm join command including a token which can be run on worker nodes to join the swarm. The join command has the format:

docker swarm join --token <join token> <manager address>:2377

This command may be seen at any time on the manager host by running:

docker swarm join-token worker

Once the join command has been run on a worker node, the following command (run on the manager) will print the status of all nodes on the swarm:

docker node ls