@francisco-rojas
Last active February 20, 2019 18:50
The Docker Book

Chapter 1: Introduction

  • Containers instead run in user space on top of an operating system’s kernel. As a result, container virtualization is often called operating system-level virtualization.
  • Container technology allows multiple isolated user space instances to be run on a single host.
  • Containers can generally only run the same or a similar guest operating system as the underlying host.
  • Containers have also been seen as less secure than the full isolation of hypervisor virtualization. Countering this argument: lightweight containers lack the larger attack surface of the full operating system a virtual machine requires, along with the potential exposures of the hypervisor layer itself.
Docker Images

Images are the building blocks of the Docker world. You launch your containers from images. Images are the “build” part of Docker’s life cycle. You can consider images to be the “source code” for your containers.

Docker Registries

Docker stores the images you build in registries. There are two types of registries: public and private. Docker, Inc., operates the public registry for images, called the Docker Hub.

Docker Containers

Containers are launched from images and can contain one or more running processes. You can think about images as the building or packing aspect of Docker and the containers as the running or execution aspect of Docker.

Each container contains a software image – its ‘cargo’ – and, like its physical counterpart, allows a set of operations to be performed. For example, it can be created, started, stopped, restarted, and destroyed.

Compose, Swarm and Kubernetes

Docker Compose allows you to run stacks of containers to represent application stacks, for example web server, application server and database server containers running together to serve a specific application.

Docker Swarm allows you to create clusters of containers, called swarms, that allow you to run scalable workloads. Kubernetes, originally developed at Google, is a widely used open-source container orchestration system that provides similar scheduling and scaling capabilities.

Chapter 3: Getting Started with Docker

  • Docker has a client-server architecture. It ships two binaries: the Docker server, provided via the dockerd binary, and the docker binary, which acts as a client. The docker client passes requests to the Docker daemon (e.g., asking it to return information about itself) and then processes the responses it gets back.
  • The docker run command provides all of the “launch” capabilities for Docker.
List docker containers
docker ps     # lists running containers
docker ps -a  # lists all containers
docker ps -n x  # lists the last x containers, running or stopped
The docker run command
docker run -i -t ubuntu /bin/bash
root@f7cbdac22a02:/#
  • The -i flag keeps STDIN open from the container, even if we’re not attached to it.
  • The -t flag tells Docker to assign a pseudo-tty to the container. This provides us with an interactive shell in the new container.
  • Next, we told Docker which image to use to create a container, in this case the ubuntu image. The ubuntu image is a stock image, also known as a “base” image, provided by Docker, Inc., on the Docker Hub registry. You can use base images as the basis for building your own images on the operating system of your choice.
  • Finally, we told Docker which command to run in our new container, in this case launching a Bash shell with the /bin/bash command.
  • The container only runs for as long as the command specified, /bin/bash, is running. Once you exit the container, that command ends, and the container is stopped. The container still exists; we can show a list of current containers using the docker ps -a command.
  • Note: there is also the docker create command which creates a container but does not run it.
  • For more options run: docker help run.
  • You can delete a container using the docker rm command.
$ sudo docker rm 80430f8d0921
80430f8d0921

# or if the container is running

$ sudo docker rm -f 80430f8d0921
80430f8d0921
Container naming
  • Docker will automatically generate a name at random for each container we create.
  • Container names are unique. You must delete a container using the docker rm command before reusing the name.
  • To add a custom name use the --name option:
$ sudo docker run --name bob_the_container -i -t ubuntu /bin/bash
root@aa3f365f0f4e:/# exit
Starting/Stopping containers
  • Start containers with:
$ sudo docker start bob_the_container
# or
$ sudo docker start aa3f365f0f4e
# or
$ sudo docker restart aa3f365f0f4e
  • Stop containers with:
# Sends SIGTERM
$ sudo docker stop daemon_dave
# or
$ sudo docker stop c2c4e57c12c4

# Sends SIGKILL
$ sudo docker kill daemon_dave
Attaching to a container
  • A container will restart with the same options we specified when we launched it with the docker run command. So there is an interactive session waiting on our running container, and we can reattach to that session using the docker attach command.
$ sudo docker attach bob_the_container
# or
$ sudo docker attach aa3f365f0f4e
Creating daemonized containers
$ sudo docker run --name daemon_dave -d ubuntu /bin/sh -c "while
true; do echo hello world; sleep 1; done"
1333bb1a66af402138485fe44a335b382c09a887aa9f95cb9725e309ce5b7db3
  • The -d flag tells Docker to detach the container and run it in the background.
  • To see the output being printed by the container we can use the docker logs command. The docker logs command fetches the logs of a container.
$ sudo docker logs daemon_dave
hello world
hello world
hello world
hello world
hello world
hello world
hello world
. . .
  • Docker will output the last few log entries and then return. We can also follow a container’s logs, much as the tail -f command does, by using the -f flag.
  • You can get the last ten lines of a log by using docker logs --tail 10 daemon_dave.
  • To make debugging a little easier, we can also add the -t flag to prefix our log entries with timestamps.
$ sudo docker logs -ft daemon_dave
2016-08-02T03:31:16.743679596Z hello world
2016-08-02T03:31:17.744769494Z hello world
2016-08-02T03:31:18.745786252Z hello world
2016-08-02T03:31:19.746839926Z hello world
. . .
  • The --log-driver=syslog option disables the docker logs command and redirects all container log output to Syslog.
  • The --log-driver=none option disables all logging for the container, which also disables the docker logs command.
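As a sketch of how the log driver is selected at container start (container names here are illustrative; the --log-driver flag is passed to docker run):

```
# Send this container's output to the host's Syslog;
# `docker logs` will no longer work for it
$ sudo docker run --log-driver=syslog --name daemon_dwayne -d ubuntu \
  /bin/sh -c "while true; do echo hello world; sleep 1; done"

# Disable logging for the container entirely
$ sudo docker run --log-driver=none --name quiet_dave -d ubuntu \
  /bin/sh -c "while true; do echo hello world; sleep 1; done"
```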
Inspect container processes
  • We can inspect the processes running inside the container with:
$ sudo docker top daemon_dave
PID USER COMMAND
977 root /bin/sh -c while true; do echo hello world; sleep 1;done
1123 root sleep 1
  • docker stats shows statistics for one or more running Docker containers
$ sudo docker stats daemon_dave daemon_dwayne
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O BLOCK I/O
daemon_dave 0.14% 212 KiB/994 MiB 0.02% 5.062 KiB/648 B 1.69 MB / 0 B
daemon_dwayne 0.11% 216 KiB/994 MiB 0.02% 1.402 KiB/648 B 24.43 MB / 0 B
  • The docker inspect command will interrogate our container and return its configuration information, including names, commands, networking configuration, and a wide variety of other useful data.
$ sudo docker inspect daemon_alice
[{
    "ID": "c2c4e57c12c4c142271c031333823af95d64b20b5d607970c334784430bcbd0f",
    "Created": "2014-05-10T11:49:01.902029966Z",
    "Path": "/bin/sh",
    "Args": [
        "-c",
        "while true; do echo hello world; sleep 1; done"
    ],
    "Config": {
        "Hostname": "c2c4e57c12c4",
. . .
  • We can also selectively query the inspect results hash using the -f or --format flag.
$ sudo docker inspect --format='{{ .State.Running }}' daemon_alice
true

# or

$ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' daemon_alice
172.17.0.2
Running a process inside an already running container
  • Running a background task inside a container
$ sudo docker exec -d daemon_dave touch /etc/new_config_file
  • We can also run interactive tasks like opening a shell inside our daemon_dave container.
sudo docker exec -t -i daemon_dave /bin/bash
Automatic container restarts
  • The --restart flag checks the container’s exit code and decides whether or not to restart the container.
$ sudo docker run --restart=always --name daemon_alice -d ubuntu \
  /bin/sh -c "while true; do echo hello world; sleep 1; done"

# or

$ sudo docker run --restart=on-failure:5 --name daemon_alice -d ubuntu \
  /bin/sh -c "while true; do echo hello world; sleep 1; done"
  • on-failure restarts the container only if it exits with a nonzero exit code; it also accepts an optional restart count (e.g., on-failure:5).

Chapter 4: Working with Docker images and repositories

Listing Docker images
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu latest c4ff7513909d 6 days ago 225.4 MB

# or to list only fedora images

$ sudo docker images fedora
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
fedora 21 7d3f07f8de5f 6 weeks ago 374.1 MB
  • Local images live on our local Docker host in the /var/lib/docker directory.
  • All your containers live in the /var/lib/docker/containers directory.
$ sudo docker pull ubuntu:16.04
16.04: Pulling from library/ubuntu
Digest: sha256:c6674c44c6439673bf56536c1a15916639c47ea04c3d6296c5df938add67b54b
Status: Downloaded newer image for ubuntu:16.04
  • The docker pull command pulls down the Ubuntu 16.04 image from the ubuntu repository.
  • Each tag marks together a series of image layers that represent a specific image (e.g., the 16.04 tag collects together all the layers of the Ubuntu 16.04 image).
  • You can refer to a specific image inside a repository by suffixing the repository name with a colon and a tag name.
  • With the docker run command, if the image isn’t already present locally, Docker will download it from the Docker Hub. By default, if you don’t specify a tag, Docker downloads the latest tag.
$ sudo docker run -t -i --name new_container ubuntu:16.04 /bin/bash
root@79e36bff89b4:/#
Searching for images
  • We can also search all of the publicly available images on Docker Hub using the docker search command or the Docker Hub website:
$ sudo docker search puppet
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
macadmins/puppetmaster Simple puppetmaster 21 [OK]
devopsil/puppet Dockerfile for a 18 [OK]
. . .
Building our own images
  • Log in to your Docker Hub account:
$ docker login
Username: franciscorojas
Password: 
Login Succeeded
  • You can use the docker logout command to log out from a registry server.
Using Docker commit to create images
  • You can think about this method as much like making a commit in a version control system. We create a container, make changes to that container as you would change code, and then commit those changes to a new image.
  • The docker commit command only commits the differences between the image the container was created from and the current state of the container. This means updates are lightweight.
# create container
$ sudo docker run -i -t ubuntu /bin/bash

# make changes in the container
root@4aab3ce3cb76:/# apt-get -yqq update
...
root@4aab3ce3cb76:/# apt-get -y install apache2
...
# exit the container and commit the change
exit
$ sudo docker commit 4aab3ce3cb76 franciscorojas/apache2
...

# or 
$ sudo docker commit -m "A new custom image" -a "Francisco Rojas" \
  4aab3ce3cb76 franciscorojas/apache2:webserver
...

# to run a container from the image created
$ sudo docker run -t -i franciscorojas/apache2:webserver /bin/bash
root@9c2d3a843b9e:/# service apache2 status
* apache2 is not running
Building images with a Dockerfile
# Version: 0.0.1
FROM ubuntu:18.04
LABEL maintainer="josefcorojas@gmail.com"
RUN apt-get update; apt-get install -y nginx
RUN echo 'Hi, I am in your container' \
    >/var/www/html/index.html
EXPOSE 80
$ sudo docker build -t="franciscorojas/static_web" .
# or
$ sudo docker build -t="franciscorojas/static_web:v1" .
# or
$ sudo docker build -t="franciscorojas/static_web:v1" \
github.com/francisco-rojas/docker-static_web
# or 
$ sudo docker build -t="franciscorojas/static_web:v1" -f /path/to/Dockerfile .
# or skip cache to build the image from zero
$ sudo docker build --no-cache -t="franciscorojas/static_web" .
  • If an instruction fails during the build process, you can debug the issue by using the docker run command to create a container from the image ID of the last step that succeeded.
  • Add package repositories and package updates near the top of the Dockerfile so that those steps hit the build cache.
$ cd static_web
$ sudo docker build -t="jamtur01/static_web" .
Sending build context to Docker daemon 2.56 kB
Sending build context to Docker daemon
Step 1 : FROM ubuntu:18.04
---> 8dbd9e392a96
Step 2 : LABEL maintainer="james@example.com"
---> Running in d97e0c1cf6ea
---> 85130977028d
Step 3 : RUN apt-get update
---> Running in 85130977028d
---> 997485f46ec4 							# last successful instruction
Step 4 : RUN apt-get install -y ngin
---> Running in ffca16d58fd8
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package ngin
2014/06/04 18:41:11 The command [/bin/sh -c apt-get install -y
ngin] returned a non-zero code: 100
# connect to the container and run the commands that failed to debug the issue
$ sudo docker run -t -i 997485f46ec4 /bin/bash
dcge12e59fe8:/#
Using the build cache for templating
FROM ubuntu:18.04
LABEL maintainer="francisco@example.com"
ENV REFRESHED_AT 2016-07-01
RUN apt-get -qq update
  • Docker then resets the cache when it hits the modified ENV instruction and runs every subsequent instruction anew without relying on the cache.
# to list the new image
$ sudo docker images franciscorojas/static_web
REPOSITORY TAG ID CREATED SIZE
franciscorojas/static_web latest 22d47c8cb6e5 24 seconds ago 12.29 kB
(virtual 326 MB)

# to check how an image was built
$ sudo docker history 22d47c8cb6e5
IMAGE CREATED CREATED BY SIZE
22d47c8cb6e5 6 minutes ago /bin/sh -c #(nop) EXPOSE map[80/tcp:{}] 0 B
b584f4ac1def 6 minutes ago /bin/sh -c echo 'Hi, I am in your container' 27 B
93fb180f3bc9 6 minutes ago /bin/sh -c apt-get install -y nginx 18.46 MB
9d938b9e0090 6 minutes ago /bin/sh -c apt-get update 20.02 MB
4c66c9dcee35 6 minutes ago /bin/sh -c #(nop) MAINTAINER James Turnbull " 0 B
. . .
Launching a container from our new image
# By default Docker binds a random port on the local host and maps it to the specified port in the container
$ sudo docker run -d -p 80 --name static_web franciscorojas/static_web nginx -g "daemon off;"
6751b94bb5c001a650c918e9a7f9683985c3eb2b026c2f1776e61190669494a8
  • The -p flag manages which network ports Docker publishes at runtime.
  • When you run a container, Docker has two methods of assigning ports on the Docker host:
    • Docker can randomly assign a high port from the range 32768 to 61000 on the Docker host that maps to port 80 on the container.
    • You can specify a specific port on the Docker host that maps to port 80 on the container
  • Look at what port has been assigned using the docker ps command or the docker port command.
$ sudo docker ps -l
CONTAINER ID IMAGE ... PORTS NAMES
6751b94bb5c0 franciscorojas/static_web:latest ... 0.0.0.0:49154->80/tcp static_web

$ sudo docker port 6751b94bb5c0 80
0.0.0.0:49154
  • Other options for the -p option are:
# binds port 80 on the container to port 8080 on the local host
$ sudo docker run -d -p 8080:80 --name static_web_80 franciscorojas/static_web nginx -g "daemon off;"

# binds port 80 of the container to port 80 on the 127.0.0.1 interface on the local host.
$ sudo docker run -d -p 127.0.0.1:80:80 --name static_web_lb franciscorojas/static_web nginx -g "daemon off;"

# binds a random port on 127.0.0.1 on the host to port 80 on the container
$ sudo docker run -d -p 127.0.0.1::80 --name static_web_random franciscorojas/static_web nginx -g "daemon off;"
  • Docker also has a shortcut, -P, that allows us to publish all ports we’ve exposed via EXPOSE instructions in our Dockerfile.
$ sudo docker run -d -P --name static_web franciscorojas/static_web nginx -g "daemon off;"

This would publish port 80 on a random port on our local host.

Dockerfile instructions

CMD

The CMD instruction specifies the command to run when a container is launched. It is similar to the RUN instruction, but rather than running the command while the image is being built, it specifies the command to run when a container is launched from the image, much like passing a command on the docker run command line.

# using command
docker run -i -t jamtur01/static_web /bin/true

# using Dockerfile
CMD ["/bin/bash", "-l"]

# Running the container: notice that the command to execute is not specified here because it will take it from the Dockerfile
docker run -t -i franciscorojas/test 

NOTES:

  • It is recommended that you always use the array syntax.
  • You can override the CMD instruction by passing the command to execute to docker run, like: docker run -i -t franciscorojas/test /bin/ps
  • You can only specify one CMD instruction in a Dockerfile. If more than one is specified, then the last CMD instruction will be used.

ENTRYPOINT

The ENTRYPOINT instruction provides a command that isn’t as easily overridden. Instead, any arguments we specify on the docker run command line will be passed as arguments to the command specified in the ENTRYPOINT.

# Dockerfile
ENTRYPOINT ["/usr/sbin/nginx"]

# Running the container: notice that the command is not passed but only the options that will be used for the ENTRYPOINT command
docker run -t -i franciscorojas/static_web -g "daemon off;"

# Running the container: the previous would be the equivalent to having this in the Docker file and running without any commands/options
ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]

NOTE: you can override the ENTRYPOINT instruction using the docker run command with --entrypoint flag.
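For instance, the nginx ENTRYPOINT above could be bypassed at runtime to get a shell instead (a sketch using the image name from this chapter):

```
# --entrypoint replaces /usr/sbin/nginx with /bin/bash for this run
docker run -t -i --entrypoint /bin/bash franciscorojas/static_web
```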

ENTRYPOINT + CMD

This allows us to build in a default command to execute when our container is run combined with overridable options and flags on the docker run command line.

# Dockerfile
ENTRYPOINT ["/usr/sbin/nginx"]
CMD ["-h"]

This will ensure that the nginx help is displayed if the container is run without any options.

WORKDIR

The WORKDIR instruction provides a way for commands to be executed in a specific folder. We can use it to set the working directory for a series of instructions or for the final container.

# Dockerfile
WORKDIR /opt/webapp/db
RUN bundle install # this runs bundle install within /opt/webapp/db
WORKDIR /opt/webapp
ENTRYPOINT [ "rackup" ] # this command will run in /opt/webapp, as will all remaining commands

NOTE: You can override the working directory at runtime with the -w flag.
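A quick sketch of that runtime override, using the stock ubuntu image:

```
# Run the container with /var/log as the working directory,
# overriding any WORKDIR baked into the image; pwd prints /var/log
docker run -t -i -w /var/log ubuntu pwd
```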

ENV

The ENV instruction is used to set environment variables during the image build process.

ENV RVM_PATH /home/rvm/

This new environment variable will be used for any subsequent RUN instructions, as if we had specified an environment variable prefix to a command like so:

RUN gem install unicorn

which will be the same as:

RVM_PATH=/home/rvm/ gem install unicorn

We can also use these environment variables in other instructions

ENV TARGET_DIR /opt/app
WORKDIR $TARGET_DIR

These environment variables will also be persisted into any containers created from your image.

NOTE: You can also pass environment variables on the docker run command line using the -e flag.
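A sketch of the -e flag (the WEB_PORT variable is only an example and exists solely for this container's runtime environment, not in the image):

```
# env will list WEB_PORT=8080 among the container's environment variables
docker run -t -i -e "WEB_PORT=8080" ubuntu env
```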

USER

The USER instruction specifies a user that the image should be run as:

USER nginx

This will cause containers created from the image to be run by the nginx user. Notes:

  • You can also override this at runtime by specifying the -u flag with the docker run command.
  • The default user if you don’t specify the USER instruction is root.

VOLUME

The VOLUME instruction adds volumes to any container created from the image.

  • Volumes can be shared and reused between containers.
  • A container doesn’t have to be running to share its volumes.
  • Changes to a volume are made directly.
  • Changes to a volume will not be included when you update an image.
  • Volumes persist even if no containers use them.

This allows us to add data (like source code), a database, or other content into an image without committing it to the image and allows us to share that data between containers.

VOLUME ["/opt/project"]

# or to specify multiple volumes
VOLUME ["/opt/project", "/data" ]

Note: Also useful and related is the docker cp command. This allows you to copy files to and from your containers.
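A sketch of docker cp in both directions, reusing the container name from earlier in these notes (the app.conf file is hypothetical):

```
# Copy a file out of a container (running or stopped) to the host...
docker cp bob_the_container:/etc/hosts ./hosts

# ...and copy a local file into the container
docker cp ./app.conf bob_the_container:/etc/app.conf
```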

ADD

The ADD instruction adds files and directories from our build environment into our image.

ADD software.lic /opt/application/software.lic

This ADD instruction will copy the file software.lic from the build directory to /opt/application/software.lic in the image. The source of the file can be a URL, filename, or directory as long as it is inside the build context or environment.

NOTES:

  • When ADD’ing files Docker uses the ending character of the destination to determine what the source is. If the destination ends in a /, then it considers the source a directory. If it doesn’t end in a /, it considers the source a file.
  • Docker will automatically unpack local archives in a recognized format (tar, gzip, bzip2, xz).
  • If a file or directory with the same name already exists in the destination, it will not be overwritten.
  • Finally, if the destination doesn’t exist, Docker will create the full path.
  • New files and directories will be created with a mode of 0755 and a UID and GID of 0.
  • The build cache can be invalidated by ADD instructions.
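Building on the archive note above, a small sketch (the archive name latest.tar.gz is hypothetical; auto-extraction applies to local archives from the build context, not to URL sources):

```
# latest.tar.gz from the build context is automatically
# unpacked into /var/www/wordpress/ rather than copied as-is
ADD latest.tar.gz /var/www/wordpress/
```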

COPY

The COPY instruction is closely related to the ADD instruction. The key difference is that the COPY instruction is purely focused on copying local files from the build context and does not have any extraction or decompression capabilities.

COPY conf.d/ /etc/apache2/

NOTE: The source of the files must be the path to a file or directory relative to the build context, the local source directory in which your Dockerfile resides. You cannot copy anything that is outside of this directory, because the build context is uploaded to the Docker daemon, and the copy takes place there. Anything outside of the build context is not available. The destination should be an absolute path inside the container.

LABEL

The LABEL instruction adds metadata to a Docker image. The metadata is in the form of key/value pairs.

LABEL version="1.0"
LABEL location="New York" type="Data Center" role="Web Server"

NOTE: You can inspect the labels on an image using the docker inspect command.

STOPSIGNAL

The STOPSIGNAL instruction sets the system call signal that will be sent to the container when you tell it to stop. This signal can be a valid number from the kernel syscall table, for instance 9, or a signal name in the format SIGNAME, for instance SIGKILL.
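A minimal sketch of the instruction in a Dockerfile:

```
# Send SIGTERM on `docker stop`, giving the main process
# a chance to shut down cleanly before Docker escalates
STOPSIGNAL SIGTERM
```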

ARG

The ARG instruction defines variables that can be passed at build-time via the docker build command. This is done using the --build-arg flag. You can only specify build-time arguments that have been defined in the Dockerfile.

ARG build
ARG webapp_user=user

The second ARG instruction sets a default, if no value is specified for the argument at build-time then the default is used.

docker build --build-arg build=1234 -t franciscorojas/webapp .

NOTE: DON'T PASS ANY CREDENTIALS THIS WAY!!! Your credentials will be exposed during the build process and in the build history of the image.

SHELL

The SHELL instruction allows the default shell used for the shell form of commands to be overridden.
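A small sketch of switching the default shell to bash (the echo line is only illustrative):

```
# Subsequent shell-form instructions run under bash instead of /bin/sh
SHELL ["/bin/bash", "-c"]
RUN echo "running under $BASH_VERSION"
```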

HEALTHCHECK

The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working correctly. We can see the state of the health check with the docker inspect command: docker inspect --format '{{ .State.Health.Status }}' <container>.
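A minimal sketch of the instruction, assuming curl is installed in the image and the service listens on port 80:

```
# Probe the web server every 30 seconds; after 3 consecutive
# failures Docker marks the container as unhealthy
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
```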

ONBUILD

The ONBUILD instruction adds triggers to images. A trigger is executed when the image is used as the basis of another image. The trigger inserts a new instruction in the build process, as if it were specified right after the FROM instruction. The trigger can be any build instruction.

ONBUILD ADD . /app/src
ONBUILD RUN cd /app/src; make

This would add an ONBUILD trigger to the image being created, which we see when we run docker inspect on the image.

FROM ubuntu:18.04
LABEL maintainer="james@example.com"
RUN apt-get update; apt-get install -y apache2
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ONBUILD ADD . /var/www/
EXPOSE 80
ENTRYPOINT ["/usr/sbin/apachectl"]
CMD ["-D", "FOREGROUND"]

We now have an image with an ONBUILD instruction that uses the ADD instruction to add the contents of the directory we’re building from to the /var/www/ directory in our image. This could readily be our generic web application template from which I build web applications.

An initial Dockerfile for the Sample website
mkdir sample
cd sample

# Dockerfile
FROM ubuntu:18.04
LABEL maintainer="josefcorojas@gmail.com"
ENV REFRESHED_AT 2019-02-20
RUN apt-get -yqq update; apt-get -yqq install nginx   # Installs Nginx.
RUN mkdir -p /var/www/html/website                    # Creates a directory in the container
ADD global.conf /etc/nginx/conf.d/                    # Adds local file to the image
ADD nginx.conf /etc/nginx/nginx.conf                  # Adds local file to the image
EXPOSE 80                                             # Exposes port 80 on the image

# Build the image
docker build -t="franciscorojas/nginx" .

# Show history of the image build process
docker history franciscorojas/nginx

Store the application code within the website directory:

mkdir website; cd website
wget https://raw.githubusercontent.com/jamtur01/dockerbook-code/master/code/5/sample/website/index.html
cd ..

Building containers from our Sample website and Nginx image

# run container
docker run -d -p 80 --name website -v $PWD/website:/var/www/html/website franciscorojas/nginx nginx

We can use the docker ps command to check what port on the host machine maps to port 80 in the container.

Notes:

  • You can see we’ve passed the nginx command to docker run. Normally this wouldn’t make Nginx run interactively. In the configuration we supplied to Docker, though, we’ve added the directive daemon off. This directive causes Nginx to run interactively in the foreground when launched.
  • The -v option allows us to create a volume in our container from a directory on the host. Volumes are specially designated directories within one or more containers that bypass the layered Union File System to provide persistent or shared data for Docker. This means that changes to a volume are made directly and bypass the image. They will not be included when we commit or build an image. Volumes can also be shared between containers and can persist even when containers are stopped.
  • We can also specify the read/write status of the container directory by adding either rw or ro after that directory:
docker run -d -p 80 --name website -v $PWD/website:/var/www/html/website:ro franciscorojas/nginx nginx

This would make the container directory /var/www/html/website read-only.

Using Docker to build and test a web application

Building our Sinatra application

mkdir -p sinatra
cd sinatra

# Dockerfile
FROM ubuntu:18.04
LABEL maintainer="josefcorojas@gmail.com"
ENV REFRESHED_AT 2019-02-20
RUN apt-get update -yqq; apt-get -yqq install ruby ruby-dev build-essential redis-tools # install ruby and redis
RUN gem install --no-rdoc --no-ri sinatra json redis                                    # install sinatra, json and redis gems
RUN mkdir -p /opt/webapp                                                                # create folder in container
EXPOSE 4567                                                                             # expose WEBrick port  
CMD [ "/opt/webapp/bin/webapp" ]                                                        # binary to launch app

# build the image from docker file
docker build -t="franciscorojas/sinatra" .

Copy the application code into the sinatra/webapp folder. We also need to ensure that the webapp/bin/webapp binary is executable before using it, via the chmod command: chmod +x webapp/bin/webapp

Launch the container:

docker run -d -p 4567 --name webapp -v $PWD/webapp:/opt/webapp franciscorojas/sinatra

Notice we’ve not provided a command to run on the command line; instead, we’re using the command we specified via the CMD instruction in the Dockerfile of the image.

We can check the logs: docker logs -f webapp
We can check the processes in the container: docker top webapp
We can check the port mapping: docker port webapp 4567

Extending our Sinatra application to use Redis

Copy your app code to webapp_redis Make the binary executable: chmod +x webapp_redis/bin/webapp

Building a Redis database image

mkdir redis
cd redis

# Dockerfile
FROM ubuntu:18.04
LABEL maintainer="josefcorojas@gmail.com"
ENV REFRESHED_AT 2019-02-20
RUN apt-get -yqq update; apt-get -yqq install redis-server redis-tools  # install redis
EXPOSE 6379                                                             # expose redis' port
ENTRYPOINT ["/usr/bin/redis-server" ]                                   # start redis
CMD ["--protected-mode no"]                                             # use this option to keep Redis from entering protected mode

# Build the image
docker build -t franciscorojas/redis .

# Run container from image
docker run -d -p 6379 --name redis franciscorojas/redis

# check port mapping
docker port redis 6379
#=> 0.0.0.0:49161

Install the redis-tools package on the ubuntu image: apt-get -y install redis-tools

$ redis-cli -h 127.0.0.1 -p 49161
redis 127.0.0.1:49161>

Connecting our Sinatra application to the Redis container

There are two ways we could do this:

  • [NOT RECOMMENDED] Docker’s own internal network.
  • [RECOMMENDED] From Docker 1.9 and later, using Docker Networking and the docker network command.

Docker networking

To use Docker networks we first need to create a bridged network called app and then launch a container inside that network.

docker network create app
ec8bc3a70094a1ac3179b232bc185fcda120dad85dec394e6b5b01f7006476d4

We can then inspect this network using the docker network inspect command.

docker network inspect app
[{
	"Name": "app",
	"Id": "ec8bc...",
	"Scope": "local",
	"Driver": "bridge",
	"IPAM": {
		"Driver": "default",
		"Config": [{..}]
	},
	"Containers": {},
	"Options": {}
}]

In addition to bridge networks, which exist on a single host, we can also create overlay networks, which allow us to span multiple hosts.

You can list all current networks using the docker network ls command.

docker network ls
NETWORK ID NAME DRIVER
a74047bace7e bridge bridge
ec8bc3a70094 app bridge
8f0d4282ca79 none null
7c8cd5d23ad5 host host

You can remove a network using the docker network rm command.

Now let's add some containers to our network, starting with a Redis container.

docker run -d --net=app --name db franciscorojas/redis

Here we’ve run a new container called db using our franciscorojas/redis image. We’ve also specified a new flag: --net. The --net flag specifies a network to run our container inside.

Docker also adds the network name as a DNS domain suffix, so any host in the app network can be resolved as hostname.app, here db.app. Running ping db.app is the same as running ping 172.18.0.2.

We could now start our application and have our Sinatra application write its variables into Redis via the connection between the db and webapp containers that we’ve established via the app network.

redis = Redis.new(:host => 'db', :port => '6379')

Launch the app to confirm it connects to redis:

docker run -d -p 4567 --net=app --name webapp_redis -v $PWD/webapp_redis:/opt/webapp franciscorojas/sinatra

You can disconnect containers from a network: docker network disconnect app db

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment