
Docker Hands-on

Inspired by Tutorial

Part 1. Containers

  1. Check that docker is correctly running and that you have permission to use the engine

    docker info
    
  2. (pull) Pull an image from the official registry, eg: ubuntu:latest (you can browse https://store.docker.com if you want to find other images).

    docker pull ubuntu:latest
    

    ubuntu is the repository name, and :latest is a tag that identifies an image in the repository.

    You can check that your image is present in the docker engine:

    docker images
    
  3. (run) Run a container from this image.

    docker run ubuntu:latest
    

    (you may also write docker run ubuntu which is equivalent: :latest is the default tag if none is provided)

    Nothing happens? Actually the container has already terminated. You can display it with docker ps, but add -a/--all because non-running containers are not displayed by default.

    docker ps -a
    

    The default command of the ubuntu image is /bin/bash and by default docker containers are run without stdin (it is redirected from /dev/null). Thus bash exits immediately.
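
    If you are curious, you can check an image's default command yourself, eg (assuming the ubuntu:latest image is already pulled):

    docker image inspect --format '{{.Config.Cmd}}' ubuntu:latest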

  4. (run a command) You may override the default command by providing extra arguments after the image name. Then this command will be executed (instead of bash).

    docker run ubuntu ls /bin
    docker run ubuntu cat /etc/motd
    
  5. (stdin) Let's go back to bash, this time we want to interact with the shell. To keep stdin open, we launch the container with -i/--interactive.

    docker run -i ubuntu
    

    The container runs, but displays nothing. Actually bash is running in batch mode. You can try to execute commands (eg: ls, id, hostname...) and you will see the result.

    Bash is in batch mode because it is not running on a terminal (its stdout is a pipe, not a tty).
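
    Since stdin is kept open, you can even pipe a small script into the container (just an illustration of batch mode, not required for the exercise):

    echo 'ls /; id; hostname' | docker run -i ubuntu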

  6. (tty) To have a real interactive shell inside our container, we need to allocate a tty with -t/--tty

    docker run -t -i ubuntu
    
  7. (start) You can exit your container and display the list of all containers:

    docker ps -a
    

    It is possible to start them again with docker start. Like the run command, you may use -i to keep stdin open. Note that start expects you to specify which container you want to start. Containers may be identified either by their id (first column of docker ps) or by their name (last column). You may provide only the first digits of the id (as long as there is no ambiguity). Examples:

    docker start -i 85bcdca6c38f
    docker start -i 85bcd
    docker start -i 85
    docker start -i 85bcdca6c38f07e3f8140cbf8b4ad37fd80d731b87c6945012479439a450a443
    docker start -i pensive_hodgkin
    

  8. (commit) You can modify files inside a container. If you restart the same container you can note that these changes are still present. However they will not be present in other containers (even if they run the same image) because docker uses a copy-on-write filesystem. Use the command docker diff to show the differences between a container and its image.

    Remember that all changes inside a container are thrown away when the container is removed. If we want to save a container filesystem for later use, we have to commit the container (i.e. take a snapshot).

    docker commit CONTAINER
    

    This operation creates a new image (visible in docker images). This image in turn can be used to start a new container.

    Note: docker commit does not affect the state of the container. If it is running, then it just keeps running. You may take as many snapshots as you like.
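
    A typical sequence could look like this (the container name snaptest and the image tag my-ubuntu:snapshot are just example names):

    docker run -t -i --name snaptest ubuntu     # create or modify some files inside, then exit
    docker diff snaptest                        # list the changed files
    docker commit snaptest my-ubuntu:snapshot   # take a snapshot as a new image
    docker run -t -i my-ubuntu:snapshot         # a new container based on the snapshot sees your changes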

  9. (rm) You now have too many dead containers in your engine. You should use docker rm to remove them. Alternatively you can run docker container prune which removes all dead containers.
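
    For example (the container name or id is one of those shown by docker ps -a):

    docker rm NAME_OR_ID
    docker container prune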

  10. (extras) If you still have extra time, you can experiment with the following (a combined example is sketched after this list):

    • the other docker run options we introduced so far:
      • --rm to remove the container automatically when it terminates
      • -d/--detach to run a container in the background
      • -u/--user to run the container as a different user
      • -w/--workdir to start the container in a different directory
      • -e/--env to set an environment variable
      • -h/--hostname to set a different hostname (the host name inside the container)
      • --name to set a different name (the name of the container in the docker engine)
      • also you may type docker run --help to display all configuration keys
    • other docker commands (note: some of these commands require the container to be running, just launch docker run -d -t -i debian to have one that keeps running in the background)
      • docker inspect to display the metadata of a container (json format)
      • docker cp to transfer files from/into the container
      • docker exec to launch a separate command in a running container (very useful for providing a debugging shell -> docker exec -t -i CONTAINER bash)
      • docker top to display the processes running inside the container
      • docker stats to display usage statistics
      • docker logs to display the container output
      • docker attach to reattach to the console of a detached container
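
    For instance, a run combining several of these options might look like this (the name web and the variable GREETING are arbitrary examples):

    docker run -d -t -i --rm --name web -h webhost -w /tmp -e GREETING=hello ubuntu
    docker exec -t -i web bash
    docker logs web
    docker stats --no-stream web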

Part 2. Docker volumes

  1. (external volume) Run a container with -v/--volume to mount an external volume.

    Eg: mount the /tmp/myvol from the host machine at /myvol inside the container:

    docker run --rm -t -i -v /tmp/myvol:/myvol ubuntu
    

    Note: on Windows/macOS with the Docker Toolbox, your docker engine is running inside a virtual machine. This means /tmp/myvol refers to the /tmp/myvol path inside the VM. You can mount directories from your host system only if they are shared with the VM. By default the toolbox is configured to share the Users directory inside the VM:

    • on Windows C:\Users is mounted as /c/Users
    • on MacOS /Users is mounted as /Users

    Thus you can mount a directory from these places, eg:

    docker run --rm -t -i -v '/c/Users/NAME/My documents/myvol:/myvol' ubuntu

    docker run --rm -t -i -v '/Users/NAME/Documents/myvol:/myvol' ubuntu

    Once the container is started, you can note that the directory /tmp/myvol is mounted inside the container at /myvol. If you create files there they will be visible on both sides, and they will persist if the container is removed. You can remove your container and create a new one with the same parameters to check that.

    This way of using an external volume is a direct mount. The docker engine will not care about the management of this directory (apart from creating it).
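
    A quick way to check the persistence (assuming the host path /tmp/myvol as above):

    docker run --rm -v /tmp/myvol:/myvol ubuntu touch /myvol/hello
    ls /tmp/myvol                                           # the file is visible on the host
    docker run --rm -v /tmp/myvol:/myvol ubuntu ls /myvol   # and in a brand new container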

  2. (named volume) Alternatively we can create a named volume, that is a volume managed by docker (and stored by default in /var/lib/docker/volumes). A volume whose name does not start with / is treated as a named volume. Example:

    docker run --rm -t -i -v my-named-volume:/myvol ubuntu
    

    This named volume is persistent of course. It is managed separately from the containers with the docker volume command, eg:

    docker volume ls
    docker volume rm my-named-volume
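
    You can also create and inspect a named volume explicitly before using it (my-named-volume is the same example name as above):

    docker volume create my-named-volume
    docker volume inspect my-named-volume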
    

Part 3. Dockerfile

  1. (Building images) We will now play with Dockerfiles.

    Choose a nodejs application from the ones that we built throughout our course and create a docker image for it (a minimal example Dockerfile is sketched after the list below).

    • The image should use a nodejs docker image
    • Set the working dir to be /app
    • Copy the source code to the working directory
    • Install the necessary packages
    • Expose the necessary ports
    • Start the application
    • Build the image using docker build ... command
    • Test that it works perfectly
    • Publish the image to Dockerhub
    • Try to download each other's images and run them
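
    A minimal sketch of such a Dockerfile (assuming the app has a package.json, listens on port 3000 and starts with npm start; adapt it to your own application):

    # Dockerfile
    FROM node:18-alpine
    WORKDIR /app
    COPY package*.json ./
    RUN npm install
    COPY . .
    EXPOSE 3000
    CMD ["npm", "start"]

    And the corresponding build/run/publish commands (YOUR_DOCKERHUB_USER and my-node-app are placeholders):

    docker build -t YOUR_DOCKERHUB_USER/my-node-app:1.0 .
    docker run --rm -p 3000:3000 YOUR_DOCKERHUB_USER/my-node-app:1.0
    docker login
    docker push YOUR_DOCKERHUB_USER/my-node-app:1.0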

Part 4. Docker networking

  1. (running a server) We will now play with the network.

    Pull the nginx:stable-alpine image (nginx is a web server; alpine is a very lightweight linux distribution based on the musl C library + busybox, and a popular base for building small docker images).

    docker pull nginx:stable-alpine
    

    Our goal here is to run an HTTP server to serve some content (ex: the package documentation on our machine in /usr/share/doc). Before starting the container we need to know how to configure it, in particular where the served directory must be mounted.

    The documentation of the nginx image says the content is to be located in /usr/share/nginx/html. So let's mount our doc directory at this place. Note that nginx does not require write access on these files, therefore it is a good idea to append :ro to make the mount read-only.

    docker run -d --name nginx -v /usr/share/doc:/usr/share/nginx/html:ro nginx:stable-alpine
    

    The server is now running somewhere in a container. Since this container has a separate network stack, it has a different IP address. There are multiple ways to obtain this IP address:

    • inspect the container metadata:

      docker inspect nginx | grep IPAddress
      
    • run a command inside the container:

      docker exec nginx ip addr show dev eth0
      

    Once you know this address you can open it in your web browser -> http://172.17.x.x/. Unfortunately the nginx image does not enable directory listing (autoindex), so it will not display directories without an index.html file.

    Just find an html document with the following command and append its path to your url to see if it works.

    (cd /usr/share/doc && find * -name index.html)
    
  2. (publish) We have confirmed that we are able to run an HTTP server inside a container and serve some content. However this container is in a private network (172.17.0.0/16), so it is not reachable from the public.

    To make it reachable we have to publish the HTTP port (tcp/80) on the host machine (which may have a public IP address). We add a -p/--publish option:

    docker run -d --name nginx -v /usr/share/doc:/usr/share/nginx/html:ro -p 80:80 nginx:stable-alpine
    

    Then the server should be reachable at http://localhost/. Note: the command will fail if you already have a server using port 80 on your machine. If this happens, you may specify an alternate port, eg: with -p 1234:80 the server will be reachable at http://localhost:1234/
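
    You can also check which host port(s) a container publishes with docker port (nginx is the container name used above):

    docker port nginx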

  3. (legacy links) Our nginx server is reachable from outside. Another use case would be to make a server reachable from another container (for example, a web application in a server may want to use a database hosted in another container).

    To test this feature we run a busybox container (it's lightweight and it provides the wget HTTP client)

    docker run --rm -t -i --link nginx:http-server busybox
    

    This makes the nginx container reachable from this new container under the alias http-server. Inside the busybox container we make an HTTP request with wget:

    wget http://http-server/
    
  4. (user-defined network) Legacy links are deprecated, which is unfortunate. The alternative is to have the two containers connected to the same internal network.

    By default containers are launched on the bridge network. Depending on the configuration of your daemon (in /etc/docker/daemon.json), inter-container communications may or may not be authorized on the default bridge (they are not if icc is set to false).

    If icc is enabled, we can already communicate between the containers (using the container name as TCP/IP destination).

    docker run --rm -t -i busybox wget http://nginx/
    

    In a production context, to improve the security, it would be preferable to put unrelated containers in separate networks.

    To test this we will create a dedicated network named ngnet, to let our two containers communicate privately.

    docker network create ngnet
    

    We can display its config (and especially observe that it uses a different IP prefix) with:

    docker network inspect ngnet
    

    At container creation time, we can use --net/--network to select a specific network (instead of the default bridge). But it is also possible to connect and disconnect the containers to/from networks dynamically (especially to allow having a container connected to multiple networks).

    We connect our nginx container to the ngnet network:

    docker network connect ngnet nginx
    

    The container is now connected to the two networks (bridge on eth0 and ngnet on eth1). We can verify this:

    docker exec nginx ip addr
    

    We can now run our busybox container with --net ngnet to be on this network:

    docker run --rm -t -i --net ngnet busybox wget http://nginx/
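
    For stricter isolation you could then detach the nginx container from the default bridge, so that it is only reachable through ngnet (a possible follow-up, not required for the exercise):

    docker network disconnect bridge nginx
    docker exec nginx ip addr      # the eth0 (bridge) interface is gone
    docker network rm ngnet        # cleanup, once all containers are disconnected or removed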
    