
Docker

docker <management_command> <sub_command>

Docker Hub (https://hub.docker.com/) hosts many community and official Docker images.

Containers

Start a Docker container:

docker container run --publish 80:80 [--name <container_name>] [--detach] [--rm # remove the container when it exits]
         [--network <network_name>] [--net-alias <alias_name>] [-v /host/path:/container/path] [--env MYSQL_RANDOM_ROOT_PASSWORD=yes] nginx [CMD]
  • [--publish|-p] exposes ports on the host. Without -p the container is attached only to Docker's local bridge network and is not exposed.
  • You can put multiple containers on the same virtual network so they can connect to each other without going through the bridge. Creating containers with alias names is useful for DNS round-robin load balancing (see the example below).
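For example, a minimal sketch that starts a throwaway web server (the name and host port are hypothetical):

docker container run --detach --publish 8080:80 --name webhost nginx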
docker container ls                     # list running containers
docker container ls -a                  # list all containers, including stopped ones
docker container logs <container_name>  # show a container's output
docker container stop <ID>
docker container start <ID>
docker container rm [-f] <ID>           # -f forces removal of a running container

List all the processes inside a container:

  • docker container top <container_name>
  • docker container inspect <container_name>

Check stats:

docker container stats

Run a new instance interactively:

docker container run -it   # run a new instance interactively
docker container exec -it  # run an additional command inside a running container; creates a new process on top of the instance
docker container start -ai # start an existing container interactively (attach and interactive)

Commit a container to make an image:

docker container commit <container_name> <image_name>

Almost all images include bash, which lets you go inside the container (here web is the container name):

docker container exec -it web bash

  • i for interactive
  • t for tty (pseudo-TTY)

docker container run -it alpine bash

Alpine is a very small Linux distribution. The command above errors because Alpine doesn't include bash; it ships sh, a minimal shell, instead:

docker container run -it alpine sh

List the ports that are open for a container:

docker container port <container_name>

Network

If two containers are on the same network [--network <network_name>], they will be able to connect. To check:

docker container exec -it <container_name> ping <container_name2>

Ping might not be available in the ubuntu image. You can also connect containers using [--link], but it is much easier to create a network and assign all containers to it:

docker network create <network_name>
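A quick sketch (names hypothetical): two containers on the same user-defined network can reach each other by container name through Docker's built-in DNS. Alpine-based images include ping via BusyBox:

docker network create my_app_net
docker container run -d --name web1 --network my_app_net nginx:alpine
docker container run -d --name web2 --network my_app_net nginx:alpine
docker container exec -it web1 ping web2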

Images

List all images:

docker image ls

Pull the latest version of an image from the hub:

docker image pull <image_name>

docker image pull <image_name>:<version>

Each image is made from a set of layers, and each layer has its own SHA256 hash. If you pull two images, they might reuse some of the layers. To display the layers of an image:

docker image history <image_name>

A container is merely a running process of that image, plus an extra writable layer that caches the file changes made in that container.

To get the metadata of an image (ports, volumes, layers, etc.):

docker image inspect <image_name>

To add a new tag to a Docker image:

docker image tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

If you do not specify a tag, it defaults to latest.

Then you can push the new image/tag into docker hub using:

docker image push TARGET_IMAGE[:TAG]
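For example, re-tagging an official image into your own (hypothetical) repository and pushing it:

docker image tag nginx wael34218/nginx:testing
docker image push wael34218/nginx:testing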

You can log in and out using:

docker login
docker logout

Dockerfiles

A Dockerfile is a recipe for building your image.

By default, docker build looks for a file named Dockerfile. Each command in the Dockerfile adds a new layer.

1- FROM debian:jessie sets the minimum requirements (starting point) of your image. You inherit everything from that image (except ENV), so CMD could already be specified by the base image; you don't have to specify it in your Dockerfile if you are using the same command.

2- ENV ENVIRONMENT_VARIABLE_NAME value is the main way of setting keys and values when building images.

3- RUN [bash command && another bash command] executes bash inside the container. Put multiple commands in one RUN stanza so they add only one layer, which saves space and execution time. To install git, for example:

RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*

The last command removes the apt cache, which can take up to 10MB that would otherwise be stored in your image - better to get rid of it. When doing git clone, fetch the last commit only (--single-branch --depth 1) to save time and space.

For logging, forward all the log files you need to standard output and standard error; Docker will handle the rest for you:

RUN ln -sf /dev/stdout /var/log/SERVICE/access.log \
    && ln -sf /dev/stderr /var/log/SERVICE/error.log

4- EXPOSE 80 443 declares which ports the container listens on (publishing them to the host still requires -p/--publish).

5- Finally, the command to run when launching the container: CMD ["nginx", "-g", "daemon off;"]

6- WORKDIR /path/to/directory is basically a cd command; it is the idiomatic way to change directories in a Dockerfile.

7- COPY local_file container_file copies files from the build context into the image.

Rule of thumb: keep the things that change the least at the top of the Dockerfile, because once a layer changes, every line after it has to be rebuilt. Things that change the most should be at the bottom of your Dockerfile.
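Putting these instructions together, a minimal sketch of a complete Dockerfile (the package and paths are illustrative, not from these notes):

FROM debian:jessie

ENV NGINX_VERSION 1.13.6

# one RUN stanza = one layer; clean the apt cache in the same layer
RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*

# forward the logs to stdout/stderr so Docker can collect them
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

EXPOSE 80 443

WORKDIR /usr/share/nginx/html
COPY index.html index.html

CMD ["nginx", "-g", "daemon off;"]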

To build a Docker image use:

docker build [-f filename] -t wael34218/customimage:tag_name path/to/build/directory

The path is usually the current directory: .

Persistent Data

You don't want to store a database inside the container; it should be visible from the host machine ("separation of concerns"): you want to be able to update the container's image while still working on the same database. We can solve this with one of two solutions:

1- Volumes

Volumes make a special location outside the container's Union File System (UFS); the container sees them as local files. From the Dockerfile, you can create a volume using:

VOLUME /var/lib/mysql

Volumes need manual deletion; they are not deleted with the container. docker image inspect will show you the volumes used by the image.

docker volume ls and docker volume inspect HASH give you information about your volumes.

Start a container with a named volume:

docker container run --name mysql -v mysql-db:/var/lib/mysql [--env MYSQL_RANDOM_ROOT_PASSWORD=yes] mysql [CMD]

To create a volume ahead of the container, you can use:

docker volume create [OPTIONS] [VOLUME]

2- Bind Mount

Maps a host file or directory to a container file or directory. It skips the UFS, so the data outlives the container. Bind mounts cannot be created from inside the Dockerfile; they have to be created with the container run command:

docker container run ... -v /path/in/host:/path/in/container
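For example, a sketch that serves a local directory through nginx (paths and names hypothetical):

docker container run -d --name devweb -p 8080:80 -v $(pwd)/html:/usr/share/nginx/html nginx

Edits to files under ./html on the host are immediately visible inside the container.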

Docker Compose

Docker Compose configures relationships between containers. The docker-compose command uses docker-compose.yml as the default filename; you can specify a different file with the -f flag.

Example of a docker-compose file:

version: '3'

services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example
      SHOW: 'true'
    volumes:
      - ./mysql-data:/var/lib/mysql
  web:
    build:
      context: .
      dockerfile: web.Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    image: custom-image
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db

These containers run on the same network, so you do not need to configure a network between them. The hostname of each service is the service name; in the example above, the hostnames are web and db. Docker Compose should not be used for production, only for local development.

The most commonly used commands:

  • docker-compose up [-d] [--build]: Sets up volumes and networks and starts the containers. -d to daemonize. --build to rebuild the images in the compose file.
  • docker-compose down [-v] [--rmi (local|all)]: Stops all containers and removes containers and networks. -v removes volumes as well. --rmi removes images built within the docker-compose file.
  • docker-compose logs: See the logs if docker-compose was running in daemon mode.
  • docker-compose --help: The commands look very similar to docker commands, as docker-compose is just a wrapper that eventually calls docker server commands.
  • docker-compose (ps|top): List all running processes.
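A typical local-development loop with the example file above might look like this (a sketch):

docker-compose up -d --build   # build images and start everything in the background
docker-compose logs -f web     # follow the web service's logs
docker-compose down -v         # tear everything down, including volumes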

Swarm Mode

Built-in cluster orchestration for deploying and managing release cycles.

  • A swarm is a clustering and scheduling tool for Docker containers.
  • A single service can have multiple tasks.
  • Each task launches a container.
  • A worker node is a host that runs tasks.
  • A manager node is a host that accepts commands from the client, assigns tasks to nodes, allocates IPs for tasks, and checks on the workers.

docker swarm init: Start a single node swarm:

  1. It creates a Root Signed Certificate
  2. A certificate is issued for the first manager node
  3. Join tokens are created
  4. A Raft database is created to store the Root CA
  • docker swarm [init|join|join-token|leave|unlock|unlock-key|update]

init takes --advertise-addr <ip> so that other nodes can reach this node.
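For example (the IP is hypothetical):

docker swarm init --advertise-addr 192.168.99.100
docker swarm join-token worker   # prints the join command a worker node should run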

  • docker node [promote|demote|inspect|update|rm|ps|ls]

  • docker service [create|inspect|logs|scale|update|ls|ps|rm]

--replicas defines the number of tasks to run. --name specifies the name of the service.
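A sketch of the basic service workflow (service name and ports hypothetical):

docker service create --name web --replicas 3 -p 8080:80 nginx
docker service ps web        # one line per task; each task runs a container
docker service scale web=5   # add two more replicas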

When creating a network for a swarm, use docker network create --driver overlay NAME, which facilitates communication between swarm nodes.

Routing mesh allows incoming packets on any node to be forwarded to a proper task.

This is a stateless load balancer operating at Layer 4 (TCP). If you want stateful (sticky) sessions, you can add HAProxy or Nginx into your swarm.

Services do not take the -v volume flag, so we have to use the --mount option when creating services with persistent data. Example:

docker service create --name db --network mynetwork --mount type=volume,source=db-data,target=/var/lib/postgresql/data postgres:9.4

Creating virtual docker machines for testing Swarm:

docker-machine create node1, then ssh in with docker-machine ssh node1. This solution needs VirtualBox to be installed.

Docker Stacks

Stacks add a layer of abstraction that accepts compose files: a single yaml file that defines:

  • Services
  • Volumes
  • Networks

To create a new stack, run docker stack deploy -c my_compose_file.yml stack_name (note that stacks cannot build images). If you want to update the stack, edit the yaml file and run the same command again; Docker will automatically know that you only want to update.

docker stack [deploy|services|ps|rm|ls]
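A typical cycle (stack name hypothetical):

docker stack deploy -c my_compose_file.yml mystack
docker stack services mystack   # one line per service, with replica counts
docker stack ps mystack         # one line per task
docker stack rm mystack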

A stack file is similar to a compose file except that it adds the deploy keyword:

version: '3.1'  # has to be at least 3.1 to support secrets.

services:
  db:
    image: mariadb
    secrets:
      - db_pass
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_pass
      SHOW: 'true'
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - mynetwork
    ports:
      - 6543:6543
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
  web:
    command: python3 manage.py runserver 0.0.0.0:8000
    image: custom-image
    volumes:
      - web-code:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - mynetwork
    deploy:
      placement:
        constraints: [node.role == manager]

volumes:
  web-code:
  mysql-data:

networks:
  mynetwork:

secrets:
  db_user:
    file: ./db_user.txt
  db_pass:
    external: true

Storing Secrets

Secrets are the easiest way to store sensitive data in Swarm; they can store:

  • usernames and passwords
  • TLS certificates
  • SSH keys

A key-value store, where the key is the name of the file and the value is its contents. These files are stored in the Swarm and assigned to services, so only containers of those services can use them. If you do not have a swarm, you can't use secrets. Secret files appear inside the service's containers under /run/secrets/NAME.

  • docker secret create db_user db_user.txt : Read username from the file
  • echo "myPassWord" | docker secret create db_pass - : Read password from standard input
  • docker secret [create|inspect|ls|rm]

docker service create --name mydb --secret db_pass --secret db_user -e POSTGRES_PASSWORD_FILE=/run/secrets/db_pass -e POSTGRES_USER_FILE=/run/secrets/db_user postgres

Lifecycle

We can create multiple compose files:

  • docker-compose.yml: Contains your base configuration.
  • docker-compose.override.yml: The default override; it is applied automatically when running docker-compose up (a sketch follows this list).
  • docker-compose.test.yml: For Continuous Integration testing; you can add credentials for a test database here. To use it, pass -f docker-compose.yml -f docker-compose.test.yml, in that order.
  • docker-compose.prod.yml: To create a file that is ready for production, use docker-compose -f docker-compose.yml -f docker-compose.prod.yml config > production-file.yml
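As a sketch (values hypothetical), an override file only needs the keys it changes; docker-compose merges it on top of the base file:

# docker-compose.override.yml
version: '3'

services:
  web:
    ports:
      - "8000:8000"   # expose the dev port locally
    volumes:
      - .:/code       # live-mount the source code for development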

Service updates

Swarm updates services using rolling updates (replacing containers one by one).

  • docker stack deploy -c [FILE] [NAME]: Used both to create a stack and to update it; it reads through the file and updates what is necessary.
  • docker service update --env-add NODE_ENV=production --publish-rm 8080: Add an environment variable and remove a published port.
  • docker service scale web=8 api=6: Scale the replicas of two services, web and api.

Health Check

A health check executes a command inside the container that returns 0 (OK) or 1 (error). It is not a replacement for external monitoring. A container has three health states: starting, healthy and unhealthy. Services will replace unhealthy containers, and if an update does not produce a healthy container, the rollout will not continue to the other containers.

docker run -d --name db --health-cmd "curl -f http://localhost:9200/_cluster/health || false" --health-interval=5s --health-timeout=3s --health-start-period=30s --health-retries=3 elasticsearch:2

Or you can use it inside the Dockerfile:

HEALTHCHECK --interval=5s --timeout=3s CMD curl -f http://localhost/ || false

Or inside compose/stack:

version: '3.4'  # healthcheck start_period requires at least file format 3.4.

services:
  web:
    image: mariadb
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 1m

You can see the health status using docker container ls.

Container Registry

  • hub.docker.com: Image registry with simple image building; you can link your GitHub/Bitbucket repositories to it using the create automated build option. It has webhooks, so new images pushed to Docker Hub can trigger Jenkins or Travis CI to continue automated builds down the line.
  • store.docker.com: Certified images.
  • cloud.docker.com: Web-based orchestration (swarm) system for your cluster; building, testing and deploying as a paid service. The local Docker CLI can connect to a remote Swarm you've created in the cloud.

Run a private registry

A private registry is a server running on port 5000 over a secure connection (HTTPS/TLS) that you can push images to and pull them from.

To run it use the command:

docker container run -d -p 5000:5000 --name registry -v $(pwd)/registry-data:/var/lib/registry registry

Move an image into this registry:

  • docker image pull NAME
  • docker image tag NAME 127.0.0.1:5000/NAME
  • docker image push 127.0.0.1:5000/NAME

Now you can pull it from the local registry using docker image pull 127.0.0.1:5000/NAME

If you go to the URL 127.0.0.1:5000/v2/_catalog you will see a JSON object that lists all the images stored in the registry.
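For example, assuming a single image named NAME has been pushed:

curl http://127.0.0.1:5000/v2/_catalog
# {"repositories":["NAME"]}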

Docker System

docker system df shows how much space Docker is using.

docker system prune removes stopped containers, dangling images, unused networks and build cache.
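Two useful variants (both flags exist in the standard CLI):

docker system prune -a          # also remove all unused images, not just dangling ones
docker system prune --volumes   # also remove unused volumes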
