
Docker Notes

Notes on Docker

host machine (Mac or Windows) -> 
  virtual machine (VirtualBox) -> 
    docker images (layered file system) -> 
      docker containers (running images)
docker tools installed on host or client machine ->
  docker-machine -> controls virtual machines
  docker (client) -> connects to -> docker daemon (running on virtual machine)
  docker daemon -> runs containers

Docker Machine

docker-machine is a command-line tool that is run from the terminal on the host OS.

Similar to Vagrant, it controls the virtual machines on your host.

docker-machine ls // Lists virtual machines, including info such as their IP and status
docker-machine start default // Starts the default machine
docker-machine stop default // Stops the default machine
docker-machine ip default // Displays the machine IP
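If no machine exists yet, one can be created with the VirtualBox driver; a minimal sketch (the name default is just the conventional choice):

docker-machine create --driver virtualbox default // create a VirtualBox-backed machine named default
docker-machine ls // confirm it is running
docker-machine ip default // note its IP for later use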

Notes on using the Terminal

On a host OS such as Mac or Windows, when the Docker tools (Docker Toolbox) are installed, VirtualBox is also installed (or an existing installation is used).

With Docker Toolbox on a Mac, using the Docker Quickstart Terminal, the terminal environment is configured to launch and connect to a default virtual machine.

Update 2018-02-18:

Docker for Mac differs from Docker Toolbox: it no longer makes use of a default machine and uses HyperKit instead of VirtualBox.

Docker for Mac vs. Docker Toolbox

When issuing docker client commands from a host terminal, the terminal environment needs to be set up so that the client knows how to connect to the default (or a particular) machine.

Run docker-machine env default to see environment info for the default machine.

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/Dan/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
# Run this command to configure your shell:
# eval "$(docker-machine env default)"

To configure the shell for the default machine run eval "$(docker-machine env default)"

Docker client commands run from the configured shell, such as docker ps, will now be executed against the connected default machine.

Docker Client

The docker client is a command line tool that allows you to interact with the docker daemon on a particular virtual machine and control its images and containers.

For a particular shell window, docker client commands can be issued to the connected machine.

The docker daemon or engine on the target machine receives and executes the commands from the docker client.

docker pull imagename // pull down an image (from a remote repository)
docker run imagename // run an image, create a container instance based on the image
docker images // lists the images available on the machine
docker ps // lists the running containers
docker ps -a // lists all the containers

docker ps will display the container id, image, command, name, created and status.

docker rm id or docker rm name will remove the container with the specified id (a hash prefix such as 59f...) or name

docker images will display each image's repository, tag, image ID, created time and virtual size

docker rmi id will remove the specified image with id (hash)

docker run -p 8080:80 kitematic/hello-world-nginx runs a container based on the kitematic/hello-world-nginx image and tells the Docker host to map its port 8080 to port 80 of the running container.
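A slightly fuller sketch of the same idea (the container name hello-nginx is arbitrary): run the container detached, check it, then stop and remove it.

docker run -d -p 8080:80 --name hello-nginx kitematic/hello-world-nginx
docker ps // the named container shows as running
docker stop hello-nginx // stop the container
docker rm hello-nginx // remove the stopped container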

Docker Image

A docker image is a layered, read-only filesystem. Running a container adds a thin read/write filesystem layer on top of the image layers.
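The layers of an image can be inspected from the client; mongo below is just an example image name.

docker history mongo // lists the layers the mongo image is built from
docker inspect mongo // full image metadata, including layer digests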

Docker Compose

Tool to manage multiple containers and the application lifecycle.

Running containers can be considered services.

A docker-compose.yml file allows you to define your application services (containers).

With docker-compose you build, start up and tear down those services.

Docker Compose File

The docker-compose.yml file is run through the docker-compose build command to generate the service images.

version: '2'
services:
  node:
    build:
      context: .
      dockerfile: node.dockerfile
    networks:
      - nodeapp-network
  mongodb:
    image: mongo
    networks:
      - nodeapp-network
      
networks:
  nodeapp-network:
    driver: bridge
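The compose file references a node.dockerfile, whose contents are not part of these notes. A minimal sketch of what it might contain (the working directory and app.js entry point are assumptions):

FROM node:alpine                 # assumed base image
WORKDIR /var/www/app             # assumed working directory
COPY package.json .              # copy the manifest first so npm install is cached
RUN npm install
COPY . .                         # copy the application source
EXPOSE 3000                      # matches the 3000:3000 port mapping used further below
CMD ["node", "app.js"]           # assumed entry point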

Docker Compose Commands

docker-compose build
docker-compose up
docker-compose down
docker-compose logs
docker-compose ps
docker-compose start
docker-compose rm
docker-compose pull

Build - build or rebuild services defined in docker-compose.yml

docker-compose build builds all services defined in docker-compose.yml; docker-compose build mongodb builds only that one service

Up - create and start containers

docker-compose up -d - the -d flag runs the containers in detached mode (in the background)

docker-compose up --no-deps node - Start without dependencies

Down - stop and remove the containers (and the default networks)

docker-compose down

docker-compose down --rmi all --volumes - also removes the images and volumes used by the services

Custom docker-compose.yml file

version: '2'
services:
  node:
    build:
      context: .
      dockerfile: node.dockerfile
    ports:
      - "3000:3000"
    networks:
      - nodeapp-network
  mongodb:
    image: mongo
    networks:
      - nodeapp-network
      
networks:
  nodeapp-network:
    driver: bridge

Logs

docker-compose logs shows the logs for all running containers
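To follow log output continuously, or for a single service, the usual flags apply (node is the service name from the compose file above):

docker-compose logs -f // follow the logs of all services
docker-compose logs -f node // follow the logs of the node service only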

Custom docker-compose.yml file

version: '2'
services:
  web:
    build:
      context: .
      dockerfile: aspnetcore.dockerfile
    ports:
      - "3000:3000"
    networks:
      - aspnetcoreapp-network
  postgres:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
    networks:
      - aspnetcoreapp-network
      
networks:
  aspnetcoreapp-network:
    driver: bridge

Running commands

Changes you make inside a container persist in that container when it is stopped and started again.

You can commit these changes and save the container as a new image.

When you use docker run to start a container, it actually creates a new container based on the image you have specified.

Note that you can restart an existing container after it has exited and your changes will still be there.

docker ps shows you only running containers; docker ps -a also shows the ones that have exited, and those can be started again. A commit is only necessary if you want to make a snapshot at that point for future use; otherwise the container itself will stick around for you to keep using.[1]

docker commit <container_id> new_image_name:tag_name (the tag_name is optional)[2]
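A sketch of the full cycle (the alpine image, the container name scratchpad and the tag mynotes:v1 are arbitrary examples):

docker run -it --name scratchpad alpine sh // start an interactive container
// inside the container: create or change some files, then exit
docker ps -a // the exited container is still listed
docker commit scratchpad mynotes:v1 // snapshot the stopped container as a new image
docker images // mynotes:v1 now appears in the image list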


Docker Context

Reference:

How to deploy on remote Docker hosts with docker-compose

  1. Using docker contexts
$ docker context ls
NAME   DESCRIPTION   DOCKER ENDPOINT   KUBERNETES ENDPOINT   ORCHESTRATOR
…
remote               ssh://user@remotemachine
$ cd hello-docker
$ docker-compose --context remote up -d

Docker Contexts are an efficient way to automatically switch between different deployment targets. We will discuss contexts in the next section in order to understand how Docker Contexts can be used with compose to ease / speed up deployment.

Docker Contexts

A Docker Context is a mechanism to provide names to Docker API endpoints and store that information for later usage. The Docker Contexts can be easily managed with the Docker CLI as shown in the documentation.

Create and use context to target remote host

To access the remote host in an easier way with the Docker client, we first create a context that will hold the connection path to it.

$ docker context create remote --docker "host=ssh://user@remotemachine"
remote
Successfully created context "remote"

$ docker context ls
NAME      DESCRIPTION            DOCKER ENDPOINT    KUBERNETES ENDPOINT     ORCHESTRATOR
default * Current DOCKER_HOST…   unix:///var/run/docker.sock                swarm
remote                           ssh://user@remotemachine

Make sure key-based authentication is set up for SSH-ing to the remote host; a minimal sketch of that setup follows. Once this is done, we can list containers on the remote host by passing the context name as an argument.
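Assuming Docker is already installed on the remote host (user@remotemachine is the placeholder used in this example), the SSH setup might look like this: generate a key pair if one does not already exist, copy the public key to the remote host, then confirm passwordless access and that Docker responds.

$ ssh-keygen -t ed25519
$ ssh-copy-id user@remotemachine
$ ssh user@remotemachine docker version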

$ docker --context remote ps
CONTAINER ID    IMAGE   COMMAND   CREATED   STATUS   NAMES

We can also set the “remote” context as the default context for our docker commands. This will allow us to run all the docker commands directly on the remote host without passing the context argument on each command.

$ docker context use remote
remote
Current context is now "remote"
$ docker context ls
NAME      DESCRIPTION             DOCKER ENDPOINT    KUBERNETES ENDPOINT    ORCHESTRATOR
default   Current DOCKER_HOST …   unix:///var/run/docker.sock               swarm    
remote *                          ssh://user@remotemachine

docker-compose context usage

The latest release of docker-compose now supports the use of contexts for accessing Docker API endpoints. This means we can run docker-compose and specify the context “remote” to automatically target the remote host. If no context is specified, docker-compose will use the current context just like the Docker CLI.

$ docker-compose --context remote up -d
/tmp/_MEI4HXgSK/paramiko/client.py:837: UserWarning: Unknown ssh-ed25519 host key for 10.0.0.52: b'047f5071513cab8c00d7944ef9d5d1fd'
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done

$ docker --context remote ps
CONTAINER ID   IMAGE                  COMMAND                 CREATED          
  STATUS          PORTS                  NAMES
ddbb380635aa   hello-docker_frontend  "nginx -g 'daemon of…"  24 seconds ago
  Up 23 seconds   0.0.0.0:8080->80/tcp   hello-docker_web_1
872c6a55316f   hello-docker_backend   "/usr/local/bin/back…"  25 seconds ago
  Up 24 seconds                          hello-docker_backend_1