
@nick-brown
Created June 18, 2018 23:14
Docker Master Notes

Basics

  • engine/server (daemon)

    • docker version or docker info
  • Image: application we want to run

    • hub.docker.com (default image registry)
  • Container: the instance of that image running as a process

    • many containers can run off of the same image

docker container run --publish 80:80 --detach nginx:1.11

  • --publish (or -p) maps a host port to a container port
  • --detach (or -d) runs the container in the background
  • 80:80 maps port 80 on the host to port 80 in the container
  • :1.11 pins a version; defaults to latest if not specified

  • docker container ls or docker ps lists running containers
  • docker container logs <name> spits out logs for a detached container
  • docker container top <name> shows the processes inside a container
  • docker container rm <name1> <name2> removes containers; the optional -f flag removes a container that is currently running
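A typical lifecycle using the commands above (the container name `webhost` is illustrative — Docker assigns a random name if you omit `--name`):

```shell
docker container run --publish 80:80 --detach --name webhost nginx:1.11
docker container ls              # webhost shows up as running
docker container logs webhost    # nginx access/error logs
docker container top webhost     # processes inside the container
docker container rm -f webhost   # -f stops and removes in one step
```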

What happens in docker container run?

  • looks for image locally first, then to default registry
  • creates new container based on that image
    • NOTE: the powerful part of docker is that it's not cloning the image and starting a new one, it's simply creating a new layer of changes on top of the existing image so there is no duplication
  • gives a virtual IP on a private network inside the docker engine
  • opens up port 80 and forwards to port 80 in the container
  • starts the container by using the CMD in the image Dockerfile

Containers aren't mini VMs

  • just (restricted) processes
  • limited to what resources they can access
  • exit when the process stops
  • can see the process running on the host machine (ps aux | grep <name>)

Environment var passing

  • when running mysql use the --env or -e option to pass in MYSQL_RANDOM_ROOT_PASSWORD=yes
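Spelled out as a full command (the container name is illustrative; `MYSQL_RANDOM_ROOT_PASSWORD` is an env var understood by the official mysql image's entrypoint):

```shell
# generate a random root password on first startup
docker container run -d --name mysql \
  -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
  mysql

# the generated password is printed in the container logs
docker container logs mysql | grep 'GENERATED ROOT PASSWORD'
```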

docker image ls to show all local images

Seeing what's going on inside of containers

  • docker container top - process list in a single container
  • docker container inspect - details of one container config
  • docker container stats or docker stats - all container streaming stats

Shell inside containers

  • docker container run -it - start a new container interactively (t is a pseudo-tty and i is interactive)
    • docker container run -it --name proxy nginx bash
      • normally containers will immediately run the command of the process they're starting (such as nginx) but in this case the command that was run is bash, which can be seen by looking at docker container ls -a
      • when you exit the shell the container stops because the startup command stopped
    • NOTE: mutually exclusive with --detach/-d
    • in the case of an image like ubuntu, bash is its default startup command
      • distro images are typically very minimal installs
    • to reconnect to a stopped interactive container pass -ai (attach)
      • docker container start -ai ubuntu
  • docker container exec -it - run additional command in existing container
    • this will allow you to run a shell inside of an existing container, like mysql
    • docker exec -it <name> <new_command>
    • docker exec -it mysql bash
    • docker exec -u root -it <container_name> <new_command>
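A minimal sketch of the difference between `run -it` and `exec -it` (the container name `mysql` assumes a mysql container is already running):

```shell
# New container, overriding the default command with bash;
# exiting this shell stops the container because its main process exited.
docker container run -it --name proxy nginx bash

# Additional process inside an already-running container;
# exiting this shell leaves the container running.
docker container exec -it mysql bash
```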

Image Management

  • docker pull <image_name> - fetches the image
  • docker image ls - show all locally cached images

Docker Network Concepts

  • docker container run -p - publishes container ports on the host's network interface

  • docker container port <name> - check ports

  • Each container connected to a private virtual network called "bridge"

  • each virtual network routes through a NAT firewall on the host IP (the Docker daemon configures the firewall on the host interface so the containers can get out to the internet)

  • All containers on a virtual network can talk to each other without exposing ports with -p

    • Best practice here is to have a separate virtual network for each app
      • app1 for mysql, php, and apache
      • app2 for mongo and nodejs containers
    • containers can be connected to zero or more virtual networks
  • skip virtual networks and use host IP (--net=host)

  • can use different Docker network drivers to gain new abilities

  • docker container inspect --format '{{ .NetworkSettings.IPAddress }}' nginx

    • easier than grep once you learn the format of the container config
    • get the actual IP of the container
    • 172.17.0.4 which is a different subnet than local
      • ifconfig en0 shows me the host is a 192.168 subnet
    • can also be used on services
  • docker network create my_app_net - creates a new virtual network to attach containers to
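The per-app network best practice above might look like this (network and container names are illustrative; containers on the same user-defined network can reach each other by name via Docker's built-in DNS):

```shell
docker network create my_app_net

# containers attached to my_app_net can talk to each other without -p
docker container run -d --name db --network my_app_net \
  -e MYSQL_RANDOM_ROOT_PASSWORD=yes mysql
docker container run -d --name web --network my_app_net nginx

# attach or detach an existing container after the fact
docker network connect my_app_net web
docker network disconnect my_app_net web
```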

CLI Management

  • docker network ls - list networks
  • docker network inspect
  • docker network create --driver
  • docker network connect - attach a network to a container
  • docker network disconnect - detach a network from a container

Accessing container files

  • sudo docker cp <container_name>:/etc/nginx/nginx.conf ./ copies the /etc/nginx/nginx.conf file in the container to the current directory on the host machine

Data Storage

  • Volumes: make special location outside of a container's unified file system
    • VOLUME /var/lib/mysql
    • stored in /var/lib/docker/volumes on the host (in a VM on Mac and Windows) and bound to /var/lib/mysql in the container
    • docker volume ls
    • needs to be destroyed separately from the container (for insurance)
      • docker volume prune
    • can name volumes with docker run -d --name mysql -v mysql-db:/var/lib/mysql:ro mysql
      • :ro makes it read-only
  • Bind Mounts: link container path to host path
    • can't use in Dockerfile, must be at container run
    • docker container run -d --name mysql -v /Users/nickbrown/data:/var/lib/mysql mysql (the host path must be absolute)
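The two approaches side by side (container names and the host path are illustrative):

```shell
# Named volume: Docker manages where the data lives on the host
docker container run -d --name mysql-vol \
  -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
  -v mysql-db:/var/lib/mysql mysql

# Bind mount: an absolute host path is mapped into the container
docker container run -d --name mysql-bind \
  -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
  -v /Users/nickbrown/data:/var/lib/mysql mysql

# named volumes survive container removal
docker container rm -f mysql-vol
docker volume ls   # mysql-db is still listed
```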

Docker Compose

  • configures relationships between containers
  • save docker container run settings in file
  • made up of two separate but related things
    1. yaml file describing options for containers/networks/volumes
    2. a cli tool (docker-compose) used for local dev/test automation along with the yaml files
  • docker-compose up to execute the docker-compose.yaml in the current directory
  •   version: '3.1'  # if no version is specified then v1 is assumed. Recommend v2 minimum
    
      services:  # containers. same as docker run
        site-flask-app: # a friendly name. this is also DNS name inside network
          # image: flask-app # use a pre-built image
          build: ./ # build the image in the given directory then run
          ports:
              - '80:80'
          # command: # Optional, replace the default CMD specified by the image
          # environment: # Optional, same as -e in docker run
          # volumes: # Optional, same as -v in docker run
    
      # volumes: # Optional, same as docker volume create
    
      # networks: # Optional, same as docker network create
    
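Common docker-compose commands to pair with a file like the one above (run from the directory containing docker-compose.yml):

```shell
docker-compose up -d     # build/pull images, create the network, start containers
docker-compose logs -f   # follow logs from all services
docker-compose ps        # list this project's containers
docker-compose down      # stop and remove containers and the default network
```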

Docker Swarm

  • a set of nodes all running the same docker image

  • managers in a Raft consensus group and workers in a gossip network

    • i.e., managers agree on cluster state via Raft (leader election plus a replicated log), while workers exchange availability/topology info via a gossip protocol
  • docker run is concerned with managing a single container; in a swarm, docker service replaces run and schedules containers across the cluster

    • allows us to add replicas, known as tasks
    • a single service can have multiple tasks, and each one of those tasks will launch a container
  • if we start an nginx service and tell it to create 3 replicas, the swarm spins up tasks (containers) running the nginx:latest image on available nodes in the cluster/swarm, up to the requested total of 3

  • to check if swarm is running, perform a docker info | grep [sS]warm

  • docker swarm init

    • performs PKI (Public Key Infrastructure)
      • root signing certificate created for the new swarm
      • certs issued for the first manager node
      • join tokens are created (which are used to join other nodes in the swarm)
    • Raft database created to store root CA, configs, and secrets
      • Raft is a protocol used to enforce consistency across nodes in a cluster
      • encrypted by default on disk
      • prevents need for another key/value system to hold orchestration secrets (why are we using Ansible here?)
        • how is this provisioned within swarm?
      • replicates logs amongst managers via mutual TLS in the control plane
  • docker node ls to list nodes

    • there can be only one leader at a time
    •   ID                            HOSTNAME                STATUS              AVAILABILITY        MANAGER STATUS
        lqeiauxsuqvgah1cpj22m5g3x *   linuxkit-025000000001   Ready               Active              Leader
      
  • docker service create alpine ping 8.8.8.8 to tell the swarm to create a new task

    • will return a service id and assign it a random name much like containers
    • docker service ls to see a list of all services
    •   ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
        xsmfp3su05zw        nifty_dijkstra      replicated          1/1                 alpine:latest       
      
    • docker service ps nifty_dijkstra to see the actual container which will additionally tell you what node the task/container is running on
    •   ID                  NAME                IMAGE               NODE                    DESIRED STATE       CURRENT STATE           ERROR               PORTS
        kye1ugk97sxp        nifty_dijkstra.1    alpine:latest       linuxkit-025000000001   Running             Running 4 minutes ago                       
      
  • docker service update nifty_dijkstra --replicas 3 to scale up the service

    • another docker service ps nifty_dijkstra will show the additional tasks
    •   ID                  NAME                IMAGE               NODE                    DESIRED STATE       CURRENT STATE            ERROR               PORTS
        kye1ugk97sxp        nifty_dijkstra.1    alpine:latest       linuxkit-025000000001   Running             Running 6 minutes ago                        
        u59hhh38smza        nifty_dijkstra.2    alpine:latest       linuxkit-025000000001   Running             Running 29 seconds ago                       
        3d47i7q8xv1y        nifty_dijkstra.3    alpine:latest       linuxkit-025000000001   Running             Running 29 seconds ago                       
      
  • docker update allows you to change the configuration (CPU/RAM limits, etc.) of a single container, whereas docker service update applies changes across the entire swarm in a way that maintains availability

  • docker container rm -f nifty_dijkstra.2.u59hhh38smzasaigyfsh1xkgr removes a single task from the swarm; docker service ls will briefly show one replica missing, and shortly after that the swarm brings up a replacement

    • docker service ps nifty_dijkstra will show the entire history of tasks that failed and were replaced by starting a new container
    •   ID                  NAME                   IMAGE               NODE                    DESIRED STATE       CURRENT STATE            ERROR                         PORTS
        kye1ugk97sxp        nifty_dijkstra.1       alpine:latest       linuxkit-025000000001   Running             Running 18 minutes ago                                 
        j07xza549lkg        nifty_dijkstra.2       alpine:latest       linuxkit-025000000001   Running             Running 2 minutes ago                                  
        u59hhh38smza         \_ nifty_dijkstra.2   alpine:latest       linuxkit-025000000001   Shutdown            Failed 2 minutes ago     "task: non-zero exit (137)"   
        3d47i7q8xv1y        nifty_dijkstra.3       alpine:latest       linuxkit-025000000001   Running             Running 12 minutes ago                                 
      
  • docker service rm nifty_dijkstra to remove the entire service -- it will take a few seconds to clean up the task containers

Questions

  1. What are the various network drivers, e.g. bridge?
  2. How are virtual networks specifically different from subnets?