Pluralsight Docker
## Swarm
# On one of the manager nodes run
docker swarm init --advertise-addr 46.161.54.215:2377 --listen-addr 46.161.54.215:2377
# --advertise-addr - which of this machine's IPs will be advertised to the swarm.
# --listen-addr - the IP:port to listen on for swarm manager traffic.
# Then, to generate the command to join as a manager, run.
docker swarm join-token manager
# To generate the same command for a worker, run.
docker swarm join-token worker
# Then go to the other nodes and run the commands generated by the two previous commands.
# Don't forget to add --advertise-addr {Machine_IP}:2377 --listen-addr {Machine_IP}:2377.
# Machine_IP - the IP that will be used to communicate with the swarm.
# If joining does not work, try allowing port 2377 through the firewall.
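# A minimal sketch of opening the swarm ports, assuming ufw is the host firewall (adjust for firewalld/iptables):
sudo ufw allow 2377/tcp   # cluster management traffic
sudo ufw allow 7946/tcp && sudo ufw allow 7946/udp   # node-to-node communication
sudo ufw allow 4789/udp   # overlay network traffic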
# Check your cluster with.
docker node ls # Can be run from manager nodes.
# ID                            HOSTNAME    STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
# 5uefvteudtrhdaav29hes113y *   barin_zver  Ready   Active        Leader          18.03.1-ce
# 1ya7gx9kkk1avl89nabztxalr     mgr2        Ready   Active        Reachable       18.04.0-ce
# y055yq06ya7dpr0swl1rqfl0x     mgr3        Ready   Active        Reachable       18.04.0-ce
# xj4unsr2jmfdft1euc48jp89m     wrkr1       Ready   Active                        18.04.0-ce
# The last node is a worker; it has an empty MANAGER STATUS.
# If you want a worker node to become a manager node, grab the node id from
docker node ls
# And run
docker node promote {grabbed_node_id}
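# The reverse operation also exists, to turn a manager back into a worker
docker node demote {grabbed_node_id}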
## Swarm Services.
# Create a service from an image (like a container that will be distributed across the swarm nodes).
docker service create --name psight1 -p 8080:8080 --replicas 5 nigelpoulton/pluralsight-docker-ci
# After it you can run
docker service ls
# to check the status of the service.
# You can also check the statuses of the service's containers (tasks) with
docker service ps psight1
# If you now try to access the service on a node where it is not running, the swarm's built-in
# load balancer will redirect the request to a node that runs it.
# When some of the nodes are shut down, swarm automatically reschedules tasks to keep the number of replicas.
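# To take a node out of service gracefully (instead of shutting it down) you can drain it;
# swarm will reschedule its replicas on the remaining nodes (node name taken from the listing above):
docker node update --availability drain wrkr1
# And bring it back with
docker node update --availability active wrkr1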
# To rescale the service use this.
docker service scale psight1=7
# Or you can use this command (the previous one is an alias for it).
docker service update --replicas 10 psight1
# !For now (18.03), when a machine is shut down and brought up again, tasks will not be rebalanced onto it.
# To remove the service use
docker service rm psight1
## Rolling updates.
# First of all create a new overlay network for the service.
docker network create -d overlay ps-net
docker network ls
# Then deploy a service in this new network.
docker service create --name psight2 --network ps-net -p 80:80 --replicas 12 nigelpoulton/tu-demo:v1
# To view details about the service run
docker service inspect --pretty psight2
# To update the image of service psight2:
# --update-parallelism - how many instances will be updated at once.
# --update-delay - how long to wait before updating the next batch of instances.
docker service update --image nigelpoulton/tu-demo:v2 --update-parallelism 2 --update-delay 10s psight2
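# The update policy can also be set when the service is first created, so that later
# `docker service update --image ...` calls pick it up automatically; a sketch using the same flags
# (it would replace the create command above):
docker service create --name psight2 --network ps-net -p 80:80 --replicas 12 \
  --update-parallelism 2 --update-delay 10s nigelpoulton/tu-demo:v1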
## Stacks and DABs
# To convert a docker-compose.yml into a DAB file, first run:
docker-compose build
# Then push the built images to Docker Hub.
# After that replace the build sections with the images you've just pushed to Docker Hub and finally run.
docker-compose bundle
# It will create a dab file with declarations of all the services described in the compose file.
# Swarm cannot work with a docker-compose.yml file directly; it can only work with a dab file.
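# For reference, a minimal sketch of what the compose file might look like after the build sections
# are replaced with pushed images (image names and the version key are placeholders, adjust to your setup):
#
# version: "3"
# services:
#   web:
#     image: yourhubuser/psweb:latest
#     ports:
#       - "8080:8080"
#   redis:
#     image: redis:latest
#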
# Here is an example of using a stack: https://github.com/dockersamples/example-voting-app.
## Deep Dive.
# To list images with their full digests run this command
docker image ls --digests
# FS Layers are stored under the /var/lib/docker/{fs_driver_name}
# The storage driver name can be found in
docker system info
# ..
# Storage Driver: overlay2
# ..
# With this command you can view the image history.
docker history redis:latest
# Dockerfile example
# As an example we will use this repo - git clone https://github.com/nigelpoulton/psweb.git
# Here is a Dockerfile example; it should be placed in the code root directory and be named [D|d]ockerfile.
#
# FROM alpine:3.7
#
# LABEL maintainer="sabahtalateh@gmail.com"
#
# RUN apk add --update nodejs nodejs-npm
#
# COPY . /src
#
# WORKDIR /src
#
# RUN npm install
#
# EXPOSE 8080
#
# ENTRYPOINT ["node", "./app.js"]
#
# To build this image: -t - tag the image, . - use the current directory as the build context.
docker image build -t psweb .
# To run a container with the new image use this command: -d - detached mode, -p HOST_PORT:CONTAINER_PORT.
docker container run -d --name web1 -p 8080:8080 psweb
# The build context is the place where your code is located, and it can even be a remote repo.
# In the following command the context is a remote repo.
docker image build -t psweb https://github.com/nigelpoulton/psweb.git
# Docker now supports multi-stage builds; here is a Dockerfile example.
# Repo - https://github.com/nigelpoulton/atsea-sample-shop-app
#
# FROM node:latest AS storefront
# WORKDIR /usr/src/atsea/app/react-app
# COPY react-app .
# RUN npm install
# RUN npm run build
#
# FROM maven:latest AS appserver
# WORKDIR /usr/src/atsea
# COPY pom.xml .
# RUN mvn -B -f pom.xml -s /usr/share/maven/ref/settings-docker.xml dependency:resolve
# COPY . .
# RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package -DskipTests
#
# FROM java:8-jdk-alpine
# RUN adduser -Dh /home/gordon gordon
# WORKDIR /static
# COPY --from=storefront /usr/src/atsea/app/react-app/build/ .
# WORKDIR /app
# COPY --from=appserver /usr/src/atsea/target/AtSea-0.0.1-SNAPSHOT.jar .
# ENTRYPOINT ["java", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
# CMD ["--spring.profiles.active=postgres"]
#
# The main purpose of this file is to build the first two stages on fat OS images with many dependencies
# and then copy only the built artifacts into the small final image in the last stage. Run with a standard command.
docker image build -t multistage .
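# A single stage can also be built on its own with --target, which is handy when debugging one stage;
# the tag name here is just an example:
docker image build --target storefront -t multistage:storefront .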
## Containers.
# You can stop containers and start them again.
# If the container process with PID=1 knows how to handle the termination signal then the container will stop immediately;
# otherwise docker gives the container 10 seconds and then kills it.
docker stop {container_id}
docker start {container_id}
# After starting it again you can exec a command like this: 4f - beginning of the container hash, cat file - command to execute.
docker container exec 4f cat file
# You can use Ctrl + P + Q to leave the container without killing it.
# In Dockerfile you can specify CMD or ENTRYPOINT
# CMD - Runtime arguments override CMD.
# ENTRYPOINT - Runtime arguments will be appended to ENTRYPOINT.
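# For example (a hypothetical image named "demo" whose Dockerfile ends with one of these lines):
#   CMD ["echo", "hello"]          ->  docker container run demo echo bye   # runs "echo bye", prints "bye"
#   ENTRYPOINT ["echo", "hello"]   ->  docker container run demo bye        # runs "echo hello bye", prints "hello bye"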
# To view container port mapping use command
docker port {container_id_or_name}
## Swarm
# To make the current node a swarm manager, leader and Certificate Authority run
docker swarm init
# You can check for swarm mode with
docker system info
# To list swarm nodes use
docker node ls
# To generate a join token run
docker swarm join-token [manager|worker]
# And execute the output on another node. After that you can check the nodes in the swarm with
docker node ls
# To rotate a join token use
docker swarm join-token --rotate [manager|worker]
# After that the join token will be rotated and you cannot use the previous one.
# You can also take a look at the certificate[s]
openssl x509 -in /var/lib/docker/swarm/certificates/swarm-node.crt -text
# In the certificate you can find the following lines:
# Subject: O=z9vu6e6wy37ptcnq2gso64eez, OU=swarm-manager, CN=tkr1c07b7zusahjlrqaptqp83
# O[rganization] = {swarm-cluster-id}, O[rganizational]U[nit] = role of the node, C[ommon]N[ame] = cryptographic node ID.
# All of these values match their analogs in docker system info.
# A join token consists of the following parts:
# SWMTKN-1-5nwcou079lr7tlxb6osd6zejn3c9damo8zm9yzjcjqdtheyi7d-520herice6cajj5s4f10pk8up
# {SWMTKN - tells us that it is a swarm token}-1-{cluster certificate hash}-{hash that determines whether the node joins as a worker or a manager}.
#
# If you want to restart a manager node, to restore an old backup or for some other reason,
# you should first enable autolock on it.
docker swarm init --autolock
# or, if you already have a running node
docker swarm update --autolock=true
# It will give you an unlock key; save it somewhere safe.
# Then you can, for example, restart the node with
systemctl restart docker
# After that you will need to unlock the node with the unlock key
docker swarm unlock
# After it you can use the node as before
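# If you need to see the current unlock key again, run this on an already unlocked manager
docker swarm unlock-key
# The key can also be rotated with
docker swarm unlock-key --rotate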
# To change the certificate expiration period you can use this
docker swarm update --cert-expiry 48h
## Networking
# By default all containers are attached to the bridge network, also known as the docker0 network (ip a s).
# To list available networks use this command
docker network ls
# To show full information about network you can inspect it
docker network inspect bridge
# You can also create a network with a preferred network driver (-d)
docker network create -d bridge golden-gate
# And run a container inside that network
docker run --rm -d --name web -p 8081:80 --network golden-gate nginx
# This is how you can create an overlay network;
# an overlay network is available to every node in the swarm.
docker network create -d overlay overnet
# From a manager node you can then create services like this
docker service create -d --name pinger --replicas 2 --network overnet alpine sleep 1d
# And all of them will be in the same overlay network.
# On manager nodes the overlay network is synced right away; on a worker it is created on demand
# when a container of the service lands on that node. So if you log in to one of the containers on the overlay network
# you can ping any other container on it; to see a container's IP inspect the network or the container.
#
# Docker networking includes a Service Discovery (to locate nodes in swarm) and
# a Load Balancer (to access service from any node in a swarm).
# You can use a service name to ping the service from inside a container; for example, create two services in the swarm
docker service create -d --name ping --network overnet --replicas 4 alpine sleep 1d
docker service create -d --name pong --network overnet --replicas 4 alpine sleep 1d
# Then log in to one of the containers of the ping service and ping the 'pong' service.
docker exec -it {container} sh
ping pong # pinging the pong service
#
# Load Balancing
# The purpose is to be able to access the service from any node in the swarm
docker service create --name web -d --network overnet --replicas 1 -p 8080:80 nginx
# You can inspect service with
docker service inspect web --pretty
# You will see that on every node in the swarm the port is mapped to the service
#...
#Ports:
# PublishedPort = 8080 # On Host
# Protocol = tcp
# TargetPort = 80 # In Container
# PublishMode = ingress
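# Besides the default ingress mode, a port can also be published directly on the host where each task runs,
# bypassing the routing mesh; a sketch using the long --publish syntax (service name web-host is just an example):
docker service create --name web-host -d --network overnet --replicas 1 \
  --publish mode=host,target=80,published=8080 nginx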
## Volumes
# You can create a volume
docker volume create myvol
docker volume ls
docker volume inspect myvol
docker volume rm myvol
# You can specify a volume on container creation; in that case docker will automatically create it under /var/lib/docker/volumes/
docker container run -d -it --name voltest --mount source=ubervol,target=/vol alpine
# ubervol will be created if it does not exist.
# If you stop the container the volume will be kept, and it can be mounted to another container;
# volumes can even be mounted to many containers at once.
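# For example, a second container (the name voltest2 is just an example) can mount the same volume and see the same files:
docker container run -d -it --name voltest2 --mount source=ubervol,target=/vol alpine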
## Secrets
# Secrets can be used only for services (in swarm mode).
# A secret can be created from a file using this command.
# Secrets are kept in an in-memory filesystem inside the container under /run/secrets.
# They are also stored in the swarm's Raft log in encrypted form.
docker secret create ninja-tuna ./secret
docker secret ls
# After that create a service and give it access to the secret
docker service create -d --name secret-service --secret ninja-tuna alpine sleep 1h
# If you then run
docker service inspect secret-service
# You will find a section like this
# ...
# "Secrets": [
#     {
#         "File": {
#             "Name": "ninja-tuna",
#             "UID": "0",
#             "GID": "0",
#             "Mode": 292
#         },
#         "SecretID": "pwta7msvv93tzxbyg31nqs2kd",
#         "SecretName": "ninja-tuna"
#     }
# ],
# After it you can log in to the container and view the secret
cat /run/secrets/ninja-tuna
# Also, you cannot delete a secret that is currently in use with
docker secret rm {secret}
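# To actually remove a secret that is in use, first remove the service using it and then the secret itself:
docker service rm secret-service
docker secret rm ninja-tuna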
# The max secret size is 500 KB.
## Stack
# A stack file is a compose file for services; a minimal sketch is shown below.
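# A minimal stackfile.yml sketch (service and image names are placeholders):
#
# version: "3.3"
# services:
#   web:
#     image: nginx
#     ports:
#       - "8080:80"
#     deploy:
#       replicas: 2
#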
# To deploy the stack file run (app is the application name)
docker stack deploy -c stackfile.yml app
docker stack ls
# List the services of the stack
docker stack services app
# List the stack's tasks (containers) with their statuses
docker stack ps app
# To scale one of the services of the stack use
docker service scale app_service={number_of_instances}
docker service inspect app_service --pretty
# You can also update the replicas value in the stackfile and then run deploy again
docker stack deploy -c stackfile.yml app
## Docker EE
# To upload an image to your private DTR first create a repository on the DTR
# and then retag the image so it matches {DTR_IP_OR_DNS_NAME}:{DTR_USERNAME}/{DTR_REPO_NAME}:{TAG}.
docker image tag {existing_image} {DTR_IP_OR_DNS_NAME}:{DTR_USERNAME}/{DTR_REPO_NAME}:{TAG}
# Then log in to your DTR
docker login {DTR_ADDRESS}
## Routing Mesh
# The HTTP routing mesh is only available in Docker Enterprise Edition, in the Universal Control Plane.
# To enable it go to the Admin section and choose the ports that will be used.
# After that the ucp-interlocks network will be created.
# Then, when creating a service, on the network configuration step specify the in-container port
# and add a hostname-based route; when the requested hostname matches the specified hostname
# the request will be handled by the service.
# !IMPORTANT! Also add the service to the ucp-interlocks network.
# To check that it works you can put any hostnames into the hostname-based routes
# and then point both of these hostnames at the swarm load balancer IP, or any swarm node IP.
## Docker Networking.
# To list networks use
docker network ls
# To get full information about network use
docker network inspect <network_name>
# To list the available network drivers find the corresponding section in
docker system info
## Bridge Network
# A bridge network is a local network, meaning it can be accessed only from the host
# where it was created.
# Create a bridge network.
docker network create -d bridge --subnet 10.0.0.1/24 my-bridge
# You can view linux bridges with
apt install bridge-utils
brctl show
# This will be the output
# bridge name        bridge id           STP enabled  interfaces
# br-59a80ec5b59f    8000.02428d9feba2   no                         <- the bridge we just created
# docker0            8000.024244bba584   no                         <- default docker0 bridge (created on docker installation)
# Bridges are isolated, so containers on one cannot talk to containers on another.
# If you now run two containers on that bridge network
docker run -dt --name c1 --network my-bridge alpine sleep 1d
docker run -dt --name c2 --network my-bridge alpine sleep 1d
# After that brctl show will produce output like this:
# bridge name        bridge id           STP enabled  interfaces
# br-59a80ec5b59f    8000.02428d9feba2   no           veth1e92c77   (the 2 containers are connected to br-59a80ec5b59f - my-bridge)
#                                                     veth5a46d50
# docker0            8000.024244bba584   no
# If you then log in to the c1 container you can ping c2.
# You can even ping c2 by its name, thanks to docker's built-in DNS server:
# when a container is created with a name, that name gets registered in the DNS server.
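# For example, assuming the c1 and c2 containers created above are still running:
docker exec -it c1 ping -c 2 c2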
# To expose a container on a network to the outside world, map ports with the -p flag
docker run -d --name web1 --network my-bridge -p 5000:8080 nigelpoulton/pluralsight-docker-ci
# After that a record will appear in iptables
iptables -L -t nat
# ...
# Chain DOCKER (2 references)
# target   prot  opt  source     destination
# RETURN   all   --   anywhere   anywhere
# RETURN   all   --   anywhere   anywhere
# DNAT     tcp   --   anywhere   anywhere     tcp dpt:5000 to:10.0.0.4:8080 <--- HERE!!!
#
## Overlay Network
# First of all make sure that each node in the swarm has these ports open:
# TCP port 2377 for cluster management communication.
# TCP and UDP port 7946 for communication among nodes.
# UDP port 4789 for overlay network traffic.
# First create a swarm, then create an overlay network.
# Then attach services to that network and you will be able to ping them using the
# IPs assigned to them in that network; you can check their IPs with
docker network inspect <network name>
## MACVLAN
# MACVLAN is a Linux-specific driver.
# This is about connecting containers to existing networks.
# MACVLAN on Linux gives every container its own IP and MAC address.
# On Windows there is l2bridge, which shares a common MAC address between containers
# but gives every container its own IP.
# To use MACVLAN the network card has to be put into promiscuous mode, which most cloud providers do not allow.
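# A sketch of creating a MACVLAN network (subnet, gateway and parent interface depend on your LAN; the name my-macvlan is just an example):
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  my-macvlan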
## IPVLAN
# Used to connect containers to an existing network.
# Gives each container or subinterface its own IP,
# but they all share a common MAC address (the MAC of the parent interface), like the Windows l2bridge.
# First create a docker network that will be part of your existing network
docker network create -d ipvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
--ip-range=192.168.1.0/28 \
-o ipvlan_mode=l2 \
-o parent=eth0 \
existing-network
# --subnet - the subnet this network will be attached to
# --gateway - the subnet's gateway address (the router for local Wi-Fi networks)
# --ip-range - the IP range for containers
# -o ipvlan_mode=l2 - Layer 2 vlan
# -o parent=eth0 - the parent interface this network sits on
# existing-network - the network name
# Containers run in that network will be accessible from other hosts on the same network.
# Not working on AWS at the time of writing (16-May-2018).
## Service discovery
# When a container is created, docker automatically configures name resolution for it so it can be found by name.
# Every container has a small DNS resolver at 127.0.0.11:53; it intercepts all DNS requests from the container and
# forwards them to the Docker host's DNS resolver. The Docker DNS resolver tries to resolve the name query and, if it
# cannot, forwards it to the outside world. Resolution is network scoped, which means names are resolved
# only inside the network (the containers have to be on the same network).
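# You can see the embedded resolver from inside any container on a user-defined network
# (for example the c1 container from the Bridge Network section above):
docker exec -it c1 cat /etc/resolv.conf   # shows "nameserver 127.0.0.11"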
# Every service gets a Virtual IP, or VIP; when the resolver resolves a name to its VIP, the load balancer
# decides which service replica will process the request.
# When a service is created, its tasks (containers) on the hosts are also attached to the ingress network;
# this is done to provide routing and load balancing, which is why you can see the service's containers in the ingress network.
docker network inspect ingress