Docker Essentials

👤 Shivansh Thapliyal

(Photo by Dominik Lückmann)

Contents

  • Docker Installation
  • Docker Compose
  • Docker Commands
  • Using Amazon ECR as repository
  • Volumes

Docker Installation

Amazon Linux 2

  • Update the installed packages and package cache on your instance.

    sudo yum update -y
    
  • Install the most recent Docker Community Edition package.

    sudo amazon-linux-extras install docker
    
  • Start the Docker service.

    sudo service docker start
    
  • Add the ec2-user to the docker group so you can execute Docker commands without using sudo.

    sudo usermod -a -G docker ec2-user
    
  • Log out and log back in again to pick up the new docker group permissions. You can accomplish this by closing your current SSH terminal window and reconnecting to your instance in a new one. Your new SSH session will have the appropriate docker group permissions.

  • Verify that the ec2-user can run Docker commands without sudo.

    docker info
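
  • (Optional) As a quick end-to-end check (this assumes the instance can reach Docker Hub to pull images), run the small hello-world image:

    docker run --rm hello-world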
    

Ubuntu Using Default Repositories

  • Step 1: Update Software Repositories. It’s a good idea to update the local database of software to make sure you’ve got access to the latest revisions.

    sudo apt-get update
    
  • Step 2: Uninstall Old Versions of Docker

    sudo apt-get remove docker docker-engine docker.io
    
  • Step 3: Install Docker

    sudo apt install docker.io
    
  • Step 4: Start and Automate Docker. The Docker service needs to be set up to run at startup. To do so, type in each command followed by Enter:

    sudo systemctl start docker
    sudo systemctl enable docker
    
  • Step 5 (Optional): Check Docker Version

    docker --version
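
  • (Optional) Confirm the Docker daemon is running and enabled at boot (standard systemd checks):

    sudo systemctl is-active docker
    sudo systemctl is-enabled docker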
    

Ubuntu Using Official Repository

  • Step 1: Update Local Database

    sudo apt-get update
    
  • Step 2: Download Dependencies

    sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
    
          - apt-transport-https: Allows the package manager to transfer files and data over https
          - ca-certificates: Allows the system (and web browser) to check security certificates
          - curl: A tool for transferring data
          - software-properties-common: Adds scripts for managing software
    
  • Step 3: Add Docker’s GPG Key

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    
  • Step 4: Install the Docker Repository

    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    
  • Step 5: Update Repositories

    sudo apt-get update
    
  • Step 6: Install Latest Version of Docker

    sudo apt-get install docker-ce
    
  • Step 7 (Optional): Install a Specific Version of Docker. List the available versions of Docker by entering the following in a terminal window:

    apt-cache madison docker-ce
    
    The system returns a list of the available versions.
    
    At this point, type the command:
    
    sudo apt-get install docker-ce=<VERSION>
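
    If you install a specific version, you can optionally hold the package so a routine apt upgrade does not replace it (apt-mark hold is a standard apt command; apt-mark unhold reverses it):

    sudo apt-mark hold docker-ce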
    

Docker Compose

Installation

Run this command to download the current stable release of Docker Compose:

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

To install a different version of Compose, replace 1.29.2 with the version you want to use.

Apply executable permissions to the binary:

sudo chmod +x /usr/local/bin/docker-compose

If the docker-compose command is not found after installation, create a symbolic link in /usr/bin:

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
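
Verify the installation and, as a minimal usage sketch, bring up a one-service stack (assumptions: the service name web, host port 8080, and the nginx image are arbitrary example choices; docker-compose.yml is the default file name Compose reads):

docker-compose --version

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
EOF
docker-compose up -d    # start the service in the background
docker-compose ps       # list the running services
docker-compose down     # stop and remove the containers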

Docker Commands

Containers

Running containers

In foreground mode
docker container run -it -p 80:80 nginx

In foreground mode (the default when -d is not specified), docker run can start the process in the container and attach the console to the process's standard input, output, and standard error. It can even pretend to be a TTY.

In detached mode
docker container run -d -p 80:80 nginx

INFO: By design, containers started in detached mode exit when the root process used to run the container exits, unless you also specify the --rm option. If you use -d with --rm, the container is removed when it exits or when the daemon exits, whichever happens first.
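
For example, a throwaway detached container (the name nginx-tmp and host port 8080 are arbitrary); because of --rm, stopping it also removes it:

docker container run -d --rm -p 8080:80 --name nginx-tmp nginx
docker container stop nginx-tmp
docker container ls -a    # nginx-tmp no longer appears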

Docker run command

  • Docker runs processes in isolated containers.
  • When an operator executes docker run, the container process that runs is isolated: it has its own file system, its own networking, and its own isolated process tree, separate from the host.
  • When we run the docker run command:
    • Docker looks for an image called nginx in the local image cache
    • If it is not found in the cache, Docker looks in the default image repository on Docker Hub
    • Docker pulls it down (the latest version) and stores it in the image cache
    • It then starts the image in a new container

More info in the Docker docs.
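
A small sketch of this behavior (the container name nginx-demo is arbitrary): the first run of a tag pulls the image from Docker Hub, while later runs reuse the cached copy:

docker image ls nginx                       # check the local image cache
docker container run -d --name nginx-demo nginx:latest
docker container rm -f nginx-demo           # clean up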

Naming Containers

docker container run -d -p 80:80 --name nginx-container nginx

List running containers

docker container ls

OR

docker ps

List all containers (Even if not running)

docker container ls -a

Stop container

docker container stop [ID]

Stop all running containers

docker stop $(docker ps -aq)

Remove container (cannot remove running containers; they must be stopped first)

docker container rm [ID]

To remove a running container, use force (-f)

docker container rm -f [ID]

Remove multiple containers

docker container rm [ID] [ID] [ID]

Remove all containers

docker rm $(docker ps -aq)

Get logs (Use name or ID)

docker container logs [NAME]

List processes running in container

docker container top [NAME]
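
Putting the commands above together, a sketch of a typical container lifecycle (the name web and host port 8080 are arbitrary):

docker container run -d -p 8080:80 --name web nginx
docker container ls            # confirm it is running
docker container logs web      # view its output
docker container top web       # list its processes
docker container stop web
docker container rm web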

Images

List pulled/created images

docker image ls

Pull images by name

docker pull [IMAGE_NAME]

Remove image

docker image rm [IMAGE_NAME]

Remove all images

docker rmi $(docker images -a -q)
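
To clean up more selectively than removing everything, Docker also provides prune commands (they only touch unused objects):

docker image prune     # remove dangling (untagged) images
docker system prune    # remove stopped containers, unused networks, dangling images and build cache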

Sample container creation commands

Nginx
docker container run -d -p 80:80 --name nginx nginx (nginx listens on port 80 inside the container by default; -p 80:80 publishes it on the host)
Apache
docker container run -d -p 8080:80 --name apache httpd
MongoDB
docker container run -d -p 27017:27017 --name mongo mongo
MySQL
docker container run -d -p 3306:3306 --name mysql --env MYSQL_ROOT_PASSWORD=123456 mysql
Hue

The Hue Editor is a mature open-source SQL assistant for querying databases and data warehouses.

docker run -it -p 8888:8888 gethue/hue:latest
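
As a quick check of one of the samples above, connect to the MySQL container with the client bundled in the mysql image (give the server a few seconds to initialize; the password matches the MYSQL_ROOT_PASSWORD set above):

docker exec -it mysql mysql -uroot -p123456 -e "SELECT VERSION();"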

Networking

INFO: Networking

Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:

  • bridge: The default network driver.
  • host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly.
  • overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other.
  • macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network.
  • none: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services.

More info in the Docker networking docs.

Published ports

Get port

docker container port [NAME]

List networks

docker network ls

Inspect network

docker network inspect [NETWORK_NAME]
("bridge" is default)

Create network

docker network create [NETWORK_NAME]

Create container on network

docker container run -d --name [NAME] --network [NETWORK_NAME] nginx

Connect existing container to network

docker network connect [NETWORK_NAME] [CONTAINER_NAME]

Disconnect container from network

docker network disconnect [NETWORK_NAME] [CONTAINER_NAME]
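
A short sketch tying these commands together: containers on the same user-defined network can resolve each other by name (the names my-app-net and web are arbitrary; the alpine image is used only because it ships with ping):

docker network create my-app-net
docker container run -d --name web --network my-app-net nginx
docker container run --rm --network my-app-net alpine ping -c 2 web
docker container rm -f web && docker network rm my-app-net    # clean up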

Image tagging & pushing

Upload to dockerhub

docker image push username/image

Login to dockerhub

docker login

Login to ECR

aws ecr get-login-password --region <REGION_ID> | docker login --username AWS --password-stdin <AWS_ACC_NO>.dkr.ecr.<REGION_ID>.amazonaws.com

Tag and push an image to an AWS ECR repo

docker tag hadoop:latest <AWS_ACC_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<REPO_NAME>:hadoop-1.0.0
docker push <AWS_ACC_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<REPO_NAME>:hadoop-1.0.0

Add tag to image

docker tag hadoop:latest hadoop:hadoop-1.0.0

Using Amazon ECR as repository

Install the AWS CLI

Refer to Installing, updating, and uninstalling the AWS CLI version 2 in the AWS documentation.

Example to create docker image

Create a Docker image
touch Dockerfile
FROM ubuntu:18.04

# Update the package index and install Apache
RUN apt-get update && \
 apt-get -y install apache2

# Write the hello world page
RUN echo 'Hello World!' > /var/www/html/index.html

# Configure Apache to run in the foreground
RUN echo '. /etc/apache2/envvars' > /root/run_apache.sh && \
 echo 'mkdir -p /var/run/apache2' >> /root/run_apache.sh && \
 echo 'mkdir -p /var/lock/apache2' >> /root/run_apache.sh && \
 echo '/usr/sbin/apache2 -D FOREGROUND' >> /root/run_apache.sh && \
 chmod 755 /root/run_apache.sh

EXPOSE 80

CMD /root/run_apache.sh
Build the Docker image
docker build -t hello-world .
Verify that the image was created correctly
docker images --filter reference=hello-world
Run the newly built image
docker run -t -i -p 80:80 hello-world
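
From another terminal (the container above runs in the foreground), the Apache container should answer on port 80 with the page written in the Dockerfile:

curl http://localhost:80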

Authenticate to your default registry

To authenticate Docker to an Amazon ECR registry with get-login-password, run the aws ecr get-login-password command.

AWS CLI
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
AWS Tools for Windows PowerShell
(Get-ECRLoginCommand).Password | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com

Create an ECR repository

aws ecr create-repository \
    --repository-name hello-world \
    --image-scanning-configuration scanOnPush=true \
    --region us-east-1

Push an image to Amazon ECR

Tag the image to push to your repository.
docker tag hello-world:latest aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
Push the image.
docker push aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest

Pull an image from Amazon ECR

# After docker login
docker pull aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest

Delete an image

aws ecr batch-delete-image \
      --repository-name hello-world \
      --image-ids imageTag=latest

Delete a repository

aws ecr delete-repository \
      --repository-name hello-world \
      --force

Refer to the AWS docs for more.

Volumes

Volume: makes a special location outside of the container's union file system (UFS); commonly used for databases.
Bind mount: links a container path to a host path.

Check volumes

docker volume ls

Cleanup unused volumes

docker volume prune

Pull down mysql image to test

docker pull mysql

Inspect and see volume

docker image inspect mysql

Run container

docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql

Inspect and see volume in container

docker container inspect mysql
TIP: Mounts
  • You will also see the volume under mounts
  • The container gets its own unique location on the host to store that data
  • Source: xxx is where it lives on the host

Check volumes

docker volume ls

There is no way to tell anonymous volumes apart (for instance, with two mysql containers), so we use named volumes.

Named volumes (add the -v flag; the name here is mysql-db, which could be anything)

docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql

Inspect new named volume

docker volume inspect mysql-db
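
A quick persistence sketch: the named volume outlives the container, so recreating the container with the same -v flag reattaches the existing data:

docker container rm -f mysql
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
docker volume ls    # still the single mysql-db volume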

Bind mounts

  • Cannot be used in a Dockerfile; specified at run time (also uses -v)
  • ... run -v /home/shivansh/path/:/path/container (Mac/Linux)
  • ... run -v //c/Users/user/stuff:/path/container (Windows)

Run nginx and edit the index.html file from the host (the local directory should contain the index.html file)

docker container run -p 80:80 --name nginx -v $(pwd):/usr/share/nginx/html nginx

Go into the container (from another terminal) and check:

docker container exec -it nginx bash
cd /usr/share/nginx/html
ls -al

You can create a file in the container and it will exist on the host as well:

touch test.txt
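
Back on the host (after exiting the container shell), the file created inside the container is visible in the bind-mounted directory:

exit
ls -al test.txt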