Daily Docker Commands

Building custom containers

// Build from the `Dockerfile` in this directory `.` and tag the resulting image as `getting-started`
docker build -t getting-started .

// Run a container in the background (`-d`), publishing port 3000 (`-p`), from the `getting-started` image
docker run -dp 3000:3000 getting-started
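
For reference, a minimal Dockerfile the build above might consume (a sketch, assuming a Node app; adjust to your stack):

# Hypothetical Dockerfile for the getting-started app
FROM node:12-alpine
WORKDIR /app
# Copy the application source into the image
COPY . .
# Install production dependencies
RUN yarn install --production
# The app is assumed to listen on port 3000
CMD ["node", "src/index.js"]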

Stop and remove a container

docker rm -f <container_id>

See container logs

docker logs -f <container-id>

// See docker-compose service logs interleaved
docker-compose logs -f

// See docker-compose logs of a specific app/service
docker-compose logs -f <service_name>

Exec (sh) into a container

// Go inside a container with an interactive shell
docker exec -it <container_name> sh

// Go inside a container and run mysql tool
docker exec -it <container_name> mysql -p <database_name>

// Go inside a container
docker exec -it <container_name> /bin/sh; exit

// Run a command in a container
docker exec <container_name> cat /data.txt

Docker Compose

# First time
docker-compose up

# Subsequently
docker-compose start

# Graceful stop; this preserves state such as networks, volumes and modifications
docker-compose stop

# Tear down and destroy all resources, except external volumes
docker-compose down

# Tear down and also drop named volumes, which the above does not
docker-compose down --volumes

# Use a compose file with a non-default name by passing `-f` or `--file`
docker-compose -f custom-compose-file.yml start
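
For reference, a minimal docker-compose.yml these commands might operate on (a sketch; the service name, image and ports are assumptions):

version: "3.8"
services:
  app:
    image: getting-started
    ports:
      - "3000:3000"
    volumes:
      - todo-db:/etc/todos
volumes:
  todo-db: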

Volumes

// Create volume
docker volume create todo-db

// Inspect volume
docker volume inspect todo-db

// Mount volume `todo-db` to docker container path `/etc/todos`
docker run -dp 3000:3000 -v todo-db:/etc/todos getting-started    
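
To confirm persistence, remove the container and start a new one reusing the same volume; the data should survive (assuming the app writes under `/etc/todos`):

// The data lives in the volume, not the container
docker rm -f <container_id>
docker run -dp 3000:3000 -v todo-db:/etc/todos getting-started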

Environments

Apply placeholders like this:

services:
  database: 
    image: "postgres:${POSTGRES_VERSION}"
    environment:
      DB: mypostgresdb
      USER: "${USER}"

Option 1: Export via the OS before calling the docker command:

export POSTGRES_VERSION=latest
export USER=postgres
docker-compose up

Option 2: Inline in the shell:

POSTGRES_VERSION=latest USER=postgres docker-compose up

Option 3: Via a .env file in the same directory as the docker-compose file

POSTGRES_VERSION=latest
USER=postgres

However, note the priority order for resolving these (highest first); see the example after the list:

  1. Compose file
  2. Shell environment variables
  3. Environment file
  4. Dockerfile
  5. Variable not defined
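
For example, with `POSTGRES_VERSION=latest` in the .env file, an inline shell variable still takes precedence (a sketch; the version number is arbitrary):

# .env contains POSTGRES_VERSION=latest, but the shell value wins, so postgres:13 is used
POSTGRES_VERSION=13 docker-compose up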

Networks

// Create network
docker network create my-network-todo-app

// Create container and attach to network above

docker run -d \
--network my-network-todo-app --network-alias mysql \
-v todo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=todos \
mysql:5.7
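
The app container can then be attached to the same network and reach MySQL via the `mysql` alias as a hostname (a sketch; the `MYSQL_*` variable names are assumptions about what the app reads):

docker run -dp 3000:3000 \
  --network my-network-todo-app \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=root \
  -e MYSQL_PASSWORD=secret \
  -e MYSQL_DB=todos \
  getting-started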

Security

// Scan a container image for vulnerabilities; more: https://docs.docker.com/engine/scan/
// and https://docs.docker.com/docker-hub/vulnerability-scanning/
docker scan <container_tag_name>

Image layering

// See the commands used to build each layer of an image (base layer shown at the bottom)
docker image history <image_name>
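
Layer order matters for caching: copy dependency manifests and install them before copying the rest of the source, so the install layer is reused when only source files change. A sketch for a Node app (file names are assumptions):

FROM node:12-alpine
WORKDIR /app
# Manifests change rarely, so this layer and the install below stay cached
COPY package.json yarn.lock ./
RUN yarn install --production
# Source changes often; only layers from here down are rebuilt
COPY . .
CMD ["node", "src/index.js"]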

Multi stage

For a Java project

A JDK is needed to compile the source code to Java bytecode, but isn't needed in production. Similarly, tools like Maven or Gradle are needed to build the app but not to run it in prod. Therefore, a multi-stage build lets you copy only what you need from earlier stages into a fresh final image.

FROM maven AS build
WORKDIR /app
COPY . .
RUN mvn package

FROM tomcat
COPY --from=build /app/target/file.war /usr/local/tomcat/webapps

Here the first stage, named build, performs the Java build using Maven. In the second stage (starting at FROM tomcat), we copy in files from the build stage. NOTE: the final image consists only of the last stage, in this case FROM tomcat plus our copied file.
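
You can also build up to a specific stage with `--target`, e.g. to get an image that still contains the build tools (the tag names are assumptions):

# Build only the `build` stage
docker build --target build -t getting-started-build .
# Build the full multi-stage image; only the final stage ends up in the tag
docker build -t getting-started .

The same pattern applies to a Node/React build: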

FROM node:12 AS build
WORKDIR /app
COPY package* yarn.lock ./
RUN yarn install
COPY public ./public
COPY src ./src
RUN yarn run build

FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html

Here, the node:12 image is used to perform the build (maximizing layer caching), and the output is then copied into an nginx image.

For more, see the docker/getting-started tutorial.

Misc

Resources

  • Dockerfile best practices
  • Multi-stage builds
  • Best practices for building Node apps

Secrets

Handling secrets

Why you shouldn't use env variables for secret data
