Forked from wsargent/
Created November 19, 2013

Docker Cheat Sheet


Why Should I Care (For Developers)

I don't care, I just want a dev environment


Use Homebrew.

ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"


Install VirtualBox and Vagrant using Brew Cask:

brew tap phinze/homebrew-cask
brew install brew-cask
brew cask install virtualbox
brew cask install vagrant

We use the pre-built vagrant box:

mkdir mydockerbox
cd mydockerbox
vagrant init docker   
vagrant up
vagrant ssh

Inside the Vagrant VM, install Docker:

sudo su -
sh -c "curl https://get.docker.io/gpg | apt-key add -"
sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
apt-get update
apt-get install -y lxc-docker


Then start a container from the ubuntu image:

docker run -i -t ubuntu /bin/bash

That's it, you have a running Docker container.


Containers

Your basic isolated Docker process. Think of it as a chroot on steroids.

Some common misconceptions it's worth correcting:

  • Containers are not transient. docker run doesn't do what you think.
  • Containers are not limited to running a single command or process. It's just encouraged.

If you want to interact with a container, docker ps -a to see the list, then docker start and docker attach to get in.
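A minimal sketch of that workflow (the container ID is illustrative; these commands assume a running Docker daemon):

```shell
docker ps -a                 # list all containers, including stopped ones
docker start 5ad9f1d9d6dc    # restart a stopped container
docker attach 5ad9f1d9d6dc   # reattach your terminal to it
```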



Import / Export
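The commands for this heading appear to have been lost in extraction; assuming the 2013-era CLI, exporting a container's filesystem looks like this:

```shell
# Flatten a container's entire filesystem to a tarball
# (CONTAINER_ID is a placeholder; requires a running Docker daemon)
docker export CONTAINER_ID > container.tar
```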


Images

Images are just templates for Docker containers.



Import / Export
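As in the containers section, the commands here were lost; a plausible counterpart, assuming the 2013-era CLI, is importing a tarball as a new image:

```shell
# Create an image from a tarball; "myuser/myimage" is an illustrative name
# (requires a running Docker daemon)
cat container.tar | docker import - myuser/myimage
```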


Dockerfile

Best to look at the Dockerfile documentation and the Dockerfile best practices for more details.


Repository

A repository is a hosted collection of tagged images that together create the file system for a container.


Registry

A registry is a host -- a server that stores repositories and provides an HTTP API for managing the uploading and downloading of repositories. Docker.io hosts its own index to a central registry which contains a large number of repositories.


Layers

The filesystem in Docker is based on layers. They're kind of like git commits or changesets for filesystems.


Volumes

Docker volumes are free-floating filesystems. They don't have to be connected to a particular container.

You can mount them in several Docker containers at once, using docker run -volumes-from.

See advanced volumes for more details.
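A sketch of sharing a volume between containers with the 2013-era flags (names are illustrative; requires a running Docker daemon):

```shell
# Create a container that owns a /data volume
docker run -name datastore -v /data busybox true

# Mount that same volume into another container
docker run -volumes-from datastore -i -t ubuntu /bin/bash
```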


Links

Links are how Docker containers talk to each other. Linking into Redis is the only real example at the moment.

If you have a Docker container with the name CONTAINER (specified by docker run -name CONTAINER) and its Dockerfile has an exposed port, for example:

EXPOSE 1337
Then if we create another container called LINKED like so:

docker run -d -link CONTAINER:ALIAS -name LINKED user/wordpress

Then the exposed ports and aliases of CONTAINER will show up in LINKED with environment variables such as:

$ALIAS_PORT_1337_TCP_PORT
$ALIAS_PORT_1337_TCP_ADDR
And you can connect to it that way.
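For instance, assuming CONTAINER exposes Redis on 6379 and was linked with the alias REDIS, LINKED could reach it like so (env var names follow the 2013 link convention; this assumes redis-cli is installed):

```shell
# Connect to the linked Redis using the injected address and port
redis-cli -h $REDIS_PORT_6379_TCP_ADDR -p $REDIS_PORT_6379_TCP_PORT
```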

Building a dev environment

You want to build your own development environment from scratch.

I have my own example, but here's the thinking that went into it.

Best practices

Build from source

The only sane way to put together a dev environment is to use raw Dockerfile and a private repository. Pull from the central docker registry only if you must, and keep everything local.

Chef recipes are slow

You might think to yourself, "self, I don't feel like reinventing the wheel. Let's just use chef recipes for everything."

The problem is that creating new containers is something you'll do a lot. Every time you create a container, seconds count and minutes are totally unacceptable. It turns out that calling apt-get update is a great way to watch nothing happen for a while.

Use raw Dockerfile

The way that Docker deals with this is to use a versioned file system, which identifies commands it can run from cache and pulls out the appropriate version. You want to keep the cache happy. You want to put all the mutable stuff at the very end of the Dockerfile, so you can leverage cache as much as possible. Chef recipes are a black box to Docker.

The way this breaks down is:

  1. Cache wins.
  2. Chef, ansible, etc, does not use cache.
  3. Raw Dockerfile uses cache.
  4. Raw Dockerfile wins.
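A sketch of cache-friendly ordering (image, repository URL, and package names are illustrative): stable instructions go first, frequently-changing steps last.

```dockerfile
FROM ubuntu
# Stable setup: these layers stay cached across rebuilds
RUN apt-get update
RUN apt-get install -y build-essential curl git
# Mutable stuff last: only these layers get rebuilt when your app changes
RUN git clone https://example.com/yourapp.git /app
RUN /app/setup.sh
```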

There's another way to leverage Docker, and that's to use an image that doesn't start off from ubuntu or basebox. You can use your own base image.



Install an internal Docker registry

Install an internal registry (the fast way) and run it as a daemon:

docker run -name internal_registry -d -p 5000:5000 samalba/docker-registry

Alias server to localhost:

echo "127.0.0.1 internal_registry" >> /etc/hosts

Check internal_registry exists and is running on port 5000:

apt-get install -y curl
curl --get --verbose http://internal_registry:5000/v1/_ping

Install Shipyard

Shipyard is a web application that provides an easy to use interface for seeing what Docker is doing.

Open up a port in your Vagrantfile:

config.vm.network :forwarded_port, :host => 8005, :guest => 8005

Install Shipyard from the central index:

SHIPYARD=$(docker run \
    -name shipyard \
    -p 8005:8000 \
    -d \
    shipyard/shipyard)
You will also need to replace /etc/init/docker.conf with the following:

description "Docker daemon"

start on filesystem and started lxc-net
stop on runlevel [!2345]


script
        /usr/bin/docker -d -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
end script

Then reboot the VM.

Once the server has rebooted and you've waited for a bit, you should have shipyard up. The credentials are "shipyard/admin".

  • Go to http://localhost:8005/hosts/ to see Shipyard's hosts.
  • In the Vagrant VM, run ifconfig eth0 and look for "inet addr:" -- enter that IP address as the host.

Create base image

  1. Create a Dockerfile with initialization code such as apt-get update / apt-get install etc: this is your base.
  2. Build your base image with docker build -t internal_registry:5000/base ., then push it to the internal registry with docker push internal_registry:5000/base

Build from your base image

Make all of your other Dockerfiles pull from "base" instead of ubuntu.

Keep playing around until you have your images working.

Push your images

Push all of your images into the internal registry.

Save off your registry

If you need to blow away your Vagrant VM or set someone else up, it's much faster to do it with all the images still intact:

docker export internal_registry > internal_registry.tar
gzip internal_registry.tar
mv internal_registry.tar.gz /vagrant


  • The Dockerfile ADD instruction blows away the cache, so don't use it (bug, possibly fixed).
  • There's a limit to the number of layers you can have, pack your apt-get onto a single line.
  • Keep common instructions at the top of the Dockerfile to leverage the cache as long as possible.
  • Use tags when building (Always pass the -t option to docker build).
  • Never map the public port in a Dockerfile.

Exposing Services

If you are running a bunch of services in Docker and want to expose them through VirtualBox to the host OS, you need to do something like this in your Vagrantfile:

(49000..49900).each do |port|
  config.vm.network :forwarded_port, :host => port, :guest => port
end

Let's start up Redis:

docker pull johncosta/redis
docker run -p 6379 -d johncosta/redis

Then find the port:

docker ps
docker port <redis_container_id> 6379

Then connect to the 49xxx port that Virtualbox exposes.


Delete old containers:

docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs docker rm
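Piping docker ps through grep and awk is plain text filtering, so the selection logic can be sanity-checked without a Docker daemon by feeding it fake output (the container IDs below are made up):

```shell
# Simulated `docker ps -a` output
fake_ps() {
  printf '%s\n' \
    'CONTAINER ID  IMAGE         COMMAND    CREATED      STATUS' \
    'a1b2c3d4e5f6  ubuntu:12.04  /bin/bash  3 weeks ago  Exit 0' \
    '0f9e8d7c6b5a  ubuntu:12.04  /bin/bash  2 hours ago  Up'
}

# Same filter as the cleanup one-liner: keep rows mentioning "weeks ago",
# print only the first column (the container ID)
fake_ps | grep 'weeks ago' | awk '{print $1}'
# prints: a1b2c3d4e5f6
```

Swap fake_ps for the real docker ps -a and the surviving IDs are what xargs docker rm would delete.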


Delete all containers:

docker rm `docker ps -a -q`

Running from an existing volume

docker run -i -t -volumes-from 5ad9f1d9d6dc mytag /bin/bash

