The idea of providing a dedicated, encapsulated development environment that is basically the same for all of your team's members has been around for a while. For quite some time Vagrant seemed to be the state-of-the-art solution for such needs. In combination with Puppet, for example, one could easily set up a fully working VM with all dependencies installed and configured. But Vagrant is kind of a monstrosity. There are so many things that can go wrong while provisioning a Vagrant machine. Also, if you happen to have a lot of projects using Vagrant, the VMs are probably eating a lot of your precious SSD disk space. The small containerized images built with Docker seemed to be an interesting alternative, but they were rather hard to maintain in the past.
Fear no more! docker-compose has you covered. But there are still some hurdles to jump. This article aims to be a rather comprehensive guide to setting up a development environment using docker and docker-compose from the ground up, and to provide best practices and solutions to common errors. It covers the following steps:
- Installing VirtualBox
- Installing boot2docker
- Configuring boot2docker
- Installing docker-compose
- Preparing the containers
- Distributing your containers
- Using docker-compose
- Performance
You need to install VirtualBox first (if you're switching from Vagrant you might already have it installed :) ). This is rather simple: it has an installer package which you can grab here.
As docker natively runs on Linux only, the official OS X solution depends on a tool dubbed boot2docker. It uses a small-footprint Linux image (based on Tiny Core Linux) that runs in a VM and hosts the docker daemon. It also provides some VM helper commands as well as the docker client, which is installed on your system. The installation is also very easy, as the docker folks have created a nice OS X installer package. Besides providing installation instructions, this page briefly covers some key concepts of docker, which I recommend reading as well if you have no experience with docker yet. Then download and install the package.
Boot2docker comes with a command-line helper named boot2docker. It sports a lot of commands, so type boot2docker help
to see all of them (the list below shows only the commands I need on a regular basis):
```
up|start|boot       Start VM from any states.
ssh [ssh-command]   Login to VM via SSH.
down|stop|halt      Gracefully shutdown the VM.
restart             Gracefully reboot the VM.
ip                  Display the IP address of the VM's Host-only network.
status              Display current state of VM.
upgrade             Upgrade the Boot2Docker ISO image (restart if running).
```
To initialize, type boot2docker init. This initializes a new boot2docker VM (maybe the installer already did that for you). boot2docker up starts the VM for the first time. If everything went smoothly you'll get console output looking similar to this:
```
Started.
...
To connect the Docker client to the Docker daemon, please set:
    export DOCKER_TLS_VERIFY=1
    export DOCKER_HOST=tcp://192.168.59.103:2376
    export DOCKER_CERT_PATH=/Users/docker/.boot2docker/certs/boot2docker-vm
```
That part is essential! You can safely assume that the IP of this machine won't change (and if it does, you can get it with boot2docker ip). I set these variables in my .bashrc (resp. .zshrc) so I don't have to set them again after rebooting. Export the variables or set them in your shell rc file and source it to apply the settings. Then type docker ps.
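As mentioned, I keep these in my shell rc file. It looks roughly like this (resolving the IP dynamically via boot2docker ip is my own tweak, not what the installer prints; it keeps the variable fresh if the VM's address ever changes):

```shell
# ~/.bashrc (or ~/.zshrc); assumes boot2docker is on the PATH
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2376
export DOCKER_CERT_PATH=$HOME/.boot2docker/certs/boot2docker-vm
```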
If you see only this line:

```
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```

everything is working! Go on to the next step. If you get some weird errors about docker not finding the docker daemon, try the following things (in that order):
- check if the boot2docker VM really is running (duh): boot2docker status should return running
- check if the IP address you exported in your shell rc file is the same as the one returned by boot2docker ip, then re-source your shell rc or just open a new tab
- if you're getting weird SSL errors, check your python and openssl installations. Maybe reinstall both of them. This may be of help for you as well.
- in some cases docker worries about the hostname. You can resolve this issue by putting the following line into your /etc/hosts and then replacing the IP in your shell rc with the boot2docker hostname (export DOCKER_HOST=tcp://boot2docker:2376).

/etc/hosts:

```
...
192.168.59.103 boot2docker
...
```

Replace 192.168.59.103 with the IP returned by boot2docker ip.
If, at some point, you need to shut down the boot2docker VM, you can do so with boot2docker down.
docker-compose (previously named fig) is a tool that helps you create and maintain your docker dev environments. It manages all the docker commands for you, like linking containers or mounting volumes (much like puppet does for Vagrant, but much simpler). If you have python and pip set up on your system, you can install docker-compose like this:
pip install docker-compose
or
sudo pip install -U docker-compose
depending on your current pip preferences.
If that doesn't work for you take a look at the installation instructions provided by the docker team.
It does not require any additional setup (except the docker-compose.yml file, which we'll cover later).
(If you already have boot2docker-ready™, preconfigured containers at your disposal, you can confidently skip this step and run them with docker-compose. You'll find a bunch of them here.)
This actually was the hardest part for me to get right. We need docker images that create containers that work well in the boot2docker environment.
What did you just say? They're docker containers so they should work everywhere, right?
Kinda.
The boot2docker environment is a very special place. Let's talk about mounted volumes. It is possible to mount local volumes (meaning directories on your Mac) into a docker container. But there is a lot of magic going on behind the scenes to make this seemingly simple task work.
You have to remember that the docker daemon is running in a VM inside VirtualBox. So if you mount a volume using docker's -v flag, you're mounting a volume from the VM into the container. The official guide states:
Note: If you are using Boot2Docker, your Docker daemon only has limited access to your OSX/Windows filesystem. Boot2Docker tries to auto-share your /Users (OSX) or C:\Users (Windows) directory - and so you can mount files or directories using docker run -v /Users/:/ ... (OSX) or docker run -v /c/Users/:/<container path ... (Windows). All other paths come from the Boot2Docker virtual machine's filesystem.
So as long as you're mounting folders that are inside your /Users directory you should be good to go. But with all that mounting magic we're often running into permission problems when trying to persist data across multiple container runs.
In order for your service to run correctly, the directories and files its processes access need to have the correct permissions. Since we're mounting the file system through the VirtualBox machine, we need to take a look at the VM's users. The /Users directory of the host machine is mounted into the /Users directory of the VM. The owning user there is a user named docker with the uid 1000. To clarify how that helps us configure the container images, let's go through an example.
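You can verify this yourself from inside the VM (the numeric output is from my setup and may differ on yours):

```shell
# Show the numeric owner (uid/gid) of the shared /Users mount inside the VM
boot2docker ssh "ls -ldn /Users"
# on my machine this prints an entry owned by uid 1000
```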
To prepare the MySQL container for use with boot2docker, I forked the Dockerfile used in the tutum MySQL image. If you have no experience creating a Dockerfile, I highly recommend going through the official guide written by the Docker folks.
The container pretty much worked fine from the beginning, except there were problems with data persistence due to the mounted volumes. MySQL needs write access to the (default) /var/lib/mysql dir. I looked into the container, and the directory was mounted while being owned by the user with the id 1000 (the docker user inside the VM). To give the mysql process (which runs under the user mysql) write access to this dir, I had to change the uid of the mysql user to 1000:
```
usermod -u 1000 mysql
```
Also, MySQL later complains about not being able to create a lock file (which most likely contains the pid of the process). I solved this via:
```
chown -R mysql:root /var/run/mysqld/
```
I added these two lines to the forked Dockerfile, and everything went smoothly from then on.
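For reference, the additions amount to something like this (a sketch only; the surrounding Dockerfile comes from the tutum image):

```dockerfile
# Match the mysql user's uid to the docker user (uid 1000) that owns
# the volumes mounted from the boot2docker VM, and make the pid/lock
# directory writable for the mysql process
RUN usermod -u 1000 mysql \
 && chown -R mysql:root /var/run/mysqld/
```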
The following task is not particularly boot2docker-related, but it is helpful when running a PHP container using php-fpm together with an external database container.
Accessing Docker environment variables in PHP
When linking containers, or when specifically defining them, Docker creates environment variables inside the created container. This is useful to get the IP, the port and some other important information about linked containers (e.g. databases).
Unfortunately, when using php-fpm these environment variables are not automatically propagated to the PHP process. To make this even harder, the only option to make PHP aware of the env vars seems to be to explicitly define them inside the fpm/pool.d/www.conf file.
Luckily, John Dorean wrote a script to solve this problem. I incorporated his script into the run.sh that starts the php-fpm process and the webserver. It basically searches for the Docker environment variables of linked containers (which almost certainly contain the expression _PORT_) and writes them into the www.conf file. Then we're able to use the variables inside our PHP scripts like this:
```php
$_SERVER['CONTAINER_DB_1_PORT_3306_TCP_ADDR'] // evaluates to the IP address of the linked container
```
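The propagation step itself can be sketched as a small shell function (the function name is a hypothetical helper of mine, not taken from John Dorean's actual script):

```shell
# append_docker_env: copy every Docker link variable (name contains
# "_PORT_") into a php-fpm pool config as an env[...] entry, so PHP
# can later read it from $_SERVER.
append_docker_env() {
  conf="$1"
  env | grep '_PORT_' | while IFS='=' read -r name value; do
    printf 'env[%s] = %s\n' "$name" "$value" >> "$conf"
  done
}
```

In a run.sh you would call it as append_docker_env /etc/php5/fpm/pool.d/www.conf before starting php-fpm (the exact config path depends on your base image).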
So if you want to use images inside the boot2docker environment, some extra problems have to be taken care of. I had to do similar things for the apache-php5 and nginx-php5 images.
When it comes to distributing your self-made containers, the docker hub is the go-to spot. You can push your images to it via docker push, or set up an automated build that gets your source files from a public repository. I won't go into this any deeper, as container distribution is not part of this tutorial. You can get more information here, or browse through my created containers here.
docker-compose is a handy tool to orchestrate container creation, removal, starting, stopping and linking. All you need is a docker-compose.yml configuration file inside your project dir.
Let's dive right in and look at an example - the config file I use for PHP projects that need to use a MySQL db.
```yaml
web:
  image: crollalowis/php5      # the image we want to use when running the container
  ports:
    - "80:80"                  # maps port 80 inside the container to port 80 of the boot2docker VM
  links:
    - db                       # links to the container 'db' that is defined below
  volumes:                     # mounts some outside directories into the container
    - .:/var/www
    - ./compose/config/nginx:/etc/nginx/sites-enabled:rw
    - ./compose/logs/nginx:/var/log/nginx:rw
db:
  image: crollalowis/mysql
  volumes:
    - ./compose/data/mysql:/var/lib/mysql
    - ./sql:/root/sql-import
  ports:
    - "3306:3306"
  environment:
    MYSQL_PASS: 123
```
Almost all of the configuration variables map to a flag of the docker run command. In this case we specify two containers to run: a container that runs nginx and php5 (web) and a container that runs the mysql process (db). The names of the containers are arbitrary, and you can define them as you like. If you know your way around docker run, the file is pretty much self-explanatory. There are some things I want to point out:
- I always mount the nginx/apache sites-enabled dir into some subfolder of my project dir, to always have the config files at hand
- Same with the webserver's logfiles
- I keep everything (temp, persistent and config files) related to docker-compose in separate subfolders of a dir called 'compose'
- I open the MySQL port 3306 to the boot2docker VM to access it later via a GUI or another client (like SequelPro)
- The MySQL password can be defined via an environment variable thanks to the work of tutum
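With port 3306 mapped onto the VM, a quick connection test from the Mac looks like this (the admin username is the tutum image's default as far as I remember, so double-check it against the image's docs; the password is the MYSQL_PASS from the config above):

```shell
# Connect from OS X through the port mapped on the boot2docker VM
mysql -h $(boot2docker ip) -P 3306 -u admin -p123
```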
If you use an apache or nginx webserver with vhosts, you have to map the (dev) domain from your configuration file to the boot2docker IP in your /etc/hosts (because port 80 of the container is mapped to port 80 of the boot2docker VM).
For example (/etc/hosts):

```
192.168.59.103 mycoolapp.dev
```
To access the linked databases (or other services, for that matter), Docker automatically creates environment variables. You only have to know the container name (which is specified in the docker-compose.yml) and the port the service runs on. The important variables are:
- [CONTAINER_NAME]_1_PORT_[SERVICE_PORT]_TCP_ADDR: resolves to the (internal) IP address of the service container inside the virtual docker network, e.g. DB_1_PORT_3306_TCP_ADDR (see the config file above)
- [CONTAINER_NAME]_1_PORT_[SERVICE_PORT]_TCP_PORT: resolves to the service's port and is pretty redundant
You also get all the environment variables from the linked container, prefixed with [CONTAINER_NAME]_1_ENV_, for example DB_1_ENV_MYSQL_PASS.
A simpler way to access the database IP
Docker also creates container-name mappings in the /etc/hosts of the container. So when configuring your database connection, you can just put the container name into the host field (e.g. db).
This is the part you have to read if you're done building and configuring or if you were given some code that is already equipped with a docker-compose.yml.
To start the containers defined in the docker-compose.yml just run
docker-compose up
This runs the defined tasks on the containers and shows their stdout output. I recommend using docker-compose up -d to run it in detached mode (it goes into the background).
If you have edited your /etc/hosts
(see above) to reflect the webserver vhost configuration you should be able to go to the desired domain and test if your app is running.
To stop the containers, run docker-compose stop; to remove them, run docker-compose rm.
To view the status of the containers (running, stopped, etc.), run docker-compose ps. You'll see some more information about the containers as well as their assigned names.
If you want to execute commands in a running container (much like vagrant ssh), you can do it with docker exec -it [container_name] bash, which opens a bash shell inside the running container.
The standard file-mounting technique used by the boot2docker VM is kind of slow. So if you experience performance issues, you could try an alternative image (e.g. this one). Warning: this is advanced stuff!