# Drupal Meetup, 27 Oct 2016: Scaling Drupal with Docker
## Presentation summary
Speaker: Djun Kim, Camp Pacific (formerly Dare)
Some gists for tonight's talk:
- Get Docker.app
- Monolithic container
- Docker Compose app
- Sample docker-compose.yml file
- Slides included at bottom
Docker promises us the ability to develop our Drupal sites locally and deploy to identical environments in staging and production.
An ideal setup would let us configure a new local environment in a matter of minutes, and automate testing and deployment.
Architecting applications using containers should let us deploy sites that automatically scale to handle peak traffic.
Djun will discuss his experiences trying to realize these dreams, starting from the simplest examples.
## Usage demo
- Installing Docker
- Gist presentation files?
- Note: to make the Docker command-line tools available, you have to add them to your PATH variable.
Approach one: find an image that has Apache, PHP, MySQL, Memcached, and Drupal pre-installed, ready to run. A variety of these are readily available.
$> docker pull drupal
# wait while the image layers download
$> docker images
# shows us what docker images are locally available
$> docker images | grep drupal
# filter!
$> docker run --name [some name] -p 8080:80 -d drupal
# on our host, we want port 8080 mapped to the container port 80, and run the image drupal. Returns the hash of the container.
$> docker ps
# shows a list of the running containers
(After the run command, you can open the site at localhost:8080.)
Docker works in layers. You can pull these layers individually, or as a single image that has them all pre-assembled.
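As a sketch of how layers come about (this Dockerfile is hypothetical, not the one behind the official drupal image): each instruction in a Dockerfile produces one cached layer, and pulling an image downloads the stack of layers it was built from.

```dockerfile
# Hypothetical sketch -- each instruction below creates one image layer.
# The base image's own layers are pulled from Docker Hub.
FROM php:5.6-apache
# This RUN adds a layer containing the pdo_mysql PHP extension.
RUN docker-php-ext-install pdo_mysql
# This COPY adds a layer containing the site code.
COPY . /var/www/html
```

Because layers are cached, rebuilding after a change to the site code reuses the base and extension layers and only rebuilds the final one.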
The example goes to localhost:8080 and runs the Drupal installation process. Now we want to take the fully realized database and export it. The following command creates a new image called 'example_d8':
$> docker commit [some name] example_d8
$> docker images
# should see new image called 'example_d8'
Now we'll convert our image into a tarball for dissemination to our hosting provider:
$> docker save example_d8 > example_d8.tar
$> gzip example_d8.tar
Assume we take the steps necessary to copy the gzipped tarball up to DigitalOcean (or some other cloud Docker service provider).
SSH into the remote cloud provider, then unzip and load the image:
$> gzip -d example_d8.tar.gz
$> docker load -i example_d8.tar
$> docker images
# should show our new image
$> docker ps
# if we need to bring down a previously running container...
$> docker kill my-great-site
# kills the running container
$> docker run --name my-great-site -p 80:80 -d example_d8
Note: local Docker environments on macOS have problems. Docker currently works much better on Linux, to the point that the speaker runs his Docker images in a virtualized Linux instance.
## Pros and cons
If you want to do significant customization, like adding code, you must go into the Docker container and build a new image from within it; there is no bridging. A monolithic image can also end up with too many services in one container.
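The usual alternative to committing changes from inside a running container is a Dockerfile that extends the base image, so custom code is baked in at build time. A minimal sketch (the module path and image name here are assumptions, not from the talk):

```dockerfile
# Hypothetical Dockerfile extending the official drupal image.
FROM drupal
# Bake a custom module into the image instead of editing a running container.
COPY modules/custom /var/www/html/modules/custom
```

Built with something like `docker build -t example_d8_custom .`, the result is an ordinary image that can be tagged, saved, and run like the ones above.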
## Multi-container apps
Idea: split the functionality of the app into well-defined services, and define how they interact. For example: web server/PHP, database server, application cache. The web server and PHP need to share a filesystem.
It is possible to do this with plain Docker, by linking containers and using shared volumes, but it's not convenient.
We do get docker-compose, which comes with Docker. Example of this:
# http://docker4drupal.org
$> git clone
See the docker-compose.yml file, and look at the services section. We pass in environment variables and optionally run some shell scripts. Note that passing credentials this way is not very secure.
# the line:
/docker-runtime/mariadb:/var/lib/mysql
# means the host's docker-runtime/mariadb directory is mounted and shared inside the container at /var/lib/mysql
In some subsequent sections of the config file, note the volumes_from parameter (under nginx). It delineates volumes used in other sections.
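A pared-down docker-compose.yml in the spirit of docker4drupal might look like the following; the image tags, ports, and credentials are placeholders, not the project's actual file:

```yaml
# Hypothetical, minimal docker-compose.yml (compose v2 syntax).
version: "2"
services:
  mariadb:
    image: mariadb
    environment:
      MYSQL_DATABASE: drupal
      MYSQL_USER: drupal
      MYSQL_PASSWORD: drupal
    volumes:
      - ./docker-runtime/mariadb:/var/lib/mysql  # db files persist on the host
  php:
    image: drupal:fpm
    volumes:
      - ./html:/var/www/html
  nginx:
    image: nginx
    ports:
      - "8000:80"
    volumes_from:
      - php  # reuse the php container's volumes so nginx can serve the same files
```

The volumes_from line is what lets nginx and PHP share a filesystem, which is the requirement noted earlier for splitting the web tier into two services.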
Now we see a series of docker-compose commands, analogous to the docker commands from earlier:
$> docker-compose ps
$> docker-compose down # stops the existing containers
$> docker-compose up -d # reads the .yml config file and spins up the services
Where does Drupal get its credentials from?
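One common answer (a sketch; the variable names are assumptions, not docker4drupal's actual ones): the compose file injects database credentials into the container as environment variables, and settings.php reads them with getenv().

```php
<?php
// Hypothetical snippet for settings.php: read DB credentials from the
// environment variables that docker-compose passes into the container.
// The fallbacks after ?: are placeholder defaults.
$databases['default']['default'] = array(
  'driver'   => 'mysql',
  'host'     => getenv('MYSQL_HOST') ?: 'mariadb',
  'database' => getenv('MYSQL_DATABASE') ?: 'drupal',
  'username' => getenv('MYSQL_USER') ?: 'drupal',
  'password' => getenv('MYSQL_PASSWORD') ?: 'drupal',
);
```

This is why the credentials in the compose file's environment section have to match what the Drupal container expects.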
Note: the Kitematic GUI now supports private Docker repositories.
- Alpine: an extremely pared-down Linux distribution designed for building containers.
- A docker image is like a template. The running instances are 'containers'.
## A more realistic deployment
We've already split the app into multiple service containers.
To get an application that scales, I want a web head that talks to a database, something I can think of as a cluster, and so on. What I'd really like, the one piece not provided by Amazon, is a container that I can throw up into Amazon that says "here's an instance, spin up as many as needed"; "here's my cluster, if an instance's load exceeds 80% for 10 minutes, spin up another instance". This is the dream. Apparently it is quite simple to get from where we've been to that.
We think of our containers as convenient bits of functionality that have clean and clear apis.
How do you deploy upgrades in an environment like this? The mechanism described requires downtime.
[it's very late, and I'm fading. I'll have to revisit this later]
## Other