There are many approaches to reusing a common preset of Docker services across multiple environments, such as production and local. Sharing one definition keeps environments built from the same code-base as consistent as possible, and also makes the docker compose options easier to manage.
I found a project on GitHub called serversideup/spin. It takes an approach based on Docker Compose override files to change some properties of common services for different environments.
After reading through their documentation, I realized that there are a few real-life cases that this project cannot implement (or would find difficult to achieve).
That's why I decided to talk to the project owner and create this gist, for testing and discussion purposes.
If you would like to support my projects, buy me a coffee 😉.
I really appreciate your love and support.
Let's assume that we need to build the following environments from the same code-base, using the popular framework Laravel. The general requirement is that they should be as consistent as possible and easy to operate.

Common requirements:
- Use a redis instance as a cache service
- Limit the size of log files for all containers
- Automatically restart containers when they fail
- Simple docker launch for different environments
- Friendly with the docker compose CLI

Production environment:
- One web backend that can scale to many instances
- Use traefik as the gateway, supporting HTTP and HTTPS
- No extra services such as mysql or phpmyadmin attached

Local environment:
- Two web backend instances to test 2 different branches of code
- The first web backend runs on port 8001
- The second web backend runs on port 8002
- Additional mysql and phpmyadmin services for local development
- phpmyadmin runs on port 8080
- No traefik gateway needed, as each backend runs on its own port

Debug environment:
- A web backend instance, running on port 80
- A blackfire service for profiling the web backend
- Additional mysql and phpmyadmin services for local development
- phpmyadmin runs on port 8080
- No traefik gateway needed, as each backend runs on its own port
This setup should follow the official Docker guidelines as much as possible.
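To give a concrete picture of the shared constraints (log size limits, automatic restarts, a redis cache), the common `services.yml` could anchor them roughly like this. This is only a sketch: the `x-common` extension-field name, image tags, and build path are my assumptions, not the gist's actual file.

```yaml
# services.yml (sketch) — options shared by every environment
x-common: &common
  restart: unless-stopped        # automatically restart failed containers
  logging:
    driver: json-file
    options:
      max-size: "10m"            # limit the size of each log file
      max-file: "3"

services:
  redis:
    <<: *common
    image: redis:alpine          # cache service used by all environments

  backend:
    <<: *common
    build: ./webroot             # overridden per environment
```

The YAML anchor lets every service inherit the same logging and restart policy, so an environment-specific override file only needs to state what differs.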
I was able to implement the above requirements in a simple way with the following concept structure:
./
┝━ services.yml
┝━ docker-compose.prod.yml
┝━ docker-compose.local.yml
┝━ docker-compose.debug.yml
┝━ webroot/
┝━ dev-1/
┝━ dev-2/
┝━ debug/
└─ dcom.sh
- `services.yml` contains definitions for all services
- `docker-compose.prod.yml` contains services for production
- `docker-compose.local.yml` contains services for local
- `docker-compose.debug.yml` contains services for debug
- `webroot/` contains source code for production
- `dev-1/` and `dev-2/` contain source code for local
- `debug/` contains source code for debug
- `dcom.sh` is a wrapper script for docker compose
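As one illustration of how an override file can diverge from the base, a `docker-compose.local.yml` for the two-branch setup might look roughly like this. The ports come from the use case and the build paths from the tree above, but the service names, images, and environment values are assumptions for the sketch, not the gist's real file.

```yaml
# docker-compose.local.yml (sketch) — two backends for two code branches
services:
  backend-1:
    build: ./dev-1
    ports:
      - "8001:80"   # first branch on port 8001
  backend-2:
    build: ./dev-2
    ports:
      - "8002:80"   # second branch on port 8002
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder, local use only
  phpmyadmin:
    image: phpmyadmin
    ports:
      - "8080:80"   # phpmyadmin on port 8080
```

Because each backend publishes its own host port, no traefik gateway is needed in this environment.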
As a result, you can easily run a single command to start all the necessary services for each predefined environment with docker compose.
For example, to run services for the production environment:
docker-compose -f docker-compose.prod.yml up -d
Very simple, right?
You don't need any other shell script, nor do you need to remember complicated command syntax to run it.
You might be wondering: "What is the shell script `dcom.sh` for?", am I right?
This is a wrapper to shorten your command line. `dcom` is short for "docker-compose".
We use it like this:
./dcom.sh env_name [arguments]
For example, to run services for the production environment:
./dcom.sh prod up -d
The parameters for `dcom.sh` are fully compatible with the `docker-compose` interface.
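For reference, a minimal version of such a wrapper could look like the script below. This is my own sketch of the idea, not necessarily the script in this gist.

```shell
#!/bin/sh
# dcom.sh (sketch) — forward "./dcom.sh env_name [arguments]" to docker-compose
set -eu

# Map an environment name (prod, local, debug) to its compose file name.
compose_file() {
  printf 'docker-compose.%s.yml' "$1"
}

if [ "$#" -ge 1 ]; then
  env_name="$1"
  shift
  # Pass the remaining arguments through untouched, keeping the interface
  # fully compatible with docker-compose.
  exec docker-compose -f "$(compose_file "$env_name")" "$@"
fi
```

With this, `./dcom.sh prod up -d` expands to `docker-compose -f docker-compose.prod.yml up -d`.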
Q: I am running this in the production environment. As mentioned in the use cases, I want to run the backend service as 2 different instances in production. How can I do that?
A: It's very simple. Just run the command below, and traefik will intelligently act as a load balancer for those instances.
./dcom.sh prod up -d --scale backend=2
Then check all running services with this:
./dcom.sh prod ps
Ref: Docker Compose CLI reference
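For this to work, traefik's Docker provider has to discover each scaled container, which is usually driven by labels on the backend service. A sketch of what those labels could look like (the router rule's hostname and the internal port 80 are assumptions for illustration):

```yaml
# Sketch of the traefik labels a scalable backend service might carry
services:
  backend:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.backend.rule=Host(`example.com`)"
      - "traefik.http.services.backend.loadbalancer.server.port=80"
```

Every container started by `--scale backend=2` carries the same labels, so traefik registers each one as a server of the same load-balanced service.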
If you like this project, please support my work 😉.
From Vietnam 🇻🇳 with love.
Thanks @shinsenter! I greatly appreciate your compliments and honesty. I'm also thankful that you can share constructive feedback respectfully (a great way to learn!).
Regarding your comment:
Can you explain more? We've been running the "spin" setup for over a year in production and I have yet to run into any limitations. This includes running apps with a stack of:
(all in one app and across 10 developer machines, a CI environment, a staging environment, and a production environment)
So far our experience has been great, but I want to make sure I understand your perspective too.
The biggest reason we went with Swarm is that we're able to run deployments with zero downtime. There's a lot of built-in health check functionality that we benefit from as well.