@mcansky
Last active August 29, 2015 14:17
My basic need for Ruby app hosting with multiple copies of Docker containers

Docker - Hosting

For a long time, deploying web applications has involved one of the following:

  • package the app in a tar ball or an “evolved” tar ball (deb, rpm) and extract it on the hosts
  • use your favorite configuration management system and remote command execution to deploy the latest tag of your branch

In both cases you can add some amount of automation with Salt, Chef, Puppet, or Ansible, for example.

Yet I am not happy with that process. The usual steps are as follows:

  1. push your code
  2. have your tests run
  3. package (optional)
  4. have all servers pull the latest version
  5. unpack
  6. restart the web server to point it to the new codebase

Using something like Capistrano instead of packages gives you instant rollback, but you are still straining the hosts during deploys.

And if your app fails you need to roll back (link the current “tag” back to the old snapshot and restart the app server).

Docker in

With Docker, or any container, things can be a little smoother:

  1. push your code
  2. have your tests run
  3. build a container
  4. spin up a couple of copies of that container
  5. send some connections to those containers
  6. finish the deploy by running more copies of the new containers and removing the old ones

You don’t strain the current copies of the app, and you get to test the waters before rolling out at full steam. Well, that’s my idea.
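The rollout in steps 4–6 is mostly bookkeeping on the list of running containers: add a couple of copies of the new tag, send them traffic, then add the rest and retire the old tag. A toy in-memory sketch of that bookkeeping, with no real Docker calls and invented service/tag names (it assumes at least two copies are wanted):

```ruby
# Toy model of a rolling deploy: "running" is just a list of
# { service:, tag: } hashes standing in for live containers.
def roll_out(running, service, new_tag, copies:)
  # 4. spin up a couple of copies of the new tag as canaries
  running += Array.new(2) { { service: service, tag: new_tag } }
  # 5. (a real setup would now send some connections to the canaries)

  # 6. run the remaining copies of the new tag...
  running += Array.new(copies - 2) { { service: service, tag: new_tag } }
  # ...and remove the old ones
  running.reject { |c| c[:service] == service && c[:tag] != new_tag }
end

pool = Array.new(3) { { service: "foo", tag: "v1" } }
pool = roll_out(pool, "foo", "v2", copies: 3)
puts pool.map { |c| c[:tag] }.tally # three copies, all on the new tag
```

The point of the model: at no moment do the old containers get restarted or strained; they simply stop receiving traffic and are removed.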

Containers make that concept easy to grasp since you handle single units of your services like bricks: they are independent and can be plugged in and out.

If the deploy fails, you keep the existing containers running and trash the new ones; only the load balancer needs to be aware of that.

Or so I thought.

The missing link

The big issues in all this are how to handle the pool of hosts you deploy to, and how to roll the containers in and out. I could not find a “clean” way to do what I envisioned. I guess that explains why so many of these tools already exist, and why all of them look incomplete while doing so much at the same time.

Pool of hosts

The way I see it, here is how things should work:

  1. create pool
  2. add X hosts
  3. deploy Y copies of container A across hosts
  4. shut down Z copies of container B across hosts
  5. remove V hosts

Those are the basics:

  • create the pool
  • add / list / remove hosts
  • add / list / remove containers/services
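Those basic operations could be sketched as a tiny pool object. The API below is invented for illustration (it is not ECS's or Swarm's), and the placement is a naive round-robin:

```ruby
# Minimal model of a "pool": hosts plus the containers placed on them.
class Pool
  attr_reader :hosts

  def initialize
    @hosts = []                                 # host names
    @placements = Hash.new { |h, k| h[k] = [] } # host => container names
  end

  def add_host(name)
    @hosts << name
  end

  def remove_host(name)
    @hosts.delete(name)
    @placements.delete(name) # a real pool would reschedule these containers
  end

  def deploy(container, copies:)
    copies.times do |i|
      host = @hosts[i % @hosts.size] # naive round-robin placement
      @placements[host] << container
    end
  end

  def containers
    @placements.values.flatten
  end
end

pool = Pool.new
pool.add_host("host-a")
pool.add_host("host-b")
pool.deploy("service-A", copies: 3)
puts pool.containers.size # 3 copies spread across 2 hosts
```

Even this toy shows where the hard parts hide: what happens to placements when a host is removed, and how `deploy` should pick a host.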

From my point of view that’s already a lot, and it is a good scope for one single service. I thought AWS ECS or Docker Swarm would do that; I am still not sure.

Anything above this (scaling of containers, deploys, …) should be handled elsewhere.

That means you should have a way to listen to events such as errors or response times and have them bubble up somewhere else. It is then down to you to decide what to do (add hosts, add copies of containers, remove some, …).

The only “clever” thing I see in that service is knowing where on the hosts the containers should be created. That could depend on the load of the hosts and their “limits”, or on some smarter scheme.

So you would have :

  • a “host”: a physical or virtual machine that runs Docker and is able to run containers
  • a “logical” group of hosts: the “pool”
  • a container: a Docker container, a service of any sort
  • a bookkeeper: to keep the tally of deploys and the service–container–port (and tag) mapping (think Consul, etcd)
  • a scheduler: to deploy containers in the pool in a not-too-dumb manner (not on the host that is already at load 15).
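The scheduler's “not too dumb” placement could start as simply as picking the least-loaded host that is still under its limit. A sketch with made-up load numbers and an invented `hosts` shape:

```ruby
# Pick a host for a new container: least-loaded host below its limit.
# `hosts` maps a host name to { load:, limit: }; numbers are illustrative.
def place(hosts)
  candidates = hosts.select { |_, h| h[:load] < h[:limit] }
  raise "pool is full" if candidates.empty?
  candidates.min_by { |_, h| h[:load] }.first
end

hosts = {
  "host-a" => { load: 15.0, limit: 8.0 }, # the host already at load 15: skipped
  "host-b" => { load: 2.5,  limit: 8.0 },
  "host-c" => { load: 0.7,  limit: 8.0 },
}
puts place(hosts) # host-c: lowest load under its limit
```

A real scheduler would weigh more than load (memory, ports, anti-affinity), but the shape stays the same: filter by limits, then rank.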

The other one

Then you obviously need a way to monitor the health of your pool and of the containers (by service and by tag), and to trigger actions such as:

  • scale (up, down) a service
  • scale (up, down) the pool
  • deploy a service tag in X copies
  • retire a service tag
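Having events bubble up and mapping them to those actions could start as a plain dispatch table. The event names, thresholds, and state shape below are all invented; a real setup would feed this from actual metrics:

```ruby
# Map monitoring events to pool actions; events here are illustrative only.
ACTIONS = {
  high_response_time: ->(state) { state[:copies] += 1 }, # scale the service up
  idle:               ->(state) { state[:copies] -= 1 }, # scale the service down
  hosts_saturated:    ->(state) { state[:hosts]  += 1 }, # scale the pool up
}

def handle(event, state)
  action = ACTIONS.fetch(event) { ->(_) {} } # ignore unknown events
  action.call(state)
  state
end

state = { copies: 3, hosts: 2 }
handle(:high_response_time, state)
handle(:hosts_saturated, state)
puts state # copies bumped to 4, hosts to 3
```

The important property is the split: the pool service only executes actions; deciding *when* to trigger them lives here, outside it.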

So?

  • what is the current way to do something like this?
  • is it possible?
  • what's the closest existing way to do it?

In short:

"How do I deploy X copies of app container Foo on a pool of Z servers?"
