Docker Volumes Plugins

Docker Volumes

Examining open-source Docker volume plugins for AWS EBS support.

Key requirements in this case are simplicity; support for AWS EBS volumes, with KMS encryption and snapshots; and use of instance roles for credentials.

Aka: secure, encrypted, and with backups.

Ideally with some notion of availability-zone awareness, and the ability to account for zone when a container moves.
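
As a concrete baseline for those requirements, here's a minimal sketch (not from any of the plugins below) of the plumbing underneath them with boto3: credentials come from the instance role automatically, and the volume is KMS-encrypted. The region, key alias, and size are placeholders.

```python
import boto3

# boto3 picks up credentials from the instance role via the
# metadata service -- no access keys in code or config.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a KMS-encrypted EBS volume. KmsKeyId is a placeholder
# alias; omit it to fall back to the account's default EBS key.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=20,                      # GiB
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="alias/my-ebs-key",  # hypothetical key alias
)
print(volume["VolumeId"])
```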

Flocker

Seems to be in flux: ZFS out, legacy storage vendors in, core developers moved on.

Doesn't support installation via a container; the implementation is overly complex (there's a boatload of code here) with a bunch of extraneous agents; and it doesn't support instance roles (ClusterHQ/flocker#2432).

Convoy

Seems to be the closest to what I'm looking for: simple and featureful (various volume config options, snapshots, etc.). It lacks KMS support, but that's relatively easy to add (done in rancher/convoy#154). Of more concern are the extraneous API calls due to abstraction issues (rancher/convoy#155) and some open questions around upstream maintenance and usage independent of Rancher.
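
For reference, consuming a volume driver like Convoy from the Docker SDK for Python looks roughly like this. The driver option name (`size`) is an assumption from memory of Convoy's docs; verify against the version and backend you run.

```python
import docker

client = docker.from_env()

# Create a named volume through the Convoy plugin. The "size"
# option is an assumption -- check Convoy's docs for the exact
# driver_opts your backend supports.
vol = client.volumes.create(
    name="pgdata",
    driver="convoy",
    driver_opts={"size": "10G"},
)

# Mount it into a container like any other named volume.
client.containers.run(
    "postgres:9.6",
    detach=True,
    volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```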

Rexray

For some reason it advertises EBS support but doesn't currently implement it! From what I gather from stray issue comments, rexray was rewritten between 0.3 and 0.4, 0.4 dropped EBS, and nobody bothered to update the docs.

rexray/rexray#539

JFDI

  • Did I miss any? Please comment if there's something out there I should look at.

  • Else I'm kind of left with the sinking feeling that I need to write my own (a sketch of the plugin protocol follows below), plus deal with the scheduler, moby/swarmkit#1402 (sadly Nomad doesn't really support Docker plugins without a bunch of raw_exec and zone-local services).

Or just move on with life and use Kubernetes (which brings its own complexity, albeit a great ecosystem).
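
If it does come to writing one, the legacy Docker volume-plugin protocol is just JSON over a unix socket under /run/docker/plugins. A minimal sketch, with a hypothetical driver name and the EBS attach/mount logic stubbed out as comments:

```python
import json
import os
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

# Docker discovers legacy plugins by socket name; "myebs" is
# a hypothetical driver name.
SOCKET = "/run/docker/plugins/myebs.sock"

class UnixHTTPServer(HTTPServer):
    address_family = socket.AF_UNIX

    def server_bind(self):
        # HTTPServer.server_bind assumes a (host, port) TCP address;
        # for a unix socket just bind the path.
        self.socket.bind(self.server_address)

class VolumeDriver(BaseHTTPRequestHandler):
    def log_message(self, fmt, *args):
        pass  # unix sockets have no peer address for the default logger

    def do_POST(self):
        length = int(self.headers.get("Content-Length") or 0)
        req = json.loads(self.rfile.read(length) or b"{}")

        if self.path == "/Plugin.Activate":
            resp = {"Implements": ["VolumeDriver"]}
        elif self.path == "/VolumeDriver.Create":
            # create or look up an EBS volume for req["Name"] here
            resp = {"Err": ""}
        elif self.path == "/VolumeDriver.Mount":
            # attach the volume to this instance, mkfs if fresh,
            # mount it, then report the mountpoint
            resp = {"Mountpoint": "/mnt/" + req["Name"], "Err": ""}
        elif self.path == "/VolumeDriver.Unmount":
            # unmount and detach so the volume can follow the container
            resp = {"Err": ""}
        else:
            # a real driver also needs Remove/Path/Get/List/Capabilities
            resp = {"Err": "unsupported: " + self.path}

        body = json.dumps(resp).encode()
        self.send_response(200)
        self.send_header("Content-Type",
                         "application/vnd.docker.plugins.v1+json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if os.path.exists(SOCKET):
    os.unlink(SOCKET)
UnixHTTPServer(SOCKET, VolumeDriver).serve_forever()
```

The hard part isn't this protocol shim; it's the zone-awareness and attach/detach races that the scheduler issue above is about.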

@jmahowald

I was having this discussion with someone recently. I don't have extremely high availability or consistency requirements, but I do expect persistent storage to persist, and I need MTTR that is low in human terms (e.g. one minute). I told him that we got by as an industry for a long time by saying that persistent services were tied to a "basic" file system. I'm curious whether the reasoning below resonates with this relaxation of requirements.

What I ended up deciding was that the complexity of the above options means a lot of possible failure points, and that I could get by with shifting the logic to build and deploy time. With Terraform, I decided to have a separate group of instances for storage, named "storage-", and to attach labels to the cluster (in my case Rancher, but I think this would work with Swarm as well). Persistent services would, by convention, need to be tied to a particular "storage" label. All the other nodes in the cluster are part of an auto scaling group.
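
For the Swarm case alluded to above, the same pinning can be sketched with node labels and placement constraints via the Docker SDK for Python; the node ID, label values, and service definition here are hypothetical:

```python
import docker

client = docker.from_env()

# Tag a dedicated storage node. node.update() wants the full
# spec, so start from the current one and add the label.
node = client.nodes.get("storage-node-id")  # hypothetical node ID
spec = node.attrs["Spec"]
spec["Labels"] = {"storage": "pg1"}
node.update(spec)

# Pin the persistent service to that node by convention.
client.services.create(
    "postgres:9.6",
    name="db",
    constraints=["node.labels.storage == pg1"],
)
```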

Then I was going to try taking another stab at volume snapshots with custodian for persistence, and add in some [basic disk alarms](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/mon-scripts.html).
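
A rough sketch of both halves with boto3 (DiskSpaceUtilization under System/Linux is the metric the linked CloudWatch scripts publish; the volume ID, dimensions, and SNS topic are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")

# Snapshot the persistent volume (custodian can schedule this
# declaratively; this is the underlying API call).
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder
    Description="nightly backup of pg1 data",
)

# Alarm on the disk-usage metric the linked monitoring scripts
# publish. Dimensions must match what the scripts send.
cw.put_metric_alarm(
    AlarmName="pg1-disk-usage",
    Namespace="System/Linux",
    MetricName="DiskSpaceUtilization",
    Dimensions=[
        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"},  # placeholder
        {"Name": "MountPath", "Value": "/data"},
        {"Name": "Filesystem", "Value": "/dev/xvdf"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops"],  # placeholder
)
```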

I'm curious as to what your thoughts are, thanks.
