@levinotik
Created November 26, 2014 21:30
I don't really have strong opinions (at the moment) regarding which config management system is best. For now, I'm managing a relatively simple topology/infrastructure, or whatever the hell you want to call it.

We're running everything on AWS. We have multiple services, each of which is just a jar file run as a service on an EC2 instance. We have prod and stage environments, in which the same services run but backed by different dependencies, i.e. stage services point to stage RDS and Redis instances, etc. So, for the most part, deploying changes to production means simply getting the new artifact (jar) onto a box somewhere. This is simple, I know. I'm just trying to figure out a solution that makes sense.
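For context, each box looks something like this: one jar, supervised as a system service. A minimal sketch as an Upstart job (the service name, paths, and config file are all made up):

```
# /etc/init/myservice.conf -- hypothetical Upstart job for one jar-as-a-service
description "myservice"

start on runlevel [2345]
stop on runlevel [016]

# restart the JVM if it dies
respawn

exec java -jar /opt/myservice/myservice.jar --config /etc/myservice/prod.conf
```

"Deploying" is then just swapping the jar at /opt/myservice/myservice.jar and restarting the job.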

Here are the approaches I've tried so far:

packer + terraform - bake a new AMI, supply the id to terraform and replace existing instances with new ones.
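The loop looks roughly like this (template name and AMI id are placeholders; in practice the AMI id comes out of packer's build output):

```shell
# bake a new AMI containing the new jar
packer build app.json

# point terraform at the new AMI and let it replace the instances
terraform apply -var "ami_id=ami-xxxxxxxx"
```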

ansible - use provisioning callbacks to have newly launched instances "phone home" and get provisioned by ansible. Existing instances can be updated by running plays directly.
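The in-place update is the kind of play below; a sketch only, with the group name, bucket, and service name all invented:

```yaml
# sketch: update a running box in place with the latest jar
- hosts: app_servers
  tasks:
    - name: fetch the latest jar from S3
      command: aws s3 cp s3://my-bucket/myservice.jar /opt/myservice/myservice.jar

    - name: restart the service to pick up the new jar
      service: name=myservice state=restarted
```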

I know my needs at this point are relatively straightforward: just a bunch of instances running a single service and that's it. Provisioning really is quite simple and even bash scripts are manageable for most of this.
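As an example of what "bash is manageable" means here, most of the environment-specific logic is just selecting the right backing endpoints per environment; a sketch with made-up hostnames:

```shell
#!/usr/bin/env bash
# sketch: per-environment config selection, the kind of thing plain bash handles fine
# (hostnames are made up)
env_config() {
  case "$1" in
    prod)  echo "rds=prod-db.example.com redis=prod-redis.example.com" ;;
    stage) echo "rds=stage-db.example.com redis=stage-redis.example.com" ;;
    *)     echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

env_config stage
```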

Both approaches seem to work, but neither is without its cons. The packer + terraform combo requires baking a new image every time, even when the only thing that's changed is that a new jar is available. This is pretty slow (compared to updating in place) when we need to deploy a quick fix to prod. Also, terraform works by keeping track of state after every run, which causes some hiccups with CI. It just doesn't feel like terraform is meant for constantly changing machines; it's meant more as an abstraction on top of CloudFormation and should be run relatively infrequently.

To be fair, I haven't used ansible a ton, so take this with a grain of salt, but it feels to me like ansible has a solution for everything, yet everything you can do feels very ad hoc, like there's no similarity between one thing and another. There's an AWS module and a module for this and a module for that, and they're all highly specialized. Reading the docs is confusing too, because ansible wants to help me provision machines, but it also says it can help me build my infrastructure, etc. Just look at http://docs.ansible.com/guide_aws.html: there are approaches 1, 2, 3, and 4. ansible is trying to make everyone happy, but that flexibility is frustrating to me because it adds more confusion to the docs.

Putting all this aside, the real question is: what is the simplest, most reasonable way to provision simple instances and then update my cloud with new stuff? I want something reliable and simple without gluing together a million things. Puppet looks compelling in many ways, but it seems to have a steep learning curve (is it worth it?).
