
@daniel-barlow
Created January 4, 2016 12:16

What is this about?

Convert your Ruby app to run on JRuby (often fairly painless), then deploy it by generating a JAR file (should be reasonably painless) and copying it to the servers you're running on (should be bordering on trivial).

If you've never done Java programming, a JAR is a "Java ARchive" file, which is basically a single ZIP file containing all the stuff your app needs to run. So you can install Java on a new machine, copy the JAR file onto it, and you're ready to go.

What's wrong with how we do it now?

The usual way of deploying a Ruby app is to write (or copypasta) some Capistrano recipes to have the production machines check it out from a git repo then build it in exactly the same way you do on your Mac. This has downsides:

  • it means you need to install a full development environment on the production instances (e.g. a javascript interpreter for asset compilation).

  • it takes a while and needs a lot of CPU/disk/RAM, impacting any customer who's browsing the site while the deployment is ongoing (Chopin servers are sized more for their ability to withstand deploying Chopin than for their ability to serve customer & backoffice requests).

  • it eliminates (pretty much) the possibility of using automatic scaling to respond to demand spikes or replace failed instances, because someone or something has to run Capistrano on the newly spawned instances before they are actually useful.

  • rollback is complicated. For Chopin it is sufficiently complicated that as far as I know nobody has ever dared try it.

What does an artefact-based deploy look like?

Well, it might go something like this:

# find out where we're deploying to.  This assumes we're using cumulus
$ DEPLOY_HOSTS=`make -s -C cloudformation  STAGE=staging print-outputs | jq -r '.Outputs.InstanceIp'`

# build a JAR file that contains everything we need
$ bundle exec rake jar

# copy it to the instances
$ for h in $DEPLOY_HOSTS; do scp example.jar sbopr@$h: ;done

# run it (handwaving slightly here)
$ for h in $DEPLOY_HOSTS; do ssh sbopr@$h 'kill `cat example.pid` && java -jar example.jar' ; done

I'm handwaving because this doesn't account for zero-downtime deploys. I'll get to that later. Obviously you could (and should) add a rake task to do the deploy.
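That rake task could be a thin wrapper around the same shell commands. A sketch (the jar name, the sbopr user, and the cumulus/make host discovery are all the assumptions from the commands above — adapt to taste):

```ruby
require "rake"
include Rake::DSL  # only needed outside a real Rakefile

# hypothetical names: example.jar and the sbopr deploy user come from
# the shell commands above
JAR = "example.jar"

task :jar do
  sh "bundle exec warble jar"   # package the app into a JAR
end

task deploy: :jar do
  # same cumulus host discovery as the shell version above
  hosts = `make -s -C cloudformation STAGE=staging print-outputs | jq -r '.Outputs.InstanceIp'`.split
  hosts.each do |h|
    sh "scp #{JAR} sbopr@#{h}:"
    sh "ssh sbopr@#{h} 'kill `cat example.pid`; java -jar #{JAR}'"
  end
end
```

Then `bundle exec rake deploy` does the whole build-copy-restart dance in one step.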

How does that work with autoscaling?

It doesn't directly, but it opens the way:

  • You could put the JAR file somewhere that new instances can download it (e.g. an S3 bucket) and then script the download at instance creation time

  • You could use packer or something like it to make a new AMI with the JAR file baked in

OK, how do you actually implement this?

I did this for "tocsin", which is the testbed app we're using for experimenting with various Project Bell/Twilio stuff. You can see my pull request at

https://github.com/simplybusiness/tocsin/pull/32

JRuby

I found it fairly painless to switch my app to JRuby. You can probably get it using rvm/rbenv/chruby, but make sure you have 1.7.23 or better, because there's a problem with SSL certificate handling in older versions.

Set the environment variable JRUBY_OPTS=--2.0, otherwise JRuby will pretend to be Ruby 1.9 rather than 2.x.

Your project may require gems that are incompatible with JRuby: the only incompatible gem I found in my app was pry-plus, which pulls in a debugging gem that depends on MRI internals. I switched to regular pry.
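Bundler's `platforms` blocks make this kind of switch easy to express — a Gemfile sketch (the gem names here are illustrative, not what tocsin uses):

```ruby
# Gemfile fragment — gem names are illustrative
source "https://rubygems.org"

gem "sinatra"

platforms :mri do
  gem "pry-byebug"     # depends on MRI internals, won't load on JRuby
end

platforms :jruby do
  gem "jruby-openssl"  # JRuby's OpenSSL implementation
end
```

Bundler then silently skips the MRI-only gems when you `bundle install` under JRuby.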

Making a JAR file

There's already a tool which does most of the heavy lifting: from the description, "Warbler chirpily constructs .war files of your Ruby applications". https://github.com/jruby/warbler

It attempts to do a bit more automagical configuration than I'm really happy with, though, so here's how to make it a bit less chirpy:

  1. warbler will build a WAR file instead of a JAR if it finds a config.ru file. That would make sense if we were deploying to some Enterprise Java monolith, but we're not, so we don't want it. If you have this file, get rid of it or move it into a subdirectory.

https://github.com/simplybusiness/tocsin/pull/32/files#diff-0a400c8217d53ff4f978163d7c61868cL1

  1. to have better control of what does or doesn't get included in the JAR, add a gemspec to your project (and remove gems from your Gemfile). This is probably a good idea anyway.

https://github.com/simplybusiness/tocsin/pull/32/files#diff-5ab32043520cd356bd010364b4b43879R1

https://github.com/simplybusiness/tocsin/pull/32/files#diff-8b7db4d5cc4b8f6dc8feb7030baa2478R1
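A minimal gemspec sketch — all the names, the version, and the file globs here are assumptions, not what tocsin actually uses; the point is the explicit `files` list, which is exactly the control over JAR contents we want:

```ruby
# tocsin.gemspec — hypothetical sketch; names, version and globs are assumptions
spec = Gem::Specification.new do |s|
  s.name    = "tocsin"
  s.version = "0.1.0"
  s.summary = "Project Bell testbed app"
  s.authors = ["Simply Business"]
  # only the files matched here end up in the JAR
  s.files   = Dir["lib/**/*.rb", "views/**/*", "public/**/*", "bin/*"]
  s.add_runtime_dependency "sinatra"
  s.add_runtime_dependency "puma"
end
```

Your Gemfile then shrinks to little more than a `source` line and a bare `gemspec` directive.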

  1. warbler will look for a file in bin/ to run when you execute the JAR. You probably want to create one that starts your web application.

https://github.com/simplybusiness/tocsin/pull/32/files#diff-a1e03f5c04a0c9a9d5f4aaf16d9077a0R1
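For example (the file name, the `Tocsin::App` class, and the default port are assumptions, not tocsin's actual code):

```ruby
#!/usr/bin/env ruby
# bin/tocsin — warbler runs this when you `java -jar tocsin.jar`
# (hypothetical names: the "tocsin" require and Tocsin::App are assumptions)
require "tocsin"

# start the embedded web server; a PORT variable lets a wrapper pick the port
Tocsin::App.run!(port: Integer(ENV.fetch("PORT", "4567")))
```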

  1. add a rake task that builds everything that needs building and then runs warbler.

https://github.com/simplybusiness/tocsin/pull/32/files#diff-52c976fc38ed2b4e3b1192f8a8e24cffR1

My app's build is driven by npm; yours may differ. This is where you do any asset precompilation and so on.
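A sketch of such a Rakefile fragment (the npm asset step is mine and hypothetical; warbler also ships a `Warbler::Task` you can use instead of shelling out to the `warble` CLI):

```ruby
require "rake"
include Rake::DSL  # only needed outside a real Rakefile

# hypothetical asset step — my build is driven by npm, yours may differ
task :assets do
  sh "npm run build"
end

# `rake jar` builds assets first, then has warbler package everything up
task jar: :assets do
  sh "bundle exec warble jar"
end
```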

  1. (this needs a bit more of a deep dive) The files in your project (assets, templates etc.) are no longer really files in the filesystem but resources inside the JAR. Had we accepted warbler's decision to produce a WAR file, it would have added magic code to unpack everything into a temporary directory; because we didn't, it won't. So we need some code to read from the JAR.

https://github.com/simplybusiness/tocsin/pull/32/files#diff-e472105952c4d3db0f0a7bfcf709373eR1

then we update our handlers to use it, e.g.

https://github.com/simplybusiness/tocsin/pull/32/files#diff-2f6bc2b006cc8425e405aca02c4dc3f8L110

For assets in public/ you could even do this with a Rack middleware:

https://github.com/simplybusiness/tocsin/pull/32/files#diff-cc95738088603531796e0d0f246a5d77R44
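A sketch of such a helper (the module name and layout are mine, not the PR's). It leans on JRuby's `uri:classloader:` pseudo-paths, which let plain `File.read` reach resources inside the JAR, so the same code works on a filesystem checkout too:

```ruby
# Hypothetical helper: inside a warbler JAR our files are classpath
# resources; JRuby exposes them to File.read via "uri:classloader:/" paths.
module JarResources
  def self.root
    if defined?(JRUBY_VERSION) && __FILE__.start_with?("uri:classloader:")
      "uri:classloader:/"   # running from inside the JAR
    else
      Dir.pwd               # plain filesystem checkout
    end
  end

  def self.read(relative_path)
    File.read(File.join(root, relative_path))
  end
end

# in a Sinatra handler you might then render an inline template:
#   erb JarResources.read("views/index.erb")
```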

That's all! Now you can run e.g.

$ bundle exec rake jar    # build it
$ java -jar tocsin.jar    # run it

WIP/Sneak Preview: extending this to Zero-Downtime deployment

This is notably not zero-downtime: if there is an existing app process running on a server we are deploying to, we have to kill it before we can start the new one, and it takes a non-zero time for the JVM to start.

Having said that, in-place zero-downtime deploys are often confusing, complicated, and error-prone (also #770, #776) and IMO really not worth it if you have a load-balancer which you can leverage for rolling restarts.

So, how could we do that?

  • Run the app on a random port instead of competing for 4567, meaning we can run old and new versions at the same time

  • When the new app is ready to serve requests, have it register itself with the load balancer

  • Have some cleanup process which looks for backends that are running old versions and kills them when they have finished all their requests

Supposing the existence of a load balancer which allows this kind of thing (unfortunately, right now, that's a big supposition), the app changes to do this are minimal.

  • First, we tell puma to run on port 0. This is a special value which Unix treats as "please pick an unused port for me".

  • Then we pass a block to puma.run, which puma executes just before it begins serving requests. In this block we get the actual port number our socket was assigned and can use it to register ourselves with a load balancer. How to do this is an open question. In this example I just write the endpoint into a file which I assume is being watched by some other process on the local machine, but you could use the Stingray API, or make an AWS API call to do ELB things, or write a key to an etcd or consul cluster which a load balancer is listening to, or do something incredible[*] with custom VCL code in Varnish.

https://gist.github.com/telent/ef0bdee073d8f4e4dab5

[*] Seriously, VCL is incredible in oh so many ways, read into that what you wish
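The port-0 mechanics themselves are plain stdlib behaviour — a sketch (the `endpoint` file name is the assumption from the example above; everything else is standard `socket` library):

```ruby
require "socket"

# bind to port 0: the OS picks any free port, and we read the real one
# back off the socket — this is what puma does when told to listen on 0
server = TCPServer.new("127.0.0.1", 0)
port   = server.addr[1]

# the "write the endpoint into a file" registration from the example above
File.write("endpoint", "127.0.0.1:#{port}")

server.close
```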
