
Installing Nginx+Unicorn on Red Hat OpenShift

This set of scripts and config files will help you set up the awesome combination of Unicorn and Nginx as a server environment for Ruby web applications on Red Hat's OpenShift platform, while I finish work on a proper cartridge.

Notes

  • Before you get started, you should read my post on setting up a Ruby 1.9 environment on OpenShift here: http://goo.gl/ufI5G It will (hopefully) get you started on building a Rails app on OpenShift the unofficial way (for now!).

Installation

Before you do anything, ssh into your application shell and set a very important environment variable:

$ declare -x RAILS_ROOT=~/appname/repo/appname

Substitute your app's name for "appname", of course.

You'll also want to make sure Unicorn is included in your Gemfile and installed on the OpenShift box. I advise making this edit locally and pushing your changes up to the server so everything lines up nicely; after all, you should be testing/developing in the same environment as the server to prevent crazy bugs. If you don't have a post-deploy script set up to run bundle install, set that up at the same time.
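For reference, the Gemfile change is a single line (adding a version pin is optional and up to you):

```ruby
# Gemfile - Unicorn only needs to run on the server, but keeping it in the
# default group means `bundle exec unicorn_rails` works everywhere.
gem 'unicorn'
```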

Now you can get the bits which will do a lot of the work for you.

$ cd $OPENSHIFT_DATA_DIR
$ git clone git://gist.github.com/2832578.git nginx-unicorn
$ chmod +x nginx-unicorn/install-nginx-unicorn.sh
$ nginx-unicorn/install-nginx-unicorn.sh

You should now be able to sit back and watch it download/build/install everything into the right places. Once it's done, edit your action hooks from within your local repo and add the following:

.openshift/action_hooks/start

# make sure we can use rvm and bundle - you should have set up a gemset for your app already!    
source $OPENSHIFT_DATA_DIR/.rvm/scripts/rvm
rvm use 1.9.3-p125@appname

# start the nginx http server
$OPENSHIFT_DATA_DIR/nginx/sbin/nginx

# start the unicorn backend server
bundle exec unicorn_rails -c ~/appname/repo/appname/config/unicorn.rb -

.openshift/action_hooks/stop

# kill unicorn and nginx through their pidfiles
# (the nginx path matches the pid directive in nginx.conf)
kill `cat $OPENSHIFT_REPO_DIR/appname/tmp/pids/unicorn.pid`
kill `cat $OPENSHIFT_REPO_DIR/tmp/nginx.pid`

Now push these up to your app server. You might want to try starting the app server through SSH the first time round to make sure nothing funky happens:

$ app_ctl stop && app_ctl start

All going well, your application should now be served by the magical combo of Nginx and Unicorn!

Caveats

I have noticed that the first few requests can be painfully slow while the servers spin up properly (part of this could also be my horrible connection). This is to be expected really: because of the OpenShift architecture, traffic from the client has to pass like so:

<----------REQUEST---------><--------RESPONSE--------->
Browser->Apache->Nginx->Unicorn->Nginx->Apache->Browser

Passing through three servers as well as all the routing is going to take a while the first time round! The upside is that this combination caches really well, and requests only get faster from there. Some tweaking is probably needed (tip: try the worker_processes var in the Unicorn config file for starters), but for now this works until I can get round to finishing my cartridge.
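To illustrate the worker_processes tip: in config/unicorn.rb you can raise the worker count from the 2 this gist ships with. The value 4 below is only a hypothetical starting point, not a tested recommendation for OpenShift gears.

```ruby
# In config/unicorn.rb: more workers handle more concurrent requests,
# but each worker is a full copy of the app, so memory (and your gear
# quota) is the practical limit. 4 here is purely illustrative.
worker_processes 4
```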

Contact etc.

© 2012 Mark Anthony Gibbins xiy3x0@gmail.com

Twitter: @xiy

Blog: http://pyramidthoughts.wordpress.com

Licensed under the MIT license.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

env2string.rb

#!/usr/bin/env ruby
# Rewrites any environment variable enclosed like so - $PATH$ - as its string value.
# usage: ruby -pi.bak env2string.rb nginx.conf
gsub(/\$\w+\$/) do |e|
  ENV[e.delete('$')]
end
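As a quick illustration of what env2string.rb does (the OPENSHIFT_LOG_DIR value below is a stand-in, not a real gear path): tokens wrapped in a pair of dollar signs are swapped for the environment variable's value, while nginx's own single-dollar variables like $uri are left untouched.

```ruby
# Stand-in value for the demo; on a real OpenShift gear this is already set.
ENV['OPENSHIFT_LOG_DIR'] = '/var/lib/openshift/logs'

line = 'error_log $OPENSHIFT_LOG_DIR$/nginx.error.log;'

# The same substitution env2string.rb applies to each line of nginx.conf.
expanded = line.gsub(/\$\w+\$/) { |token| ENV[token.delete('$')] }

puts expanded  # error_log /var/lib/openshift/logs/nginx.error.log;
```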
install-nginx-unicorn.sh

#!/bin/bash
# make it nice and tidy ;)
cd $OPENSHIFT_DATA_DIR/nginx-unicorn
mkdir $OPENSHIFT_DATA_DIR/nginx
mkdir build
cd build
# download, build and install nginx into our data directory.
# pcre is needed to build nginx, so we also download that.
wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.30.tar.gz
wget http://nginx.org/download/nginx-1.2.0.tar.gz
tar -xvf pcre-8.30.tar.gz
tar -xvf nginx-1.2.0.tar.gz
cd nginx-1.2.0
./configure --prefix=$OPENSHIFT_DATA_DIR/nginx --with-pcre=$OPENSHIFT_DATA_DIR/nginx-unicorn/build/pcre-8.30
make && make install && make clean
# Copy the config files to their install locations
cd $OPENSHIFT_DATA_DIR/nginx-unicorn
cp env2string.rb $OPENSHIFT_DATA_DIR/nginx/conf
cp nginx.conf $OPENSHIFT_DATA_DIR/nginx/conf
cp unicorn.rb $RAILS_ROOT/config
# This little bit of magic substitutes all our environment vars for their string values.
cd $OPENSHIFT_DATA_DIR/nginx/conf
ruby -pi.bak env2string.rb nginx.conf
echo " "
echo " "
echo "========================================================"
echo "=== NGINX+UNICORN SUCCESSFULLY INSTALLED! ==="
echo "========================================================"
echo "=== NOTE: You might want to delete the build dir ==="
echo "=== at $OPENSHIFT_DATA_DIR/nginx-unicorn/build ==="
echo "=== to save on your quota! ==="
echo "========================================================"
echo " "
echo " "
nginx.conf

# This example contains the bare minimum to get nginx going with
# Unicorn or Rainbows! servers. Generally these configuration settings
# are applicable to other HTTP application servers (and not just Ruby
# ones), so if you have one working well for proxying another app
# server, feel free to continue using it.
#
# The only setting we feel strongly about is the fail_timeout=0
# directive in the "upstream" block. max_fails=0 also has the same
# effect as fail_timeout=0 for current versions of nginx and may be
# used in its place.
#
# Users are strongly encouraged to refer to nginx documentation for more
# details and search for other example configs.
# you generally only need one nginx worker unless you're serving
# large amounts of static files which require blocking disk reads
worker_processes 1;
# # drop privileges, root is needed on most systems for binding to port 80
# # (or anything < 1024). Capability-based security may be available for
# # your system and worth checking out so you won't need to be root to
# # start nginx to bind on 80
# user nobody nogroup; # for systems with a "nogroup"
# user nobody nobody; # for systems with "nobody" as a group instead
# Feel free to change all paths to suit your needs here, of course
pid $OPENSHIFT_REPO_DIR$/tmp/nginx.pid;
error_log $OPENSHIFT_LOG_DIR$/nginx.error.log;
events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # "on" if nginx worker_processes > 1
    # use epoll; # enable for Linux 2.6+
    # use kqueue; # enable for FreeBSD, OSX
}
http {
    # nginx will find this file in the config directory set at nginx build time
    include mime.types;

    # fallback in case we can't determine a type
    default_type application/octet-stream;

    # click tracking!
    access_log $OPENSHIFT_LOG_DIR$/nginx.access.log combined;

    # you generally want to serve static files with nginx since neither
    # Unicorn nor Rainbows! is optimized for it at the moment
    sendfile on;

    tcp_nopush on; # off may be better for *some* Comet/long-poll stuff
    tcp_nodelay off; # on may be better for some Comet/long-poll stuff

    # we haven't checked to see if Rack::Deflater on the app server is
    # faster or not than doing compression via nginx. It's easier
    # to configure it all in one place here for static files and also
    # to disable gzip for clients who don't get gzip/deflate right.
    # There are other gzip settings that may be needed to deal with
    # bad clients out there, see http://wiki.nginx.org/NginxHttpGzipModule
    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/html text/xml text/css
               text/comma-separated-values
               text/javascript application/x-javascript
               application/atom+xml;

    # this can be any application server, not just Unicorn/Rainbows!
    upstream unicorn {
        # fail_timeout=0 means we always retry an upstream even if it failed
        # to return a good HTTP response (in case the Unicorn master nukes a
        # single worker for timing out).

        # for UNIX domain socket setups; this path must match the "listen"
        # line in config/unicorn.rb:
        server unix:$RAILS_ROOT$/tmp/sockets/unicorn.sock fail_timeout=0;

        # for TCP setups, point these to your backend servers
        # server 192.168.0.7:8080 fail_timeout=0;
        # server 192.168.0.8:8080 fail_timeout=0;
        # server 192.168.0.9:8080 fail_timeout=0;
    }
    server {
        listen $OPENSHIFT_INTERNAL_IP$:$OPENSHIFT_INTERNAL_PORT$ default deferred; # for Linux

        # If you have IPv6, you'll likely want to have two separate listeners.
        # One on IPv4 only (the default), and another on IPv6 only instead
        # of a single dual-stack listener. A dual-stack listener will make
        # for ugly IPv4 addresses in $remote_addr (e.g ":ffff:10.0.0.1"
        # instead of just "10.0.0.1") and potentially trigger bugs in
        # some software.
        # listen [::]:80 ipv6only=on; # deferred or accept_filter recommended

        client_max_body_size 4G;
        server_name _;

        # ~2 seconds is often enough for most folks to parse HTML/CSS and
        # retrieve needed images/icons/frames, connections are cheap in
        # nginx so increasing this is generally safe...
        keepalive_timeout 5;

        # path for static files
        root $RAILS_ROOT$/public;

        # Prefer to serve static files directly from nginx to avoid unnecessary
        # data copies from the application server.
        #
        # The try_files directive appeared in nginx 0.7.27 and has stabilized
        # over time. Older versions of nginx (e.g. 0.6.x) required
        # "if (!-f $request_filename)" which was less efficient:
        # http://bogomips.org/unicorn.git/tree/examples/nginx.conf?id=v3.3.1#n127
        try_files $uri/index.html $uri.html $uri @app;

        location @app {
            # an HTTP header important enough to have its own Wikipedia entry:
            # http://en.wikipedia.org/wiki/X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # enable this if you forward HTTPS traffic to unicorn,
            # this helps Rack set the proper URL scheme for doing redirects:
            # proxy_set_header X-Forwarded-Proto $scheme;

            # pass the Host: header from the client right along so redirects
            # can be set properly within the Rack application
            proxy_set_header Host $http_host;

            # we don't want nginx trying to do something clever with
            # redirects, we set the Host: header above already.
            proxy_redirect off;

            # set "proxy_buffering off" *only* for Rainbows! when doing
            # Comet/long-poll/streaming. It's also safe to set if you're
            # only serving fast clients with Unicorn + nginx, but not slow
            # clients. You normally want nginx to buffer responses to slow
            # clients, even with Rails 3.1 streaming, because otherwise a slow
            # client can become a bottleneck of Unicorn.
            #
            # The Rack application may also set "X-Accel-Buffering (yes|no)"
            # in the response headers to disable/enable buffering on a
            # per-response basis.
            # proxy_buffering off;

            proxy_pass http://unicorn;
        }

        # Rails error pages
        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root $RAILS_ROOT$/public;
        }
    }
}
unicorn.rb

# use at least one worker per core if you're on a dedicated server,
# more will usually help for _short_ waits on databases/caches.
worker_processes 2

# nuke workers after 30 seconds instead of 60 seconds (the default)
timeout 30
# Since Unicorn is never exposed to outside clients, it does not need to
# run on the standard HTTP port (80), there is no reason to start Unicorn
# as root unless it's from system init scripts.
# If running the master process as root and the workers as an unprivileged
# user, do this to switch euid/egid in the workers (also chowns logs):
# user "unprivileged_user", "unprivileged_group"
# Make sure Unicorn spawns in the correct path (RAILS_ROOT is the
# environment variable you exported earlier).
APP_PATH = ENV["RAILS_ROOT"]
working_directory APP_PATH
# Make sure we can tail the Unicorn logs through Foreman.
stderr_path File.join(ENV["OPENSHIFT_LOG_DIR"], "unicorn.error.log")
stdout_path File.join(ENV["OPENSHIFT_LOG_DIR"], "unicorn.out.log")
# Create a pidfile so we can kill the server through our action hooks.
pid File.join(APP_PATH, "tmp/pids/unicorn.pid")
# Listen using a UNIX domain socket.
listen File.join(APP_PATH, "tmp/sockets/unicorn.sock"), :backlog => 64
# Combine ruby2.0.0dev or REE with "preload_app true" for memory savings
# http://rubyenterpriseedition.com/faq.html#adapt_apps_for_cow
#
# This is *untested*, so it's disabled by default.
# preload_app true
# GC.respond_to?(:copy_on_write_friendly=) and
#   GC.copy_on_write_friendly = true
before_fork do |server, worker|
  # the following is highly recommended for Rails + "preload_app true"
  # as there's no need for the master process to hold a connection
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!

  # The following is only recommended for memory/DB-constrained
  # installations. It is not needed if your system can house
  # twice as many worker_processes as you have configured.
  #
  # This allows a new master process to incrementally
  # phase out the old master process with SIGTTOU to avoid a
  # thundering herd (especially in the "preload_app false" case)
  # when doing a transparent upgrade. The last worker spawned
  # will then kill off the old master process with a SIGQUIT.
  old_pid = "#{server.config[:pid]}.oldbin"
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end

  # Throttle the master from forking too quickly by sleeping. Due
  # to the implementation of standard Unix signal handlers, this
  # helps (but does not completely) prevent identical, repeated signals
  # from being lost when the receiving process is busy.
  # sleep 1
end
after_fork do |server, worker|
  # per-process listener ports for debugging/admin/migrations
  # addr = "127.0.0.1:#{9293 + worker.nr}"
  # server.listen(addr, :tries => -1, :delay => 5, :tcp_nopush => true)

  # the following is *required* for Rails + "preload_app true"
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection

  # if preload_app is true, then you may also want to check and
  # restart any other shared sockets/descriptors such as Memcached,
  # and Redis. TokyoCabinet file handles are safe to reuse
  # between any number of forked children (assuming your kernel
  # correctly implements pread()/pwrite() system calls)
end