Nginx and Puma for Ruby on Rails

When a lot of people are working on the same Rails application, Vagrant can help to set up the environment quickly and easily. Even though Vagrant is not recommended for production, it is very useful for testing deployment scripts. For production we can simply copy the deployment script and run it manually.

We can deploy a Ruby on Rails app using the quick way to deploy Ruby on Rails on Vagrant gist. Clone that project with:

git clone https://gist.github.com/8815e439905dbd326a41.git vagrant
vagrant init # this generates a Vagrantfile
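
With the Vagrantfile generated, the usual Vagrant workflow boots and provisions the local VirtualBox VM (a minimal sketch; the actual provisioning steps come from the cloned gist):

vagrant up         # boot the VirtualBox VM and run the provisioning scripts
vagrant provision  # re-run provisioning after changing the scripts
vagrant ssh        # log into the VM (as the vagrant user on VirtualBox)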

Deploy on Digital Ocean

If we can deploy to VirtualBox, then we know how to deploy to Digital Ocean as well; we just need their API key. Edit the Vagrantfile to add the :digital_ocean provider:

# Vagrantfile
  config.vm.hostname = "myapp.example.com"
  config.vm.provider :digital_ocean do |provider, override|
    override.ssh.private_key_path = '~/.ssh/id_rsa'
    override.vm.box = 'digital_ocean'
    override.vm.box_url = "https://github.com/smdahlen/vagrant-digitalocean/raw/master/box/digital_ocean.box"

    override.vm.provision :file, source: '~/.my_app_staging.env', destination: '/vagrant/.secrets.env'
    override.vm.provision :shell, path: 'vagrant/bootstrap_ruby_on_rails.sh', keep_color: true
    # we don't need these folders in production
    override.vm.synced_folder ".", "/vagrant", type: "rsync",
      rsync__exclude: [".git/", "tmp/", "log/", "lib/", "docs/", "public/"]

    provider.token = ENV["DIGITAL_OCEAN_TOKEN"]
    provider.image = 'ubuntu-14-04-x64'
    provider.region = 'nyc2'
    provider.size = '1gb'
    provider.ssh_key_name = `hostname`.strip # this key name comes from https://cloud.digitalocean.com/settings/security
    # and is used for vagrant ssh; the default is Vagrant and the key will be created if the public key is not already there
  end

To boot a machine on Digital Ocean, you need to register a DIGITAL_OCEAN_TOKEN API key. If you have already added an SSH key to your account, set its name in the provider.ssh_key_name parameter.

Before provisioning, you need to install the [vagrant-digitalocean](https://github.com/smdahlen/vagrant-digitalocean) plugin with vagrant plugin install vagrant-digitalocean. To see all available images (regions, sizes) run vagrant digitalocean-list images $DIGITAL_OCEAN_TOKEN. At the end run vagrant up --provider=digital_ocean. If you want to resize the droplet's memory, simply change provider.size and run vagrant rebuild.
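
Collected in one place, the Digital Ocean workflow looks roughly like this (the token value is a placeholder):

export DIGITAL_OCEAN_TOKEN=your-api-token              # placeholder, taken from the Digital Ocean control panel
vagrant plugin install vagrant-digitalocean            # install the provider plugin
vagrant digitalocean-list images $DIGITAL_OCEAN_TOKEN  # list available images
vagrant up --provider=digital_ocean                    # create and provision the droplet
vagrant rebuild                                        # recreate the droplet after changing provider.size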

We will use the ~/.my_app_staging.env file to define all the secrets that we use, like export RAILS_ENV=production. The VirtualBox and Digital Ocean provision scripts run as the root user. Please note that vagrant ssh on VirtualBox uses the vagrant user, but on Digital Ocean it is root. Since root access is not recommended in production, it is better to stick with a deployer user; some other advice for production deployment is included as comments at the end of the install script below. To ssh as the deployer user on VirtualBox, run ssh -p 2222 deployer@127.0.0.1.
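
As an illustration of what the secrets file could contain (only RAILS_ENV comes from the text above; the other variables are hypothetical examples):

# ~/.my_app_staging.env - copied to /vagrant/.secrets.env by the file provisioner
export RAILS_ENV=production
export SECRET_KEY_BASE=replace-with-output-of-rake-secret                      # hypothetical example
export DATABASE_URL=postgres://deployer:password@localhost/myapp_production   # hypothetical example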

Puma

It is usually good practice to keep all configuration files in the git repository so you can revert if needed. We will use the config/server folder and place several configuration files and the deployment script there.

Some advice is taken from the Digital Ocean tutorial.

Files: puma.rb, puma-manager.conf, puma.conf, puma-project-list.conf and nginx.conf

sudo start puma-manager # to start all listed projects
sudo start/stop/restart puma app=/home/deploy/appname # for managing a particular project
#!/bin/bash
# install_puma_and_nginx.sh
# set -x (or set -v) can be used for debugging this script
set -e # Any subsequent commands which fail will cause the shell script to exit immediately
# For the sake of simplicity, project folder is /vagrant
# server config are in /vagrant/config/server
echo STEP: install puma
sudo -i -u deployer /bin/bash -c 'cd /vagrant && gem install puma'
cp /vagrant/config/server/puma.conf /vagrant/config/server/puma-manager.conf /etc/init
echo "/vagrant" > /etc/puma-project-list.conf
mkdir /puma_shared
chown deployer:deployer /puma_shared
sudo -u deployer mkdir /puma_shared/pids /puma_shared/sockets /puma_shared/log
if [[ -f /vagrant/tmp/pids/server.pid ]]; then
echo STEP: stop current rails server
kill -9 `cat /vagrant/tmp/pids/server.pid` || rm /vagrant/tmp/pids/server.pid
fi
echo STEP: start puma
start puma-manager
echo STEP: install and configure nginx
apt-get -y install nginx
ln -s /vagrant/config/server/nginx.conf /etc/nginx/sites-enabled/default -f
service nginx restart
echo STEP: Visit your site on http://localhost:3000 or http://`ifconfig eth0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}'`
echo STEP: tail -f /var/log/syslog
echo STEP: tail -f /var/log/nginx/error.log
echo STEP: tail -f /puma_shared/log/puma.stderr.log
echo STEP: tail -f /vagrant/log/production.log
# create password for deployer, run as root:
# $ passwd deployer
# add your public key for easier login
# $ ssh-copy-id deployer@121...
# disable root login in /etc/ssh/sshd_config
# PermitRootLogin no
# service ssh restart
# https://www.digitalocean.com/community/tutorials/additional-recommended-steps-for-new-ubuntu-14-04-servers
# enable firewall for ssh and 80
# sudo ufw allow ssh # ssh 22
# sudo ufw allow 80/tcp # http
# sudo ufw allow 443/tcp # https
# sudo ufw allow 3306 # mysql
# sudo ufw show added
# sudo ufw enable
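On the droplet (or from the bootstrap provision script) this install script is run as root; a hedged example, assuming it is kept in config/server as described above:

sudo bash /vagrant/config/server/install_puma_and_nginx.sh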
# config/server/nginx.conf
upstream app {
  # Path to Puma SOCK file, as defined previously
  server unix:/puma_shared/sockets/puma.sock fail_timeout=0;
}
server {
  listen 80;
  server_name localhost;
  root /vagrant/public;
  try_files $uri/index.html $uri @app;
  location @app {
    proxy_pass http://app;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
  }
  error_page 500 502 503 504 /500.html;
  client_max_body_size 4G;
  keepalive_timeout 10;
}
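Since nginx picks up this file through the symlink in /etc/nginx/sites-enabled, it is worth validating the configuration before restarting (standard nginx commands, nothing specific to this setup):

sudo nginx -t               # check configuration syntax
sudo service nginx restart  # reload with the new config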
# /etc/init/puma-manager.conf - manage a set of Pumas
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma.conf for how to manage a single Puma instance.
#
# Use "stop puma-manager" to stop all Puma instances.
# Use "start puma-manager" to start all instances.
# Use "restart puma-manager" to restart all instances.
# Crazy, right?
#
description "Manages the set of puma processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Text file that lists the Puma apps to manage,
# one application path per line
env PUMA_CONF="/etc/puma-project-list.conf"
pre-start script
  for i in `cat $PUMA_CONF`; do
    app=`echo $i | cut -d , -f 1`
    logger -t "puma-manager" "Starting $app"
    start puma app=$app
  done
end script
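For reference, the project list file that puma-manager reads contains one application path per line (only the first comma-separated field is used as the app path); with the single /vagrant project created by the install script above it is simply:

# /etc/puma-project-list.conf
/vagrant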
# /etc/init/puma.conf - Puma config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma-manager.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
# sudo start puma app=PATH_TO_APP
# sudo stop puma app=PATH_TO_APP
# sudo status puma app=PATH_TO_APP
#
# or use the service command:
# sudo service puma {start,stop,restart,status}
#
description "Puma Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid deployer
setgid deployer
respawn
respawn limit 3 30
instance ${app}
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv/rvm
# quoted heredoc to tell /bin/sh not to interpret
# variables
# source ENV variables manually as Upstart doesn't, eg:
#. /etc/environment
exec /bin/bash <<'EOT'
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
export HOME="$(eval echo ~$(id -un))"
if [ -d "/usr/local/rbenv/bin" ]; then
  export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
elif [ -d "$HOME/.rbenv/bin" ]; then
  export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
elif [ -f /etc/profile.d/rvm.sh ]; then
  source /etc/profile.d/rvm.sh
elif [ -f /usr/local/rvm/scripts/rvm ]; then
  source /usr/local/rvm/scripts/rvm
elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
  source "$HOME/.rvm/scripts/rvm"
elif [ -f /usr/local/share/chruby/chruby.sh ]; then
  source /usr/local/share/chruby/chruby.sh
  if [ -f /usr/local/share/chruby/auto.sh ]; then
    source /usr/local/share/chruby/auto.sh
  fi
  # if you aren't using auto, set your version here
  # chruby 2.0.0
fi
cd $app
source /home/deployer/.profile
logger -t puma `env`
logger -t puma "Starting server: $app"
exec bundle exec puma -C config/server/puma.rb
EOT
end script
# config/server/puma.rb
# Change to match your CPU core count
workers 2
# Min and Max threads per worker
threads 1, 6
app_dir = "/vagrant"
shared_dir = "/puma_shared" # the shared dir is outside of /vagrant since there are permission problems when it is inside /vagrant
# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env
# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"
# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true
# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
require "active_record"
begin
  ActiveRecord::Base.connection.disconnect!
rescue ActiveRecord::ConnectionNotEstablished
end
ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[rails_env])
end
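Because activate_control_app is enabled and the state file lives in the shared dir, a running server can be inspected and restarted with pumactl; a sketch assuming the paths from puma.rb above:

cd /vagrant
sudo -u deployer bundle exec pumactl -S /puma_shared/pids/puma.state stats   # show workers and threads
sudo -u deployer bundle exec pumactl -S /puma_shared/pids/puma.state restart # restart the server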