@sudara
Last active November 3, 2023 12:03
Example config needed to use monit with puma, monitoring workers for mem.
# This monit config goes in /etc/monit/conf.d
check process puma_master
  with pidfile /data/myapp/current/tmp/puma.pid
  start program = "/etc/monit/scripts/puma start"
  stop program = "/etc/monit/scripts/puma stop"
  group myapp

check process puma_worker_0
  with pidfile /data/myapp/current/tmp/puma_worker_0.pid
  if totalmem is greater than 230 MB for 2 cycles then exec "/etc/monit/scripts/puma kill_worker 0"

check process puma_worker_1
  with pidfile /data/myapp/current/tmp/puma_worker_1.pid
  if totalmem is greater than 230 MB for 2 cycles then exec "/etc/monit/scripts/puma kill_worker 1"

check process puma_worker_2
  with pidfile /data/myapp/current/tmp/puma_worker_2.pid
  if totalmem is greater than 230 MB for 2 cycles then exec "/etc/monit/scripts/puma kill_worker 2"

check process puma_worker_3
  with pidfile /data/myapp/current/tmp/puma_worker_3.pid
  if totalmem is greater than 230 MB for 2 cycles then exec "/etc/monit/scripts/puma kill_worker 3"
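The four worker stanzas differ only in the index, so if you script your provisioning, a short shell loop can generate them instead of hand-copying each block. A sketch using the same paths and threshold as above (adjust the worker count and paths to your app):

```shell
# Emit one monit "check process" stanza per puma worker.
# Paths and the 230 MB threshold mirror the example config above.
for i in 0 1 2 3; do
  cat <<EOF
check process puma_worker_$i
  with pidfile /data/myapp/current/tmp/puma_worker_$i.pid
  if totalmem is greater than 230 MB for 2 cycles then exec "/etc/monit/scripts/puma kill_worker $i"

EOF
done
```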
# This goes in the app as config/puma.rb
environment 'production'
workers 4
threads 1, 4
preload_app!
daemonize true
pidfile 'tmp/puma.pid'
stdout_redirect 'log/puma.log', 'log/puma.log', true
bind 'unix://tmp/puma.sock'
state_path 'tmp/puma.state'

on_worker_boot do |worker_index|
  # Write this worker's pid so monit can watch each worker individually
  File.open("tmp/puma_worker_#{worker_index}.pid", "w") { |f| f.puts Process.pid }

  # Reconnect to Redis and re-establish the database connection, since
  # connections opened before the fork cannot be shared across workers
  Redis.current.client.reconnect
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
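The per-worker pid files written in `on_worker_boot` are exactly what the monit rules above watch; you can spot-check a worker's memory by hand much like monit's `totalmem` test does. A minimal sketch, assuming a Linux `ps` and that the pid file already exists (note `totalmem` also counts child processes, so this is an approximation):

```shell
# Read a worker's pid from its pid file and print its resident set size in MB.
pid=$(cat tmp/puma_worker_0.pid)
rss_kb=$(ps -o rss= -p "$pid")
echo "worker 0: $((rss_kb / 1024)) MB"
```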
#!/usr/bin/env bash
# This monit wrapper script will be called by monit as root.
# Edit these variables to your liking.
RAILS_ENV=production
USER=myapp
APP_DIR=/data/myapp/current
PUMA_CONFIG_FILE=$APP_DIR/config/puma.rb
PUMA_PID_FILE=$APP_DIR/tmp/puma.pid
PUMA_SOCKET=$APP_DIR/tmp/puma.sock

# Check whether the puma master process is running
puma_is_running() {
  if [ -S "$PUMA_SOCKET" ]; then
    if [ -e "$PUMA_PID_FILE" ]; then
      # pgrep -P lists children of the master pid; in cluster mode the
      # workers are its children, so a match means the master is alive
      if pgrep -P "$(cat "$PUMA_PID_FILE")" > /dev/null; then
        return 0
      else
        echo "No puma process found"
      fi
    else
      echo "No puma pid file found"
    fi
  else
    echo "No puma socket found"
  fi
  return 1
}
case "$1" in
  start)
    echo "Starting puma..."
    rm -f "$PUMA_SOCKET"
    if [ -e "$PUMA_CONFIG_FILE" ]; then
      echo "cd $APP_DIR && RAILS_ENV=$RAILS_ENV bundle exec puma -C $PUMA_CONFIG_FILE"
      /bin/su - "$USER" -c "cd $APP_DIR && RAILS_ENV=$RAILS_ENV bundle exec puma -C $PUMA_CONFIG_FILE"
    else
      echo "No config file found"
      /bin/su - "$USER" -c "cd $APP_DIR && RAILS_ENV=$RAILS_ENV bundle exec puma --daemon --bind unix://$PUMA_SOCKET --pidfile $PUMA_PID_FILE"
    fi
    echo "done"
    ;;
  stop)
    echo "Stopping puma..."
    kill -s SIGTERM "$(cat "$PUMA_PID_FILE")"
    rm -f "$PUMA_PID_FILE"
    rm -f "$PUMA_SOCKET"
    echo "done"
    ;;
  restart)
    if puma_is_running; then
      echo "Hot-restarting puma..."
      kill -s SIGUSR2 "$(cat "$PUMA_PID_FILE")"
      echo "Double-checking the process restart..."
      sleep 15
      if puma_is_running; then
        echo "done"
        exit 0
      else
        echo "Puma restart failed :/"
      fi
    fi
    ;;
  phased_restart)
    if puma_is_running; then
      echo "Phased-restarting puma..."
      kill -s SIGUSR1 "$(cat "$PUMA_PID_FILE")"
      echo "Double-checking the process restart..."
      sleep 10
      if puma_is_running; then
        echo "done"
        exit 0
      else
        echo "Puma restart failed :/"
      fi
    fi
    ;;
  kill_worker*)
    if [ -z "$2" ]; then
      logger -t "puma_monit" -s "kill_worker called with no worker identifier"
      exit 1
    fi
    PID_DIR=$(dirname "$PUMA_PID_FILE")
    kill -s QUIT "$(cat "${PID_DIR}/puma_worker_$2.pid")"
    exit $?
    ;;
  *)
    echo "Usage: puma {start|stop|restart|phased_restart|kill_worker 0,1,2,...}" >&2
    ;;
esac
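For reference, the restart branches map directly onto puma's signal handling: SIGUSR2 asks the master for a hot restart, while SIGUSR1 asks for a phased restart that replaces workers one at a time (cluster mode only). The same effect can be had by hand; a sketch, assuming the pid file path used throughout the script:

```shell
# Phased restart by hand: the master pid comes from the file puma writes.
kill -s SIGUSR1 "$(cat /data/myapp/current/tmp/puma.pid)"
```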
@joemocha

This looks good.
Does this actually work?

@ziurjam

ziurjam commented Jan 30, 2016

Yes it does

@dnlserrano

Thanks man! ❤️

@saroar

saroar commented Jul 5, 2016

Hi, I used this config, and here is the result :(
[alif@ozrailsserver sites-enabled]$ ps aux | grep puma
alif 6929 1.8 3.1 467584 32508 ? D 02:28 0:02 puma: cluster worker 1: 6506 [10]
alif 6932 1.9 3.2 467584 32892 ? D 02:28 0:02 puma: cluster worker 0: 6506 [10]
alif 6938 1.9 3.1 467584 32272 ? R 02:28 0:02 puma: cluster worker 0: 6506 [10]
alif 6941 1.9 3.1 467584 31924 ? R 02:28 0:02 puma: cluster worker 1: 6506 [10]
alif 6948 2.0 3.0 467716 31348 ? D 02:28 0:02 puma: cluster worker 1: 6506 [10]
alif 6951 2.0 3.0 467716 31192 ? D 02:28 0:02 puma: cluster worker 0: 6506 [10]
alif 6956 2.0 3.0 467716 31440 ? D 02:28 0:02 puma: cluster worker 0: 6506 [10]
alif 6959 2.1 3.0 467716 31392 ? D 02:28 0:02 puma: cluster worker 1: 6506 [10]
alif 6964 2.1 3.5 467716 35736 ? D 02:28 0:02 puma: cluster worker 1: 6506 [10]
alif 6966 2.1 3.5 467716 36136 ? D 02:28 0:02 puma: cluster worker 0: 6506 [10]
alif 6972 2.1 3.4 467716 35204 ? R 02:28 0:01 puma: cluster worker 0: 6506 [10]
alif 6975 2.1 3.5 467716 36224 ? D 02:28 0:01 puma: cluster worker 1: 6506 [10]
alif 6983 2.1 3.9 467716 40284 ? R 02:29 0:01 puma: cluster worker 0: 6506 [10]
alif 6986 2.2 3.9 467716 39676 ? D 02:29 0:01 puma: cluster worker 1: 6506 [10]
alif 6991 2.2 3.8 467716 39552 ? R 02:29 0:01 puma: cluster worker 0: 6506 [10]
alif 6994 2.3 4.0 467716 40888 ? R 02:29 0:01 puma: cluster worker 1: 6506 [10]
alif 7014 0.0 0.0 112904 920 pts/0 S+ 02:30 0:00 grep --color=auto puma

2016/07/06 02:17:26 [alert] 5241#5241: 1024 worker_connections are not enough
2016/07/06 02:17:26 [error] 5241#5241: *8601 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: 188.166.162.99, request: "GET / HTTP/1.0", upstream: "http://127.0.53.53:80/500.html", host: "188.166.162.99"
2016/07/06 02:20:53 [alert] 5242#5242: 1024 worker_connections are not enough
2016/07/06 02:20:53 [error] 5242#5242: *9755 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: 188.166.162.99, request: "GET / HTTP/1.0", upstream: "http://127.0.53.53:80/", host: "188.166.162.99"
2016/07/06 02:20:53 [alert] 5242#5242: 1024 worker_connections are not enough
2016/07/06 02:20:53 [error] 5242#5242: *9755 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: 188.166.162.99, request: "GET / HTTP/1.0", upstream: "http://127.0.53.53:80/500.html", host: "188.166.162.99"
2016/07/06 02:21:17 [alert] 5242#5242: 1024 worker_connections are not enough
2016/07/06 02:21:17 [error] 5242#5242: *10779 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: 188.166.162.99, request: "GET / HTTP/1.0", upstream: "http://127.0.53.53:80/", host: "188.166.162.99"
2016/07/06 02:21:17 [alert] 5242#5242: 1024 worker_connections are not enough
2016/07/06 02:21:17 [error] 5242#5242: *10779 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: 188.166.162.99, request: "GET / HTTP/1.0", upstream: "http://127.0.53.53:80/500.html", host: "188.166.162.99"

@jun1st

jun1st commented Oct 12, 2017

check process puma_master
  with pidfile  /data/myapp/current/tmp/puma.pid
  start program = "/etc/monit/scripts/puma start"
  stop program = "/etc/monit/scripts/puma stop" 
  group myapp

What's the meaning of `group myapp` at the end of this config?

@mleszcz

mleszcz commented Apr 19, 2018

@jun1st

https://mmonit.com/monit/documentation/monit.html

SERVICE GROUPS
Service entries in the control file, monitrc, can be grouped together by the group statement. The syntax is simply (keyword in capital):

  GROUP groupname
With this statement it is possible to group similar service entries together and manage them as a whole. Monit provides functions to start, stop, restart, monitor and unmonitor a group of services, like so:

To start a group of services from the console:

  monit -g <groupname> start
To stop a group of services:

  monit -g <groupname> stop
To restart a group of services:

  monit -g <groupname> restart
A service can be added to multiple groups by using more than one group statement:

  group www
  group filesystem
