@andruby
Created January 26, 2011 19:48
Start and Stop tasks for resque workers, with capistrano deploy hook (without God)
after "deploy:symlink", "deploy:restart_workers"

##
# Rake helper task.
# http://pastie.org/255489
# http://geminstallthat.wordpress.com/2008/01/27/rake-tasks-through-capistrano/
# http://ananelson.com/said/on/2007/12/30/remote-rake-tasks-with-capistrano/
def run_remote_rake(rake_cmd)
  rake_args = ENV['RAKE_ARGS'].to_s.split(',')
  cmd = "cd #{fetch(:latest_release)} && #{fetch(:rake, "rake")} RAILS_ENV=#{fetch(:rails_env, "production")} #{rake_cmd}"
  cmd += "['#{rake_args.join("','")}']" unless rake_args.empty?
  run cmd
  set :rakefile, nil if exists?(:rakefile)
end

namespace :deploy do
  desc "Restart Resque Workers"
  task :restart_workers, :roles => :db do
    run_remote_rake "resque:restart_workers"
  end
end
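For reference, here is how the helper expands `RAKE_ARGS` into rake's bracket-argument syntax. This is a standalone sketch: the Capistrano `fetch`/`run` helpers are replaced by plain arguments, and the name `build_rake_cmd` and the sample paths are illustrative, not part of the gist.

```ruby
# Standalone sketch of run_remote_rake's command assembly; `fetch` and
# `run` are Capistrano helpers, so the release path and environment are
# passed in directly here.
def build_rake_cmd(rake_cmd, rake_args_env,
                   release: "/var/www/app/current", rails_env: "production")
  rake_args = rake_args_env.to_s.split(',')
  cmd = "cd #{release} && rake RAILS_ENV=#{rails_env} #{rake_cmd}"
  # RAKE_ARGS=a,b turns `some:task` into `some:task['a','b']`
  cmd += "['#{rake_args.join("','")}']" unless rake_args.empty?
  cmd
end

puts build_rake_cmd("resque:restart_workers", nil)
# cd /var/www/app/current && rake RAILS_ENV=production resque:restart_workers
puts build_rake_cmd("mail:send", "user@example.com")
# cd /var/www/app/current && rake RAILS_ENV=production mail:send['user@example.com']
```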
# Start a worker with proper env vars and output redirection
def run_worker(queue, count = 1)
  puts "Starting #{count} worker(s) with QUEUE: #{queue}"
  ops = { :pgroup => true,
          :err => [(Rails.root + "log/resque_err").to_s, "a"],
          :out => [(Rails.root + "log/resque_stdout").to_s, "a"] }
  env_vars = { "QUEUE" => queue.to_s }
  count.times do
    ## Using Kernel.spawn and Process.detach because a regular system() call
    ## would cause the processes to quit when capistrano finishes
    pid = spawn(env_vars, "rake resque:work", ops)
    Process.detach(pid)
  end
end

namespace :resque do
  task :setup => :environment

  desc "Restart running workers"
  task :restart_workers => :environment do
    Rake::Task['resque:stop_workers'].invoke
    Rake::Task['resque:start_workers'].invoke
  end

  desc "Quit running workers"
  task :stop_workers => :environment do
    pids = Array.new
    Resque.workers.each do |worker|
      pids.concat(worker.worker_pids)
    end
    if pids.empty?
      puts "No workers to kill"
    else
      syscmd = "kill -s QUIT #{pids.join(' ')}"
      puts "Running syscmd: #{syscmd}"
      system(syscmd)
    end
  end

  desc "Start workers"
  task :start_workers => :environment do
    run_worker("*", 2)
    run_worker("high", 1)
  end
end
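The `:pgroup => true` option is what actually decouples the worker's fate from the SSH session: the child becomes the leader of a new process group, so the signal delivered to Capistrano's group on disconnect never reaches it, and `Process.detach` reaps the child so it can't become a zombie. A minimal demonstration of the same pattern, using `sleep` in place of a real worker:

```ruby
# Spawn a child in its own process group and detach, mirroring run_worker.
pid = Process.spawn({ "QUEUE" => "demo" }, "sleep 30",
                    :pgroup => true, :out => "/dev/null")
Process.detach(pid)  # reaps the child in a background thread

# With :pgroup => true the child leads a new group (its pgid equals its
# own pid), distinct from this process's group.
child_pgid = Process.getpgid(pid)
puts child_pgid == pid                           # true
puts child_pgid != Process.getpgid(Process.pid)  # true

Process.kill("TERM", pid)  # clean up the demo child
```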
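A note on `stop_workers`: Resque treats QUIT as its graceful-shutdown signal (finish the current job, then exit), which is why the task sends `kill -s QUIT` rather than TERM or KILL. `worker_pids` shells out to `ps`; since Resque registers each worker under an id of the form `host:pid:queues`, the pids can also be recovered by parsing the ids themselves. A sketch with sample data standing in for `Resque.workers.map(&:to_s)`:

```ruby
# Sample worker ids in Resque's "host:pid:queues" convention.
worker_ids = ["web1:4012:high", "web1:4013:*,low"]

# Keep only workers registered from this host, then pull the pid field.
hostname = "web1"
pids = worker_ids
  .select { |id| id.start_with?("#{hostname}:") }
  .map    { |id| id.split(':')[1].to_i }

puts pids.inspect  # [4012, 4013]
```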
@tuplebunny

Dude. Thank you.

We are running Resque-workers (long-running rake tasks).

We want to start them inside a Capistrano hook.

This means:

  1. We want to type "cap production deploy".
  2. When the Capistrano script ends, we are disconnected from our remote machines.
  3. When the Capistrano script ends, the Resque-workers started by the Capistrano script are still running on our remote machines.

We've gotten Capistrano to execute rake tasks on a remote machine. We are also able to fork the tasks, using a variety of methods, including &, BACKGROUND=yes, ssh-ing a command, screen -d -m -X, etc.

Each of the above "worked" in varying capacities, but ultimately, when the Capistrano script ends, the connection to the remote machine is severed, and the rake tasks running on the remote machine are terminated.

From your gist, we applied the bare minimum:

Process.detach(spawn({'QUEUE'=>'*'}, 'rake resque:work', {pgroup: true}))

We put the above into a "standard Rails-rake task", inside lib/tasks/application.rake. We ask Capistrano to run our task inside application.rake, and ... and then it works.

Brilliant. Beautiful. Better still, it works. It works. It works. Thank you.

@1v

1v commented Nov 21, 2015

    Resque.workers.each do |worker|
      pids = pids | worker.worker_pids[0...-1]
    end

@shadoath

I'd like to point out that this can now be accomplished with a gem. Check out: https://github.com/sshingler/capistrano-resque
