Jeff Frost (jfrost) • PostgreSQL Experts, Inc.
jfrost / gist:697b61c9844557527726
Created March 24, 2015 03:30
requeue 50,000 failed jobs
50_000.downto(0) { |i| Resque::Failure.requeue(i); Resque::Failure.remove(i) }
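# Counting down means remove(i) only shifts the indexes of entries above i, which have already been requeued and removed.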
jfrost / gist:849db4c0a585e417caca
Created March 22, 2015 00:51
What are my working resque workers working on?
Resque.working.each {|w| puts w, w.processing }
if os.getuid() != 0:
    print _("You need to be root to run this application")
    sys.exit(1)
# lower our priority: nice 19 plus the ionice "idle" scheduling class (-c3)
os.nice(19)
subprocess.call(["ionice", "-c3", "-p", str(os.getpid())])
# run the main code
main(options)
jfrost / mksha512sum.py
Created February 3, 2015 18:30
How to make a sha512sum for use in /etc/shadow in python
#!/usr/bin/python
import crypt
import random
import string
def getsalt(mysalt = "", chars = string.letters + string.digits):
    # generate a random 16-character 'salt'
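The gist preview is cut off at the salt helper. For reference, here is a minimal self-contained sketch of the technique the title describes (the password is illustrative, not from the gist): the "$6$" salt prefix makes crypt() use SHA-512 and return a hash in the format /etc/shadow expects.
#!/usr/bin/python
# Hedged sketch (not the gist's exact code): build a random 16-character
# salt, then let crypt() produce a SHA-512 hash via the "$6$" prefix.
import crypt
import random
import string

def getsalt(chars = string.letters + string.digits):
    return "".join(random.choice(chars) for _ in range(16))

print crypt.crypt("change-me", "$6$" + getsalt())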
require 'timeout'
# The resolvable mixin defines behavior for evaluating and returning fact
# resolutions.
#
# Classes including this mixin should implement at least a #name method describing
# the value being resolved and a #resolve_value that actually executes the code
# to resolve the value.
module Facter::Core::Resolvable
jfrost / gist:e36a4fdd2812b64c3ac9
Created November 5, 2014 21:12
pgbouncer simple benchmark results
So, I did a quick test with pgbouncer running on the local host and talking to a remote PostgreSQL backend. The script opened a connection, ran a quick, almost instantaneous query (SELECT now()), collected the results, and closed the connection, looping over this 50,000 times.
Connecting to the bouncer over the local unix socket, it took 31s to perform all the queries.
Connecting to the bouncer over localhost, it took 45s to perform all the queries.
Connecting to the bouncer running on the remote server, it took 1m6s.
Without using pgbouncer, it took 3m34s.
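The benchmark script itself isn't included, and its language and driver aren't stated; a minimal sketch of the loop described above, assuming Python with psycopg2 and pgbouncer listening on its default port 6432, might look like this.
#!/usr/bin/python
# Sketch of the connect / query / disconnect loop described above.
# The DSN (unix socket directory, port 6432, dbname) is an assumption.
import time
import psycopg2

start = time.time()
for i in range(50000):
    conn = psycopg2.connect("host=/var/run/postgresql port=6432 dbname=postgres")
    cur = conn.cursor()
    cur.execute("SELECT now()")
    cur.fetchall()
    cur.close()
    conn.close()
print "elapsed: %0.1fs" % (time.time() - start)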
#!/usr/bin/ruby
# Queries a PostgreSQL database and publishes statistics to Ganglia using gmetric.
#
# == Install Dependencies ==
#
# sudo apt-get install ruby ganglia-monitor build-essential
#
# == Usage ==
#
# postgres_gmetric.rb <databasename>
jfrost / slony_set_add_table_sequence.sql
Last active August 29, 2015 14:07
Generate your SET ADD TABLE and SET ADD SEQUENCE slonik statements for initial subscription. This will only add tables with primary keys.
SELECT 'SET ADD TABLE (SET id = 1, origin = 1, FULLY QUALIFIED NAME = ''' || nspname || '.' || relname || ''', comment=''' || nspname || '.' || relname || ' TABLE'');'
  FROM pg_class
  JOIN pg_namespace ON relnamespace = pg_namespace.oid
 WHERE relkind = 'r'
   AND relhaspkey
   AND nspname NOT IN ('information_schema', 'pg_catalog')
 ORDER BY pg_total_relation_size(pg_class.oid) DESC;
SELECT 'SET ADD SEQUENCE (SET id = 1, origin = 1, FULLY QUALIFIED NAME = ''' || n.nspname || '.' || c.relname || ''', comment=''' || n.nspname || '.' || c.relname || ' SEQUENCE'');'
  FROM pg_class c, pg_namespace n
 WHERE c.relnamespace = n.oid
   AND c.relkind = 'S';
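One way to run it and capture the generated statements into a slonik script (the database name and output file are placeholders):
psql -AXqt -d mydb -f slony_set_add_table_sequence.sql >> subscribe.slonik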
jfrost / pgpool.conf
Last active August 29, 2015 14:07
pgpool2 watchdog config with sudo and ip addr instead of ifconfig
# Watchdog
#------------------------------------------------------------------------------
use_watchdog = on
delegate_IP = '10.10.10.100'
wd_hostname = '10.10.10.21'
wd_port = 9000
ifconfig_path = '/usr/bin'
arping_path = '/usr/bin'
if_up_cmd = 'sudo ip addr add $_IP_$ dev eth0'
if_down_cmd = 'sudo ip addr del $_IP_$ dev eth0'
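For the sudo'd ip commands to work non-interactively, the account pgpool runs as needs passwordless sudo for that binary; a sudoers entry roughly like the following would cover it (the username and the path to ip are assumptions, not from the gist).
# /etc/sudoers.d/pgpool (assumed user and path)
postgres ALL=(root) NOPASSWD: /sbin/ip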
rsync -e "ssh -c arcfour postgres@192.168.6.10" -rLKpts --delete-excluded --inplace --exclude='/pg_xlog/*' --exclude='/pg_log/*' --exclude=/recovery.conf --exclude=/postmaster.pid :/mnt/pgdata/9.2/main/ /tmp/main/