while True:
    # XXX need to yield to
    bytes = self.message_spool_file.read(1024)
    if not bytes:
        for protocol in protocols:
            protocol.finish()
        break

def make_worker(protocol):
    print "*"*30, "Made a worker for %s bytes" % (len(bytes),)
def worker(protocol):
    print "*"*30, "Writing %s bytes" % (len(bytes),)
@robhaswell
robhaswell / gist:2368828
Created April 12, 2012 16:26
mod_actions script path is being rewritten by mod_rewrite
The problem we're having is that mod_actions is interfering with %{REQUEST_URI} in a
RewriteCond that is used to prevent a potentially looping RewriteRule. Our .htaccess
looks like this:
RewriteEngine on
RewriteBase /
RewriteCond %{REQUEST_URI} !^/test
RewriteRule ^(.*)$ /test/$1 [L]
And the internal redirect chain is happening like this:
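The redirect-chain output is truncated above. As a general note (not from the original gist): because the internal redirect performed by mod_actions changes %{REQUEST_URI}, a common workaround is to test %{THE_REQUEST} instead, which always holds the original client request line:

```apache
RewriteEngine on
RewriteBase /
# THE_REQUEST holds the raw request line ("GET /foo HTTP/1.1") and is not
# altered by internal redirects, unlike REQUEST_URI.
RewriteCond %{THE_REQUEST} !\s/test
RewriteRule ^(.*)$ /test/$1 [L]
```

Whether this applies to the mod_actions case here is an assumption; it is the standard technique for RewriteCond guards that must survive internal redirects.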
@robhaswell
robhaswell / gist:3181495
Created July 26, 2012 10:57
ZFS recovery challenge
This problem concerns the TPJ node "130" (178.33.229.130). On Friday it suffered a failure with out-of-date replication, which was then stashed. The current concern is the filesystem mail-1636: there are emails which you can see in the AP-SMTP log but which do not exist on the filesystem.
There is an hcl command which should recover these files:
hcl promote_stash -f mail-1636 -s 1342800864628
This attempts to mount a clone of the filesystem on /hcfs-tmp, and recv the stashed snapshot from this file into it:
/hcfs-stash/mail-1636/1342800864628-1342788663011_178.33.229.130_1342786125589_178.33.229.130_0-1342800590083_178.33.229.130_1342788663011_178.33.229.130_0/data
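For orientation, a rough sketch of what such a promote-from-stash flow might look like in raw ZFS terms. hcl is Hybrid Cluster's internal tool, so the pool name (hpool), snapshot name, and flags below are illustrative assumptions, not its actual implementation:

```
# Make a writable clone of the filesystem and mount it at /hcfs-tmp
zfs clone hpool/mail-1636@recovery hpool/hcfs-tmp
zfs set mountpoint=/hcfs-tmp hpool/hcfs-tmp

# Replay the stashed replication stream from the stash file into the clone
zfs recv -F hpool/hcfs-tmp < /hcfs-stash/mail-1636/<stash-id>/data
```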
@robhaswell
robhaswell / gist:3293536
Created August 8, 2012 08:44
sync-images output
chroot-website.com:/usr/local/share/php/symfony/config# logout
hybrid@htz138896:~$ hcl snapshot-image image-tpjenv
Taking snapshot version 10 of image tpjenv
Done, resulting images:
=== Available images ===
basejail @v0.7
php52apache @v0.9
twisted @v0.4
phpapache @v0.24
[15:18:55] Angelo RC: there are a command for showing what process uses most disk write?
[15:19:17] Rob Haswell: not that I know of, I'm afraid
[15:19:56] Rob Haswell: you can use "iostat" to see the current disk IO utilisation
[15:20:01] Angelo RC: yes
[15:20:06] Angelo RC: because i have two server
[15:20:09] Angelo RC: in one
[15:20:14] Angelo RC: root@rcs7 [~]# iostat -m
Linux 2.6.32-279.5.1.el6.x86_64 (rcs7.xdom.it) 22/08/2012 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
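Per-process write accounting does exist on Linux: tools such as iotop and pidstat -d read the counters in /proc/<pid>/io. A minimal sketch of the same idea (Linux-only; the helper name is mine):

```python
import os

def process_write_bytes():
    """Return {pid: cumulative bytes written} from /proc/<pid>/io.

    Linux-only; processes we lack permission to inspect are skipped.
    """
    totals = {}
    if not os.path.isdir("/proc"):
        return totals
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/io" % pid) as f:
                for line in f:
                    # write_bytes counts bytes actually sent to the block layer
                    if line.startswith("write_bytes:"):
                        totals[int(pid)] = int(line.split()[1])
        except (IOError, OSError):
            continue  # process exited, or permission denied
    return totals

# Show the five heaviest writers since boot:
for pid, nbytes in sorted(process_write_bytes().items(),
                          key=lambda kv: kv[1], reverse=True)[:5]:
    print(pid, nbytes)
```

Unlike iostat, which reports per-device utilisation, this attributes writes to individual processes.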
hybridcluster.tests.test_juggling.ComputeBestSlavesTests.test_idealSlaves
===============================================================================
[FAIL]
Traceback (most recent call last):
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.3.0-py2.7-macosx-10.8-x86_64.egg/twisted/internet/defer.py", line 138, in maybeDeferred
result = f(*args, **kw)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.3.0-py2.7-macosx-10.8-x86_64.egg/twisted/internet/_utilspy3.py", line 41, in runWithWarningsSuppressed
reraise(exc_info[1], exc_info[2])
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.3.0-py2.7-macosx-10.8-x86_64.egg/twisted/internet/_utilspy3.py", line 37, in runWithWarningsSuppressed
result = f(*a, **kw)
rescue-bsd# uname -a
FreeBSD rescue-bsd.ovh.net 8.3-RELEASE-p3 FreeBSD 8.3-RELEASE-p3 #0: Tue Jun 12 00:39:29 UTC 2012 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
rescue-bsd# fetch http://hybridcluster.net/bootstrap.sh ; chmod +x bootstrap.sh ; ./bootstrap.sh
bootstrap.sh 100% of 568 B 6944 kBps
mkdir: /root/bin: File exists
Fetching ftp://ftp.freebsd.org/pub/FreeBSD/ports/amd64/packages-8.3-release/Latest/bash.tbz...
/var: write failed, filesystem is full
share/doc/bash/FAQ: Write failed
share/doc/bash/CHANGES: Write failed
share/doc/bash/COMPAT: Write to restore size failed
@robhaswell
robhaswell / gist:5134307
Created March 11, 2013 13:41
Conversation with #zfs on recovering data
[13:18] <blSnoopy> Mopatop: eating 10mb off of a disk at both ends kill all zfs labels.. good luck with that :P
[13:18] * Wil_ has quit (Ping timeout: 276 seconds)
[13:18] <blSnoopy> kills*
[13:19] <Mopatop> but I thought ZFS was made by magical elves
[13:19] * Wil_ (~wil@175.156.134.175) has joined #zfs
[13:19] <sensille> I'm sure they can craft something for you
[13:19] <sensille> if you find them
[13:20] <Mopatop> I'm searching for them in the disks of a machine with an identical setup
[13:20] <sensille> is it a pool with just a single disk?
[13:20] <blSnoopy> you could try to manually find the uberblocks and perhaps recreate one of the labels
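Context for the channel's advice: per the ZFS on-disk specification, each vdev carries four 256 KiB labels, two at the front of the device and two at the back, and each label embeds an array of uberblocks. Wiping ~10 MB at both ends therefore overwrites every label copy, which is why recovery falls back to scanning the remaining data for uberblocks. A sketch of the label layout:

```python
LABEL_SIZE = 256 * 1024  # each ZFS vdev label is 256 KiB

def label_offsets(device_size):
    """Byte offsets of the four vdev labels: L0, L1 at the front, L2, L3 at the back."""
    return [0,                            # L0
            LABEL_SIZE,                   # L1
            device_size - 2 * LABEL_SIZE, # L2
            device_size - LABEL_SIZE]     # L3

# Only 512 KiB at each end holds labels, so removing 10 MB from both
# ends destroys all four copies with a large margin.
```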