
Alan Pinstein (apinstein)
Atlanta, GA
apinstein / ipfw bandwidth throttle.sh
Created August 3, 2009 19:38
ipfw bandwidth throttling
#!/bin/sh
#
# Use ipfw to throttle bandwidth.
# usage:
# ./throttle.sh # Throttle at default (60KB/s)
# ./throttle.sh 5 # Throttle at custom speed (5KB/s)
# ./throttle.sh off # Turn throttling off
# flush rules
ipfw del pipe 1
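
The gist preview cuts off after the flush. A minimal sketch of how the rest of the script might look, assuming a single pipe over all traffic; the rule number and argument handling are assumptions, not from the gist:

#!/bin/sh
# Sketch only -- defaults and rule numbers are assumptions.
KBPS=${1:-60}                         # default 60KB/s, per the usage comment

# flush any existing rule/pipe
ipfw del pipe 1 2>/dev/null
ipfw delete 1 2>/dev/null

if [ "$1" = "off" ]; then
    exit 0                            # rules flushed above; throttling is off
fi

# send all traffic through pipe 1, then cap the pipe's bandwidth
ipfw add 1 pipe 1 all from any to any
ipfw pipe 1 config bw ${KBPS}KByte/s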
apinstein / vfsStream example
Created December 29, 2009 19:39
vfsStream unit test demo
require_once('vfsStream/vfsStream.php');
vfsStreamWrapper::register();
makeTree(array(
    'root/file.php',
    'root/a/a.php',
    'root/a/b/b.php',
    'root/a/b/c/c.php',
    'root/b/',
    'root/c/d/',
)); // closing added; makeTree() is a gist helper not shown in this excerpt
Pearfarm FAQ
Q. Is this site supposed to replace PEAR's package hosting at pear.php.net?

A. Pearfarm is a more open alternative to pear.php.net's package hosting service. Anyone can host a PEAR package on Pearfarm without going through any kind of proposal/approval process.
Q. Why is Pearfarm's approach better than PEAR's package hosting?

A. The process for creating a new PEAR package is slow, strict, and onerous. Pearfarm follows a different philosophy. We believe that it's up to the community to decide which packages are good or not. We don't mandate any particular coding style, naming convention, or structure. This openness allows for more creativity and innovation at a faster pace than the PEAR process does. Of course, we encourage good and understandable code so we can all be happy!
Q. Is publishing a PEAR package really that hard?

A. Before Pearfarm, in order to publish a package you had to learn the package.xml spec or PEAR_PackageManager2. The learning curve wasn't small. Once you managed to build a
apinstein / gist:276500
Created January 13, 2010 19:21
shell script hrm...
#!/bin/sh
# shell script quoting problem demonstration
# I need to be able to set a shell variable with a command with some options, like so
PHP_COMMAND="php -d 'include_path=/path/with spaces/dir'"
# then use PHP_COMMAND to run something in another script, like this:
$PHP_COMMAND -r 'echo get_include_path();'
# the above fails when executed. However, if you copy/paste the output from this line and run it in the CLI, it works!
echo "$PHP_COMMAND -r 'echo get_include_path();'"
Git workflow

After trying a lot of different workflows, we've finally settled on this one for using git internally. It offers several benefits and avoids a bunch of pitfalls we kept running into. It isn't perfect for everyone, of course, but it has worked very well for us.
Benefits:
- Makes it easy to share a topic branch among multiple developers without lots of conflicts and rebase hell.
- Preserves the logical history of "started working on the topic at this point, merged it into master at this point."
- Still allows for frequent rebasing against master to ensure you stay up-to-date with the mainline.
---> master
   +---> topic-integration (shared across users via github)
      +----> topic-devA (local branch in devA's repo)
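
A hypothetical command-by-command version of the diagram above; branch and remote names are assumptions, not from the notes:

# one-time: create the shared integration branch off master and publish it
git checkout -b topic-integration master
git push origin topic-integration

# each developer works on a private branch off the shared one
git checkout -b topic-devA topic-integration
# ...commit work locally; private branches can rebase freely...

# publish by merging (never rebasing) into the shared branch
git checkout topic-integration
git pull origin topic-integration
git merge topic-devA
git push origin topic-integration

# staying current with mainline: merge master into the shared branch
git checkout topic-integration && git merge master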
apinstein / navigator.geolocation.getAccuratePosition
Created August 5, 2010 19:48
navigator.geolocation.getAccuratePosition
// navigator.geolocation.getAccuratePosition() is an improved version of navigator.geolocation.getCurrentPosition()
//
// getAccuratePosition() is designed and tested for iOS.
//
// The problem with getCurrentPosition() is that it returns an inaccurate position even with "enableHighAccuracy" enabled.
// The problem with watchPosition() is that it calls the success handler every second, and it is resource-intensive.
//
// getAccuratePosition() calls the callback only once, but uses watchPosition() internally to obtain a position that meets your accuracy needs.
// If the timeout is exceeded before a position meeting your accuracy requirement is found, the best position seen so far is returned to the
// success callback, and the error callback does not fire. If you really care about accuracy, you should check it
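//
// Hypothetical usage sketch (the option names below are assumptions, not
// necessarily the gist's actual API, which is truncated in this excerpt):
//
//   navigator.geolocation.getAccuratePosition(
//       function(pos) { alert(pos.coords.accuracy); },   // success: fires once
//       function(err) { alert(err.message); },           // error
//       { desiredAccuracy: 20, maxWait: 15000 }          // meters, milliseconds
//   );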
apinstein / executeAfter()
Created October 25, 2010 19:59
Simple wrapper that can be applied to any function to allow it to be called frequently but only run once it has gone uncalled for a certain amount of time. Great way to throttle specific types of requests.
// Magic-ness to only run the save callback after no saves have been issued for a while
var executeAfter = function(f, ms)
{
    var timer;
    var wrapper = function()
    {
        var passedArguments = arguments;
        if (timer)
        {
            window.clearTimeout(timer);
        }
        // completed from truncation: re-arm the timer; f fires after ms of quiet
        timer = window.setTimeout(function() {
            timer = null;
            f.apply(null, passedArguments);
        }, ms);
    };
    return wrapper;
};
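// Example usage (not from the gist): debounce a save handler.
//   var saveSoon = executeAfter(doSave, 500);
//   saveSoon(); saveSoon(); saveSoon();  // doSave() runs once, 500ms after the last call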
apinstein / xdebug minimal config
Created December 2, 2010 05:25
commented xdebug.ini showing minimal config to get debugging, profiling, and tracing working, along with some instructions
zend_extension=/path/to/xdebug.so
; enable starting debug with XDEBUG_SESSION_START=1
xdebug.remote_enable=1
; enable starting profiler with XDEBUG_PROFILE=1
xdebug.profiler_enable_trigger=1
; good idea to do this explicitly b/c it's hard to tell where it went otherwise
xdebug.profiler_output_dir=/tmp/
; To generate TRACES of script execution; see http://www.xdebug.org/docs/execution_trace
; this will happen ALWAYS (or set to 0 and use xdebug_start_trace; sadly no XDEBUG_XXX method to enable this via URL)
xdebug.auto_trace=0
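
To exercise the two triggers above from a shell (the URL is a placeholder; both query parameters are standard Xdebug 2 triggers):

# start a step-debugging session (needs xdebug.remote_enable=1)
curl 'http://localhost/index.php?XDEBUG_SESSION_START=1'
# write a cachegrind profile to xdebug.profiler_output_dir
curl 'http://localhost/index.php?XDEBUG_PROFILE=1'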
apinstein / Mac OS X
Created February 12, 2011 22:49
Want to see what processes are using your bandwidth? Don't have ntop?
#!/bin/sh
# Try this... kinda a hack but better than any other alternative I know of:
for i in `netstat -an | grep EST | awk '{ print $5; }' | sort | uniq | grep -v 127.0.0.1 | grep -v ::1 | sed -e 's/\([0-9]*\.[0-9]*\.[0-9]*\.[0-9]*\).*/\1/'`; do
    echo -n "Checking IP: $i => " && dig +short -x $i && echo -n " PROCESS: " && (lsof -i | grep $i) && echo ""
done
apinstein / S3 Security Model
Created June 23, 2011 19:36
Notes on S3 Security/Permissions Model
How S3 Permissions Work
- The AWS account that creates a bucket owns it.
- The owner of a bucket can never be changed.
- All billing for object usage goes to the bucket owner's account by default. That's one reason ownership cannot be changed.
- Note that objects in the bucket can have permissions that prevent even the bucket owner from editing/deleting them.
- There are three styles of permissions:
1. Bucket Policies
- Allows access control to be specified for AWS Accounts or IAM Users
- Specified in Access Policy Language
- Can DENY or ALLOW
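
As a concrete illustration of the bucket-policy style described above: a minimal ALLOW policy applied from a shell. The JSON, account ID, and bucket name are placeholders, and the aws CLI shown here postdates these notes; it is a sketch, not the notes' own tooling.

cat > /tmp/bucket-policy.json <<'EOF'
{
  "Version": "2008-10-17",
  "Statement": [{
    "Sid": "AllowExampleUserRead",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::123456789012:user/example-user" },
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::example-bucket/*"
  }]
}
EOF
aws s3api put-bucket-policy --bucket example-bucket --policy file:///tmp/bucket-policy.json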