from __future__ import absolute_import
import os
from subprocess import Popen, PIPE
import datetime
import time
from celery import Celery
from celery import states
from os.path import dirname, join
pip install selenium
python <namecheap_account> <namecheap_password> <domain>
Best paired with these JS:
Usage: [options]

  -h, --help            show this help message and exit
  -r ROWS, --rows=ROWS  Set row height
  -c COLUMNS, --columns=COLUMNS
                        Set column width
# botnet .IptabLes & .IptabLex cleaner
# tested on my EC2 instance running the Amazon Linux AMI
# my version is attached here ->
# @judotens

Share Counts

I have always struggled with getting all the various share buttons from Facebook, Twitter, Google Plus, Pinterest, etc. to align correctly and not look like a tacky explosion of buttons. Seeing a number of sites rolling their own share buttons with counts, for example The Next Web, I decided to look into the various APIs to find out how to simply return the share count.

If you want to roll all of these up into a single jQuery plugin, check out Sharrre.

Many of these API calls and methods are undocumented, so anticipate that they will change in the future. Also, if you are planning on rolling these out across a site, I would recommend creating a simple endpoint that periodically caches results from all of the APIs so that you are not overloading the services with requests.
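The caching endpoint suggested above is straightforward to sketch. Here is a minimal, hypothetical TTL cache in Python: `fetch_fn` stands in for whatever function actually calls a share-count API (none of the real endpoints are assumed here), and each URL is re-fetched at most once per `ttl` seconds.

```python
import time

class ShareCountCache:
    """Serve share counts from a local cache so each upstream API is
    queried at most once per `ttl` seconds per URL."""

    def __init__(self, fetch_fn, ttl=300):
        self.fetch_fn = fetch_fn  # callable(url) -> live share count
        self.ttl = ttl            # seconds before a cached count goes stale
        self._cache = {}          # url -> (fetched_at, count)

    def get(self, url):
        now = time.time()
        entry = self._cache.get(url)
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]       # still fresh: serve from cache
        count = self.fetch_fn(url)
        self._cache[url] = (now, count)
        return count
```

Pointing your share buttons at an endpoint backed by something like this keeps page loads fast and keeps the third-party services from seeing one request per visitor.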


# Quick and dirty demonstration of CVE-2014-0160 by Jared Stafford
# The author disclaims copyright to this source code.
import sys
import struct
import socket
import time
import select
from StringIO import StringIO
import gzip, BeautifulSoup, sys, urllib2, urllib
main_url = ""
def buka(url):
    # fetch a page, transparently handling gzip-compressed responses
    request = urllib2.Request(url)
    request.add_header('Accept-encoding', 'gzip')
    response = urllib2.urlopen(request)
    if response.info().get('Content-Encoding') == 'gzip':
        # server sent a compressed body: decompress it before returning
        buf = StringIO(response.read())
        return gzip.GzipFile(fileobj=buf).read()
    return response.read()
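The gzip branch above can be exercised without any network access by round-tripping bytes through the `gzip` module. This is a Python 3 sketch of the same decision (`decode_body` is a hypothetical name, not part of the original gist):

```python
import gzip
import io

def decode_body(body, content_encoding):
    # same branch as buka(): only decompress when the server declared
    # Content-Encoding: gzip
    if content_encoding == 'gzip':
        return gzip.GzipFile(fileobj=io.BytesIO(body)).read()
    return body
```

Compressing a payload with `gzip.compress` and feeding it back through `decode_body` should return the original bytes unchanged.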
# scrape all leaked bitcoin private keys into a tab separated text
# <private key>\t<bitcoin_address>
# supports autoresume; just add this line to your cron: * * * * bash
# results stored on keys.txt
if [ ! -f ]; then prev=0; else prev=`cat`; fi;
if [ -z $1 ]; then akhir=10; else akhir=$1; fi;
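The two lines above implement a resume counter, though the state filename was elided in the source. Here is the same pattern as a runnable sketch, with "state.txt" as a hypothetical stand-in for that missing filename:

```shell
# Autoresume counter: read the last position from a state file
# ("state.txt" is a hypothetical name; the source elided the real one),
# default the end position to 10, then persist the new position.
state="state.txt"
rm -f "$state"   # start clean for this demonstration
if [ ! -f "$state" ]; then prev=0; else prev=$(cat "$state"); fi
akhir=${1:-10}
echo "resuming from page $prev up to $akhir"
echo "$akhir" > "$state"
```

Run from cron, each invocation picks up where the previous one wrote its last position.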


Twitter for iPhone

Consumer key: IQKbtAYlXLripLGPWd0HUA
Consumer secret: GgDYlkSvaPxGxC4X8liwpUoqKwwr3lCADbz8A7ADU

Twitter for Android

Consumer key: 3nVuSoBZnx6U4vzUxf5w
Consumer secret: Bcs59EFbbsdF6Sl9Ng71smgStWEGwXXKSjYvPVt7qys

Twitter for Google TV

Consumer key: iAtYJ4HpUVfIUoNnif1DA
# scrape tweets from
import sys, urllib, urllib2, json, random
def search(query):
    # build the query string; the API base URL was elided in the source
    params = {'q': query, 'type': 'tweet', 'offset': 1, 'perpage': 1000,
              'window': 'a', 'sort_method': '-date',
              'apikey': '09C43A9B270A470B8EB8F2946A9369F3'}
    url = "" + urllib.urlencode(params)
    response = urllib2.urlopen(url)
    o = json.loads(response.read())
    res = o['response']
    return res