
judotens /
Last active March 7, 2022 00:54
Celery Run Bash Project
from __future__ import absolute_import
import os
from subprocess import Popen, PIPE
import datetime
import time
from celery import Celery
from celery import states
from os.path import dirname, join
judotens /
Last active December 3, 2023 11:53
Export Namecheap domain zones without API
pip install selenium
python <namecheap_account> <namecheap_password> <domain>
judotens / README.txt
Last active June 19, 2021 17:38
Crosswords Generator
Best paired with these JS:
Usage: [options]
  -h, --help            show this help message and exit
  -r ROWS, --rows=ROWS  Set row height
  -c COLUMNS, --columns=COLUMNS
                        Set column width
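The help text above is optparse-style output; a minimal sketch of a parser producing it might look like this (the `dest` names and defaults are assumptions, not taken from the original gist):

```python
from optparse import OptionParser

# Sketch of an option parser matching the usage text above.
# Defaults of 15x15 are assumed, not from the original script.
parser = OptionParser()
parser.add_option("-r", "--rows", dest="rows", type="int", default=15,
                  help="Set row height")
parser.add_option("-c", "--columns", dest="columns", type="int", default=15,
                  help="Set column width")

# Example invocation, parsing an explicit argument list:
opts, args = parser.parse_args(["-r", "12", "--columns=10"])
```

`parser.print_help()` then emits the `-h/-r/-c` listing shown above.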
judotens /
Last active August 29, 2015 14:02
Botnet .IptabLes & .IptabLex cleaner
# botnet .IptabLes & .IptabLex cleaner
# tested on my ec2 based on Amazon AMI
# here i attached my version ->
# @judotens
# 1.1M

Share Counts

I have always struggled with getting the various share buttons from Facebook, Twitter, Google Plus, Pinterest, etc. to align correctly and not look like a tacky explosion of buttons. After seeing a number of sites roll their own share buttons with counts (for example, The Next Web), I decided to look into the various APIs and how to simply return the share count.

If you want to roll all of these up into a single jQuery plugin, check out Sharrre.

Many of these API calls and methods are undocumented, so anticipate that they will change in the future. Also, if you plan to roll these out across a site, I would recommend creating a simple endpoint that periodically caches results from all of the APIs so that you are not overloading the services with requests.


# Quick and dirty demonstration of CVE-2014-0160 by Jared Stafford
# The author disclaims copyright to this source code.
import sys
import struct
import socket
import time
import select
judotens /
Last active December 30, 2015 02:59
Find Indonesian song and lyrics from KapanLagi
from StringIO import StringIO
import gzip, BeautifulSoup, sys, urllib2, urllib
main_url = ""
def buka(url):  # "buka" is Indonesian for "open"
    request = urllib2.Request(url)
    request.add_header('Accept-encoding', 'gzip')
    response = urllib2.urlopen(request)
    if response.info().get('Content-Encoding') == 'gzip':
        buf = StringIO(response.read())
        return gzip.GzipFile(fileobj=buf).read()
    return response.read()
judotens /
Last active June 26, 2024 20:49
Scrape BitCoin private keys from
# scrape all leaked bitcoin private keys into a tab separated text
# <private key>\t<bitcoin_address>
# supports autoresume. just add this line to your cron: * * * * bash
# results stored on keys.txt
if [ ! -f ]; then prev=0; else prev=`cat`; fi;
if [ -z "$1" ]; then akhir=10; else akhir=$1; fi;  # "akhir" is Indonesian for "end"
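The autoresume trick above (cron re-runs the script, which picks up from a counter stored in a file) boils down to a small state-file helper. A sketch, using a hypothetical state-file path since the original gist's filename was truncated:

```shell
# advance_page STATE_FILE: read the last processed page from the state
# file (0 if absent), bump it, persist it, and echo the new page number.
advance_page() {
    state="$1"
    if [ ! -f "$state" ]; then prev=0; else prev=$(cat "$state"); fi
    next=$((prev + 1))
    echo "$next" > "$state"
    echo "$next"
}
```

Each cron invocation calls `advance_page`, so a killed run simply resumes at the next page on the following minute.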


Twitter for iPhone

Consumer key: IQKbtAYlXLripLGPWd0HUA
Consumer secret: GgDYlkSvaPxGxC4X8liwpUoqKwwr3lCADbz8A7ADU

Twitter for Android

Consumer key: 3nVuSoBZnx6U4vzUxf5w
Consumer secret: Bcs59EFbbsdF6Sl9Ng71smgStWEGwXXKSjYvPVt7qys

Twitter for Google TV

Consumer key: iAtYJ4HpUVfIUoNnif1DA

judotens /
Created November 29, 2013 08:11
Topsy search scraper
# scrape tweets from
import sys, urllib, urllib2, json, random
def search(query):
    data = {'q': query, 'type': 'tweet', 'offset': 1, 'perpage': 1000, 'window': 'a', 'sort_method': '-date', 'apikey': '09C43A9B270A470B8EB8F2946A9369F3'}
    url = "" + urllib.urlencode(data)
    response = urllib2.urlopen(url)
    o = json.loads(response.read())
    res = o['response']