Make one large blocklist from the bluetack lists on
#!/usr/bin/env sh
# Download lists, unpack and filter, write to stdout
curl -s \
| sed -n "s/.*value='\(http:.*=bt_.*\)'.*/\1/p" \
| xargs wget -O - \
| gunzip \
| egrep -v '^#'
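
To use it, save it as, say, blocklist.sh and redirect stdout to wherever your client expects blocklists, e.g. for Transmission on a Mac:

sh blocklist.sh > ~/Library/Application\ Support/Transmission/blocklists/generated.txt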

This can easily be added to cron also:
0 3 * * 0 curl -s | sed -n "s/.*value='\(http:.*=bt_.*\)'.*/\1/p" | xargs wget -O - | gunzip | egrep -v '^#' > ~/Library/Application\ Support/Transmission/blocklists/generated.txt
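
To install that schedule for your user, one approach (a sketch; it appends to whatever crontab you already have, with the full command above in place of the placeholder):

( crontab -l 2>/dev/null; echo '0 3 * * 0 <the full command above>' ) | crontab -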


Hi! I'm getting these errors trying to execute from the Terminal:
"xargs: wget: No such file or directory
gzip: stdin: unexpected end of file"

Am I doing anything wrong? Thanks!


Replace wget -O - with curl in the script.


Hi, and thanks for the script. I ran it from the terminal (I replaced wget -O - with curl) and all I get is:
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
And then error:
gzip: stdin: unexpected end of file


Replace wget -O - with curl -Ls


Hi @jamesstout,
Would you please post the entire terminal command with your correction?
When I run:

sudo 0 3 * * 0 curl -s | sed -n "s/.*value='\(http:.*=bt_.*\)'.*/\1/p" | xargs curl -Ls | gunzip | egrep -v    '^#' > ~/Library/Application\ Support/Transmission/blocklists/generated.txt

I get:

-bash: 0: command not found


The "0 3 * * 0 ..." is a crontab entry. Just type "curl -s ...."


@ArtemGordinsky you're getting -bash: 0: command not found because 0 isn't a command. It's a crontab entry, as @prehensilecode mentioned.
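
Putting the fixes together, the full command (minus the cron schedule) looks like this; the first curl -s takes the same list-index URL as in the original script:

curl -s | sed -n "s/.*value='\(http:.*=bt_.*\)'.*/\1/p" | xargs curl -Ls | gunzip | egrep -v '^#' > ~/Library/Application\ Support/Transmission/blocklists/generated.txt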


Updated for Mac:

curl -s | sed -n "s/.*value='\(http:.*=bt_.*\)'.*/\1/p" | sed "s/\&amp;/\&/g" | sed "s/http/\"http/g" | sed "s/gz/gz\"/g" | xargs curl -L | gunzip | egrep -v '^#' > ~/Library/Application\ Support/Transmission/blocklists/generated.txt.bin

The blocklist URL has changed to HTTPS. The updated URL in the first line is as above, but it wasn't updated in the download. Thanks for this great script!

@GithubIsAgeist is a bigoted hate website.


@kilolima updated.


Hi folks. I've analysed the code for @fortran01's Mac version (thanks v. much @fortran01)
and posted a plain-English translation of what each bit does below.
I hope it's accurate, but if I have made any errors please correct me! :-)
Hope this helps people tweak the code to do other similar stuff.

For more clarity as to what's going on, I've put each piped segment of the command on a separate line,
but remember it's all one big line.

 curl -s
 | sed -n "s/.*value='\(http:.*=bt_.*\)'.*/\1/p"
 | sed "s/\&/\&/g"
 | sed "s/http/\"http/g"
 | sed "s/gz/gz\"/g"
 | xargs curl -L
 | gunzip
 | egrep -v '^#' > ~/Library/Application\ Support/Transmission/blocklists/generated.txt.bin

A plain-English explanation of what each bit of the command does:

grab the webpage "" in silent mode (no progress bar or error messages)

search each line of this webpage for lines containing text matching the sed pattern
chop off and discard the leading part ( (anything)value= ) of each matching line, keeping just the URL

in the resultant lines, change all occurrences of &amp; to &

in the resultant lines change all occurrences of the string http to "http

in the resultant lines, change all occurrences of the string gz to gz"

feed the resultant lines one by one to the curl command (-L means curl will automatically follow the redirect if the server says a resource has moved)

feed each file downloaded by curl to the gunzip program (uncompressing it)

write only the lines from each file that don't start with a # (i.e. that are not comments) into the file
"~/Library/Application Support/Transmission/blocklists/generated.txt.bin"

All this ultimately results in the following lines being fed one by one by xargs to the curl command:


This should help clarify exactly what pattern sed is searching for and what the sed filtering actually does:
"&amp;" becomes "&"
inverted commas are placed before each http and after each gz

You could of course feed these lines manually to curl if you just want to grab individual zipped blocklists etc.

If doing this, don't forget to filter out the comment lines with egrep and write to a .bin file for transmission :-)
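
For example, to grab a single list by hand ($LIST_URL here is a placeholder: substitute one of the lines printed by the sed stages above):

curl -L "$LIST_URL" | gunzip | egrep -v '^#' > ~/Library/Application\ Support/Transmission/blocklists/generated.txt.bin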


aww.. they changed it again :/


I've just written a simple Haskell program to sort and merge IP ranges (overlapping and adjacent IP ranges are fused; empty or commented lines are removed):

It uses stdin and stdout with the gz format:
./fuseblkl < inList.p2p.gz > outList.p2p.gz
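
For anyone without a Haskell toolchain, roughly the same fuse step can be sketched with sort and awk. Assumptions: IPv4 p2p lines whose range comes last, e.g. label:1.2.3.4-5.6.7.8; the labels themselves are discarded and replaced with "merged":

gunzip -c < inList.p2p.gz \
| egrep -v '^#|^$' \
| awk -F'[:-]' '
    # dotted quad -> 32-bit integer; the range is always the last two fields
    function ip2n(ip,  a) { split(ip, a, "."); return ((a[1]*256+a[2])*256+a[3])*256+a[4] }
    { print ip2n($(NF-1)), ip2n($NF) }' \
| sort -n -k1,1 -k2,2 \
| awk '
    # 32-bit integer -> dotted quad
    function n2ip(n) { return int(n/16777216) "." int(n/65536)%256 "." int(n/256)%256 "." n%256 }
    NR == 1 { lo = $1; hi = $2; next }
    $1 <= hi + 1 { if ($2 > hi) hi = $2; next }   # overlapping or adjacent: extend the current range
    { print "merged:" n2ip(lo) "-" n2ip(hi); lo = $1; hi = $2 }
    END { if (NR) print "merged:" n2ip(lo) "-" n2ip(hi) }' \
| gzip > outList.p2p.gz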


I wrote a script that downloads 50 different blocklists and performs duplication checks and other repeat-offender queries to compact the IP reputation data from a threat source of over 700,000 IPs to approximately 200,000-300,000 IPs. It's primarily for pfSense firewall blocking but can be adapted for other needs. Any comments appreciated via email.


Here's another variant: it downloads only lists with a rating of >= 4. You could remove egrep -A1 'star_[45]' to get all lists.

curl -sL '' | egrep -A1 'star_[45]' | egrep -o '[a-z]{20}' | sort -u | while read -r blocklist; do curl -sL "${blocklist}&fileformat=p2p&archiveformat=gz" | gunzip -q > ~/Library/Application\ Support/Transmission/blocklists/$blocklist; done

When Transmission starts, it scans this directory for files not ending in ".bin" and tries to parse them.


[Edited 08/17 to no longer use a temp file]

I rewrote this to work with their current site and made the resulting daily-updated list available at

(note that this requires hxwls which should be in the 'html-xml-utils' package or similar in your distro)
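
On Debian or Ubuntu that would be, for example (the package name may differ on other distros):

sudo apt-get install html-xml-utils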


for url in $(curl -s \
| hxwls \
| grep -v png \
| grep 'list=' \
| sed 's|/list.php?list=||g' \
| sed 's|^||g' \
| sed 's|$|\&fileformat=p2p\&archiveformat=gz|g'); do wget --no-verbose "${url}" -O - | gunzip | egrep -v '^#' >> list; done

gzip list
echo "DONE"
ls -lah list.gz


This one is for uTorrent on Mac OS X, if anyone needs it:

DIR="$HOME/Library/Application Support/uTorrent"   # a quoted ~ is not expanded, so use $HOME
LIST_COUNT=$(wget -q -O - | sed -n "s/.*value='\(http:.*list=.*\)'.*/\1/p" | wc -l)   # how many lists the index offers
mv "${DIR}/ipfilter.dat" "${DIR}/ipfilter.$(date +%Y%m%d_%s).dat" 2>/dev/null   # keep a dated backup
echo -n "..."
wget -q -O - | sed -n "s/.*value='\(http:.*list=.*\)'.*/\1/p" | while read LIST_URL; do
    echo -n "*"   # crude progress indicator: one star per list
    URL=$LIST_URL sh -c 'wget -q -O - "$URL" | gunzip -c | egrep -v "^#" | cut -d":" -f2 | egrep -v "^$"' >> "${DIR}/ipfilter.dat" 2>/dev/null
done

@ip2k nice script, you just have one typo:

for url in $(cat bl); do

should be

for url in $(cat bls); do


@enricobacis thanks for pointing that out :) I fixed the script to not use a temp file at all now, and a daily-updated list is at


@ip2k Just found out about this list. Once I add the list you generated to Transmission, will it get updated automatically, as frequently as yours? Meaning I won't have to do anything except add the link to Transmission. Is that right?


ip2k, is your list still maintained?


Hey, looks like a script pasting party!


When I paste it into Transmission I only get 389,000+ IPs.. I thought it was supposed to be something like two billion. Is something wrong with this list? Or what?


@negativeions There are 389,000 ranges, or, as Transmission calls them, rules. 389,000 ranges = two billion individual IPs ;).
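
If you want to check that arithmetic yourself, a rough sketch (it assumes p2p-format lines like label:1.2.3.4-5.6.7.8, and generated.txt is whatever you named the downloaded list):

awk -F'[:-]' '
    function ip2n(ip,  a) { split(ip, a, "."); return ((a[1]*256+a[2])*256+a[3])*256+a[4] }
    !/^#/ && NF > 1 { rules++; total += ip2n($NF) - ip2n($(NF-1)) + 1 }
    END { print rules " rules covering " total " IPs" }' generated.txt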



Do you have any idea why all torrents from (IP:, found from are being blocked by your list? I downloaded the list and opened it in Notepad++; however, I could not find either the URL or the IP address when I searched for them, so I'm not certain why they would be blocked.

Any ideas you might have on how to fix this issue?

@ip2k is updated daily via crontab; yes.


@koesherbacon is in charge of which IPs are blocked. It might be that the IP address falls within a blocked range.


@ip2k I'm getting literally thousands of hits a day on this and haven't looked at it in at least a year. I'm considering just 301 redirecting to yours. Are you OK with that?


@ip2k, so do I open it with a text editor and copy it into the ipfilter.dat file? Looks like Dave's Ipfilter hasn't been updated in almost a month. I miss the normal ipfilter updater, which no longer works.


@freed00m It's not DEAD... it is just returning an empty list due to some URL schema changes on the original website.

To fix it, just run the ip2k script, changing:

| sed 's|/list.php?list=||g' \

with

| sed 's|/list?list=||g' \

and you will get a long blocklist. :)
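
Putting it together, the fixed loop would look something like this ($INDEX_URL and $BASE_URL stand in for the index page and the per-list base URL from the original script):

for url in $(curl -s "$INDEX_URL" \
    | hxwls \
    | grep -v png \
    | grep 'list=' \
    | sed 's|/list?list=||g' \
    | sed "s|^|$BASE_URL|g" \
    | sed 's|$|\&fileformat=p2p\&archiveformat=gz|g'); do
    wget --no-verbose "${url}" -O - | gunzip | egrep -v '^#' >> list
done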


So, I have some other OS that I will not name. I was just downloading the list.gz file, unzipping it, and importing it into my BitTorrent client. I assume there is a better way to do this? Also, downloading the list right now results in an empty file... Please assist!
