ohhdemgirls / yify.md
Created November 3, 2017 20:46 — forked from kuntau/yify.md
YIFY's Quality Encoding

For those who want to keep the YTS quality going (no, IDGAF about people that don't care for YTS quality): get HandBrakeCLI from https://handbrake.fr/downloads... and use the following settings:

user@user:~$ HandBrakeCLI -i /file/input.mp4 -o /file/out.mp4 -E fdk_faac -B 96k -6 stereo -R 44.1 -e x264 -q 27 -x cabac=1:ref=5:analyse=0x133:me=umh:subme=9:chroma-me=1:deadzone-inter=21:deadzone-intra=11:b-adapt=2:rc-lookahead=60:vbv-maxrate=10000:vbv-bufsize=10000:qpmax=69:bframes=5:b-adapt=2:direct=auto:crf-max=51:weightp=2:merange=24:chroma-qp-offset=-1:sync-lookahead=2:psy-rd=1.00,0.15:trellis=2:min-keyint=23:partitions=all

The reason to use the CLI rather than the GTK GUI is that the GTK build lacks support for these advanced encoder settings.

**Don't re-encode already shitty encodes... get a good source!**
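
To apply the same settings to a whole directory, a batch loop along these lines works; this is only a sketch, assuming HandBrakeCLI is on your PATH and the sources are .mkv files (adjust the extension and output names to taste):

#!/bin/bash
# Sketch: re-encode every .mkv in the current directory with the settings above.
# Assumes HandBrakeCLI is installed and on the PATH.
X264_OPTS="cabac=1:ref=5:analyse=0x133:me=umh:subme=9:chroma-me=1:deadzone-inter=21:deadzone-intra=11:b-adapt=2:rc-lookahead=60:vbv-maxrate=10000:vbv-bufsize=10000:qpmax=69:bframes=5:direct=auto:crf-max=51:weightp=2:merange=24:chroma-qp-offset=-1:sync-lookahead=2:psy-rd=1.00,0.15:trellis=2:min-keyint=23:partitions=all"
for f in *.mkv; do
    HandBrakeCLI -i "$f" -o "${f%.mkv}.mp4" \
        -E fdk_faac -B 96k -6 stereo -R 44.1 \
        -e x264 -q 27 -x "$X264_OPTS"
done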

#!/usr/bin/python
# ------------------------------
# Script to parse reddit json
# Usage: cat *.json | ./redditJSON.py
# ------------------------------
import sys
reload(sys)                     # Python 2 only: re-expose setdefaultencoding
sys.setdefaultencoding('utf8')  # force UTF-8 so printing unicode titles/comments doesn't crash
import json
#!/bin/bash
set -e
set -u
yt_list="$1"  # list of URLs to download, one per line
jbs="$2"      # number of jobs
if [ ! -f "$yt_list" ]; then
    echo "File $yt_list is missing"
    exit 1
fi
#!/bin/bash
# eroshare.com/igowild.com downloader
# Usage: ./eroshare.sh https://eroshare.com/jhocqntr
for LINK in "$@"; do
    # download albums & videos
    if [ $(echo "$LINK" | grep "eroshare.com/[A-Za-z0-9]\|igowild.com/[A-Za-z0-9]") ]; then
        URL=$(echo "$LINK" | cut -d'/' -f4)
        TITLE=$(curl -Ls "$LINK" | grep "<title>.*</title>\|twitter:title\|og:title" | cut -d'"' -f4 | cut -d'>' -f2 | cut -d'<' -f1 | uniq | paste -sd ' ' | sed 's|^|-|')
        USER=$(curl -Ls "$LINK" | grep -i "avatar avatar-small" | cut -d'"' -f2 | sed "s|\/u\/||g;s|\ ||g")
ohhdemgirls / rename.sh
Last active January 4, 2016 10:49
Rename html files based on title tag
#!/bin/bash
# Rename the output html file from redditPostArchiver with the reddit thread title.
# https://github.com/sJohnsonStoever/redditPostArchiver
for f in *.html; do
    # extract the text between <title> and </title> (case-insensitive)
    title=$( awk 'BEGIN{IGNORECASE=1;FS="<title>|</title>";RS=EOF} {print $2}' "$f" )
    # strip characters that are awkward in filenames, then rename
    mv -i "$f" "${title//[^a-zA-Z0-9\._\- ]}_$f"
done

Each of these commands will run an ad hoc http static server in your current (or specified) directory, available at http://localhost:8000. Use this power wisely.

Discussion on reddit.

Python 2.x

$ python -m SimpleHTTPServer 8000
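
For completeness, the Python 3 equivalent uses the http.server module (SimpleHTTPServer was merged into it in Python 3):

$ python3 -m http.server 8000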
ohhdemgirls / mediacru.sh
Last active January 2, 2016 13:09
mediacru.sh rehoster
#!/bin/bash
# Usage: ./mediacru.sh $(cat urls.txt)
echo -e "\e[91mMediaCrush Rehoster\e[0m"
for f in "$@"; do
    echo -ne "\e[34mUploading:\e[0m $f"
    url="https://mediacru.sh/api/upload/url"
    out=$(curl -s -F "url=$f" "$url")
    hash=$(echo "$out" | sed -nre 's#.*"hash": "([^"]+)".*#\1#p')
    echo " $hash"  # print the hash returned by the API
done
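
If jq is available, the hash can be pulled out of the JSON response more robustly than with the sed regex; a minimal sketch using the same endpoint and field name as the script above:

# Sketch: same upload call, parsing the "hash" field with jq instead of sed.
# Assumes jq is installed; endpoint and field name are taken from the script above.
hash=$(curl -s -F "url=$f" "https://mediacru.sh/api/upload/url" | jq -r '.hash')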
#!/usr/bin/env ruby
# gem install tumblr_client
# use 'tumblr' command to generate security credentials
# fill in security info from the generated file at ~/.tumblr
# fill in the source directory and destination directory
# fill in blogname
# script will upload a file then move it to the destination directory
require 'rubygems'
require 'tumblr_client'
#!/bin/bash
# Usage: ./imgrush.sh $(cat cats.txt)
echo -e "\e[91mImgrush all the cats!!!\e[0m"
for f in "$@"; do
    echo -ne "\e[34mUploading:\e[0m $f"
    url="https://imgrush.com/api/upload/url"
    out=$(curl -s -F "url=$f" "$url")
    hash=$(echo "$out" | sed -nre 's#.*"hash": "([^"]+)".*#\1#p')
    echo " $hash"  # print the hash returned by the API
done
ohhdemgirls / r2db.py
Last active August 29, 2015 14:15 — forked from nikolak/r2db.py
import sqlite3
import time
import json
import urllib2

def get_submissions():
    url = "http://www.reddit.com/r/all/new/.json"  # listing page we want to fetch
    headers = {'User-Agent': 'fetching new submissions script'}  # identify the script to reddit
    req = urllib2.Request(url, None, headers)  # build the request with the custom User-Agent
    data = urllib2.urlopen(req).read()  # fetch the page and keep the raw JSON body
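
The same request can be reproduced from the shell for a quick sanity check; a sketch assuming curl and jq are installed (reddit rate-limits clients that send a generic User-Agent, which is why the script sets its own):

# Sketch: fetch the same listing with curl, sending a descriptive User-Agent,
# and print the newest submission's title. Assumes curl and jq are available.
curl -s -A 'fetching new submissions script' 'http://www.reddit.com/r/all/new/.json' \
    | jq -r '.data.children[0].data.title'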