Youssif Shaaban Alsager (yshalsager)

GitHub Gists
yshalsager / anidl.sh
Last active September 2, 2018 14:09
anidl.tk Downloader
#!/bin/bash
# anidl.tk Downloader
# by yshalsager
# Arguments: $2 = page URL, $3 = desired quality ($1 is unused in this snippet).
url=$2
quality=$3
fetch_page() {
    # Split tags onto separate lines, keep fragments matching the requested
    # quality, extract the URLs, and drop any link containing "folder".
    curl -s "$url" | tr '>' '\n' | grep "$quality" \
        | grep -Eo "https?://[a-zA-Z0-9./?=_-]*" \
        | sed -n '/folder/!p' >> links.txt
}
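The shell pipeline above splits tags onto separate lines, filters fragments by quality, extracts URLs, and drops any link containing "folder". The same filtering can be sketched in Python (the sample HTML below is hypothetical):

```python
import re


def extract_links(html: str, quality: str) -> list:
    """Mirror the shell pipeline: split on '>', keep fragments mentioning
    the quality, pull out URLs, and drop links containing 'folder'."""
    links = []
    for fragment in html.split(">"):
        if quality not in fragment:
            continue
        for link in re.findall(r"https?://[a-zA-Z0-9./?=_-]*", fragment):
            if "folder" not in link:
                links.append(link)
    return links


# Hypothetical sample page
sample = '<a href="http://ex.tk/file_720p.mkv">720p</a><a href="http://ex.tk/folder_720p">dir</a>'
print(extract_links(sample, "720p"))  # ['http://ex.tk/file_720p.mkv']
```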
yshalsager / extractsub.sh
Created August 14, 2018 20:13
ani-dl SubExtractor
#!/bin/bash
# Extract the subtitle track of every .mkv, named after its episode number.
for file in *.mkv; do
    # Pull the first ' - NN' episode number out of the filename.
    ep=$(echo "$file" | grep -Po ' - [0-9]*' | cut -d ' ' -f3 | head -n1)
    ffmpeg -i "$file" "$ep.srt"
done
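The grep -Po / cut pair above pulls the episode number out of filenames shaped like "Title - 07 [720p].mkv". The extraction step alone, sketched in Python (the example filenames are hypothetical):

```python
import re


def episode_number(filename: str):
    """Return the first ' - NN' episode number in the filename, or None."""
    match = re.search(r" - (\d+)", filename)
    return match.group(1) if match else None


print(episode_number("Some Show - 07 [720p].mkv"))  # 07
```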
yshalsager / TWRPBuilder.md
Last active November 6, 2018 19:07
TWRPBuilder Privacy Policy

TWRPBuilder

Privacy Policy

Effective date: September 03, 2018

TWRPBuilder ("us", "we", or "our") operates the website and the TWRPBuilder mobile application (hereinafter referred to as the "Service").

This page informs you of our policies regarding the collection, use, and disclosure of personal data when you use our Service and the choices you have associated with that data.

We use your data to provide and improve the Service. By using the Service, you agree to the collection and use of information in accordance with this policy. Unless otherwise defined in this Privacy Policy, the terms used in this Privacy Policy have the same meanings as in our Terms and Conditions.

yshalsager / GithubReleaseCount.sh
Created November 6, 2018 19:07
This script counts the total release downloads from all repositories of a given organization
#!/bin/bash
# Count total release downloads across all repositories of an organization.
# Usage: GithubReleaseCount.sh <org>
curl -s "https://api.github.com/orgs/$1/repos" | grep -E '"name"' | cut -d '"' -f4 > repos
while read -r repo; do
    # Turn every download_count into a "+"-joined expression, append a
    # trailing 0, and let bc evaluate the sum for this repo.
    curl -s "https://api.github.com/repos/$1/$repo/releases" \
        | grep -E 'download_count' | cut -d: -f2 | sed 's/,/+/' \
        | xargs echo | xargs -I N echo N 0 | bc >> counts
done < repos
# Sum the per-repo totals (perl -n wraps the body in a while(<>) loop,
# so the stray braces close that loop and add an END block).
perl -nle '$sum += $_ } END { print $sum' counts
rm repos counts 2> /dev/null
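The download_count aggregation above can also be sketched in Python over an already-fetched releases payload (the sample data below is hypothetical; download_count is the field the GitHub releases API returns per asset):

```python
def total_downloads(releases):
    """Sum download_count over every asset of every release."""
    return sum(
        asset["download_count"]
        for release in releases
        for asset in release.get("assets", [])
    )


# Hypothetical payload shaped like the GitHub releases API response
sample = [
    {"assets": [{"download_count": 120}, {"download_count": 30}]},
    {"assets": [{"download_count": 5}]},
]
print(total_downloads(sample))  # 155
```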
<h1>Privacy Policy</h1>
<p>Effective date: January 1, 2019</p>
<p>Xiaomi Firmware Updater ("us", "we", or "our") operates the https://Xiaomifirmwareupdater.github.io website (hereinafter referred to as the "Service").</p>
<p>This page informs you of our policies regarding the collection, use, and disclosure of personal data when you use our Service and the choices you have associated with that data.</p>
<p>We use your data to provide and improve the Service. By using the Service, you agree to the collection and use of information in accordance with this policy. Unless otherwise defined in this Privacy Policy, the terms used in this Privacy Policy have the same meanings as in our Terms and Conditions, accessible from https://Xiaomifirmwareupdater.github.io</p>
<h2>Information Collection And Use</h2>
<p>We collect several different types of information for various purposes to provide and improve our Service to you.</p>
<h3>Types of Data Collected</h3>
<h4>Personal Data</h4>
<p>While using our Service …</p>
Verifying my Blockstack ID is secured with the address 12TKkXSM5XoGRNuqt21c9CvFMfQwRXFD9b https://explorer.blockstack.org/address/12TKkXSM5XoGRNuqt21c9CvFMfQwRXFD9b
### Keybase proof
I hereby claim:
* I am yshalsager on github.
* I am yshalsager (https://keybase.io/yshalsager) on keybase.
* I have a public key ASBzAw86UrjzbEdWDjWwRPQI7yeao0X4oU-S11iVhWhrlAo
To claim this, I am signing this object:
#!/usr/bin/env python3.7
"""
A script that calculates the sum of a GitHub organization's repository stargazers.
"""
from requests import get

ORG = "XiaomiFirmwareUpdater"
START_PAGE = 1
END_PAGE = 2

# Sum stargazers_count across the paginated org repository listing.
stars = 0
for page in range(START_PAGE, END_PAGE + 1):
    repos = get(f"https://api.github.com/orgs/{ORG}/repos?page={page}").json()
    stars += sum(repo["stargazers_count"] for repo in repos)
print(stars)
yshalsager / scraper.py
Created May 15, 2020 22:41
Webscraper that gets Quran Ayah translation from http://corpus.quran.com/translation.jsp
#!/usr/bin/env python3
from requests import get
from bs4 import BeautifulSoup

chapter = input("Enter Sura number\n")
url = f"http://corpus.quran.com/translation.jsp?chapter={chapter}"
# Fetch the first verse's page and read the chapter's total verse count
# from the last option of the verse selector.
page = BeautifulSoup(get(f"{url}&verse=1").content, "html.parser")
verses = int(page.select_one("#verseList > option:last-of-type")["value"])
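The gist is truncated after reading the verse count; building the per-verse translation URLs from it would follow the same query pattern already used in the script (a sketch; the scraping of each page is not shown in the original):

```python
def verse_urls(base_url: str, verses: int):
    """Build one translation URL per verse, 1-indexed as on the site."""
    return [f"{base_url}&verse={verse}" for verse in range(1, verses + 1)]


urls = verse_urls("http://corpus.quran.com/translation.jsp?chapter=1", 3)
print(urls[0])  # http://corpus.quran.com/translation.jsp?chapter=1&verse=1
```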
yshalsager / scraper.py
Created May 15, 2020 22:43
princeofwales speeches scraper
from requests import get
from bs4 import BeautifulSoup

site = "https://www.princeofwales.gov.uk"
url = f"{site}/biographies/hrh-prince-wales/speeches?title=&mrfs=All&date_from=&date_to=&page="
for page in range(0, 77):  # hardcoded page count
    print(page)
    # One anchor per speech listed on the current results page.
    speeches = BeautifulSoup(get(f"{url}{page}").content, "html.parser").select(
        "div.views-row > div:nth-child(1) > h2:nth-child(1) > a:nth-child(1)"
    )
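The selector above yields one anchor per listed speech; turning each anchor's relative href into an absolute URL reuses the site base already defined in the script (a sketch; the sample hrefs below are hypothetical):

```python
from urllib.parse import urljoin

site = "https://www.princeofwales.gov.uk"


def absolute_links(hrefs):
    """Resolve relative speech links against the site base URL."""
    return [urljoin(site, href) for href in hrefs]


print(absolute_links(["/speech/example-speech"]))
# ['https://www.princeofwales.gov.uk/speech/example-speech']
```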