
Ed Summers edsu

edsu / nameless-tweets.csv
Last active Jan 22, 2020
Nameless one's 22 tweets (so far). Obtained using their user id and twarc. twarc timeline 71996998 --format csv --output nameless-tweets.csv
990140933021798403,,Sat Apr 28 08:09:29 +0000 2018,2018-04-28 08:09:29+00:00,,@,original,,,,,464,,,,und,,,0,,,,"<a href="""" rel=""nofollow"">Twitter Web Client</a>",71996998,Sun Sep 06 08:29:54 +0000 2009,False,,12,9406,1,22,,@,21,,,False
989953867935776770,,Fri Apr 27 19:46:09 +0000 2018,2018-04-27 19:46:09+00:00,,@jamie_gaskins You're one of the few I've seen who ha
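The twarc CSV above stores Twitter's classic `created_at` timestamps (e.g. `Sat Apr 28 08:09:29 +0000 2018`) alongside an ISO 8601 copy. A minimal sketch of parsing that format with the standard library, using a hypothetical two-column excerpt (the real file has many more columns):

```python
import csv
import io
from datetime import datetime

# Hypothetical two-column excerpt of nameless-tweets.csv;
# the real twarc CSV has dozens of columns.
sample = """id,created_at
990140933021798403,Sat Apr 28 08:09:29 +0000 2018
989953867935776770,Fri Apr 27 19:46:09 +0000 2018
"""

parsed = []
for row in csv.DictReader(io.StringIO(sample)):
    # Twitter's classic timestamp format: weekday, month, day, time, offset, year
    dt = datetime.strptime(row["created_at"], "%a %b %d %H:%M:%S %z %Y")
    parsed.append((row["id"], dt.isoformat()))

print(parsed)
```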
edsu /
Last active Jan 14, 2020
Try to get replies to a particular set of tweets, recursively.
#!/usr/bin/env python
Twitter's API doesn't allow you to get replies to a particular tweet. Strange
but true. But you can use Twitter's Search API to search for tweets that are
directed at a particular user, and then search through the results to see if
any are replies to a given tweet. You are probably also interested in the
replies to any replies, so the process is recursive. The big caveat
here is that the search API only returns results for the last 7 days. So
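The recursive idea described above can be sketched independently of the Twitter API: search for tweets directed at a user, keep those whose `in_reply_to_status_id` matches the tweet of interest, then recurse into each reply. The `search()` stub below is hypothetical canned data standing in for a real Search API call:

```python
# Stub standing in for a Twitter Search API query like q="to:<screen_name>";
# the ids and users here are invented for illustration.
def search(screen_name):
    return [
        {"id": 2, "in_reply_to_status_id": 1, "user": "alice"},
        {"id": 3, "in_reply_to_status_id": 2, "user": "bob"},
        {"id": 4, "in_reply_to_status_id": 99, "user": "carol"},
    ]

def replies(tweet_id, screen_name):
    # Yield direct replies, then recurse into each reply's own replies.
    for t in search(screen_name):
        if t["in_reply_to_status_id"] == tweet_id:
            yield t
            yield from replies(t["id"], t["user"])

found = [t["id"] for t in replies(1, "someone")]
print(found)
```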
edsu / results.txt
Created Jan 14, 2020
$ waybackprov --prefix --start 2018 --end 2020
172 108
98 49
98 49
98 49
59 53
59 53
49 49
49 49
18 11
15 11
edsu / irandisinfo.csv
Last active Jan 14, 2020
$ waybackprov --prefix --collapse --start 2018 --end 2020 --format csv
timestamp status_code collections url archive_url
20190531191900 200 liveweb,webwidecrawl,web
20190604050154 200 ArchiveIt-Collection-8142,ArchiveIt-Partner-1028,archiveitpartners,archiveitdigitalcollection,web
20190604220739 200 liveweb,webwidecrawl,web
20190606044309 200 ArchiveIt-Collection-8142,ArchiveIt-Partner-1028,archiveitpartners,archiveitdigitalcollection,web
20190608074815 200 ArchiveIt-Collection-8142,ArchiveIt-Partner-1028,archiveitpartners,archiveitdigitalcollection,web
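With `--format csv`, each snapshot row carries its comma-separated collection list, so it is easy to tally snapshots per collection. A minimal sketch, assuming the collections field is a quoted comma-separated value and using an excerpt of the rows above (the truncated url/archive_url columns are omitted):

```python
import csv
import io
from collections import Counter

# Excerpt of the waybackprov CSV above, with url/archive_url omitted
# because those columns are truncated in the listing.
sample = """timestamp,status_code,collections
20190531191900,200,"liveweb,webwidecrawl,web"
20190604050154,200,"ArchiveIt-Collection-8142,ArchiveIt-Partner-1028,archiveitpartners,archiveitdigitalcollection,web"
20190604220739,200,"liveweb,webwidecrawl,web"
"""

counts = Counter()
for row in csv.DictReader(io.StringIO(sample)):
    for collection in row["collections"].split(","):
        counts[collection] += 1

print(counts.most_common(3))
```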
import requests

repos = requests.get('').json()
for repo in sorted(repos, key=lambda r: r['created_at']):
    print(repo['name'], repo['created_at'])
edsu / aoty
#!/usr/bin/env python3
# usage: aoty [year]
# This script collects all the albums of the year for Alf's awesome
# AOTY site and prints out the albums
# that appear on more than one Album of the Year list.
# You'll need beautifulsoup4 and requests to run this.
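The script body isn't shown here, but its core step, finding albums that appear on more than one year-end list, can be sketched with a `Counter`. The critic names and albums below are invented stand-ins for what the real script scrapes with requests and beautifulsoup4:

```python
from collections import Counter

# Hypothetical scraped lists; the real script builds these
# from the AOTY site's pages.
lists = {
    "Critic A": ["Album X", "Album Y"],
    "Critic B": ["Album Y", "Album Z"],
    "Critic C": ["Album X", "Album Y"],
}

counts = Counter(album for albums in lists.values() for album in albums)
# Keep only albums that appear on more than one list.
repeats = [(album, n) for album, n in counts.most_common() if n > 1]
print(repeats)
```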
import json

def get_hashtags(filename):
    fh = open(filename)
    tweets = json.load(fh)
    hashtags = set()
    for tweet in tweets:
        if tweet['date'].startswith('2019'):
            for hashtag in tweet['hashtags']:
                hashtags.add(hashtag)
    return hashtags
edsu / Makefile
pandoc -F pwcite -F pandoc-citeproc -o article.pdf
pandoc --css style.css --standalone -F pwcite -F pandoc-citeproc -o article.html
import html
print(html.unescape("To be or not to be&#44; or not to be&#44; that is the question&#58;"))
edsu / diffbot.json
{
  "request": {
    "pageUrl": "",
    "api": "analyze",
    "version": 3
  },
  "humanLanguage": "en",
  "objects": [
    {
      "date": "Tue, 15 Oct 2019 00:00:00 GMT",
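Reading fields out of a Diffbot analyze response like the fragment above is plain JSON handling. A minimal sketch, using a small stand-in document reconstructed from the fields shown (the fragment itself is truncated):

```python
import json

# Tiny stand-in for the diffbot.json response above; only the
# fields visible in the fragment are included.
doc = json.loads("""
{
  "request": {"pageUrl": "", "api": "analyze", "version": 3},
  "humanLanguage": "en",
  "objects": [
    {"date": "Tue, 15 Oct 2019 00:00:00 GMT"}
  ]
}
""")

print(doc["request"]["api"], doc["humanLanguage"])
for obj in doc["objects"]:
    # .get() avoids a KeyError when an object lacks a date
    print(obj.get("date"))
```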