I hereby claim:
- I am duner on github.
- I am duner (https://keybase.io/duner) on keybase.
- I have a public key whose fingerprint is B1DA 17FC 5CC5 9826 94C1 4F89 D578 B113 4C81 BAFA
To claim this, I am signing this object:
(project)duner:project duner$ psql -d projectdb -c 'CREATE EXTENSION postgis;'
ERROR: could not open extension control file "/usr/local/Cellar/postgresql/9.3.5/share/postgresql/extension/postgis.control": No such file or directory
import numpy
import sys
import matplotlib.pyplot as plt
import matplotlib.tri as tri
import matplotlib.cm as cm

def create_chart(data):
    """
    data should be a list of lists in the form [(x, y, z), a]
    """
    # Assumed completion: only the (x, y, z) coordinates are needed here;
    # the trailing a value in each entry is ignored.
    x, y, z = (numpy.array(v) for v in zip(*(point for point, _ in data)))
    triangulation = tri.Triangulation(x, y)
    plt.tricontourf(triangulation, z, cmap=cm.viridis)
    plt.show()
oauthlib==1.1.2
requests==2.10.0
requests-oauthlib==0.6.2
six==1.10.0
tweepy==3.5.0
Today at LunchConf we watched this talk by Bret Victor on "Media for Thinking the Unthinkable". Here are some additional links that came up in our discussion after watching the talk.
If you download your personal Twitter archive, you don't quite get the data as JSON, but as a series of .js files, one for each month (these are meant to replicate the Twitter API responses for the front-end part of the downloadable archive). But if you want to be able to use the data in those files, which is far richer than the CSV data, for some analysis or app, just run this script.

Run sh ./twitter-archive-to-json.sh in the same directory as the /tweets folder that comes with the archive download, and you'll get two files:

tweets.json — a JSON list of the Tweet objects
tweets_dict.json — a JSON dictionary where each Tweet's key is its id_str

You'll also get a /json-tweets directory which has the individual JSON files for each month of tweets.
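The conversion those monthly files need can be sketched in Python as well. This is an illustrative sketch, not the actual shell script: it assumes each file in /tweets is a JavaScript assignment (e.g. a first line like Grailbird.data.tweets_2014_01 = [...]) wrapping a JSON array, and the helper name monthly_js_to_json is hypothetical; only the output layout (tweets.json, tweets_dict.json, /json-tweets) comes from the description above.

```python
import glob
import json
import os

def monthly_js_to_json(tweets_dir="tweets", out_dir="json-tweets"):
    """Strip the JS assignment wrapper from each monthly archive file,
    leaving plain JSON, and build the two combined files described above."""
    os.makedirs(out_dir, exist_ok=True)
    all_tweets = []
    for path in sorted(glob.glob(os.path.join(tweets_dir, "*.js"))):
        with open(path) as f:
            text = f.read()
        # Assumed format: "Grailbird.data.tweets_YYYY_MM = [ ... ]".
        # Dropping everything up to the first "=" recovers the JSON array.
        tweets = json.loads(text.split("=", 1)[1])
        all_tweets.extend(tweets)
        # One JSON file per month, mirroring the /json-tweets directory.
        name = os.path.splitext(os.path.basename(path))[0] + ".json"
        with open(os.path.join(out_dir, name), "w") as f:
            json.dump(tweets, f)
    # tweets.json: a flat list of the Tweet objects.
    with open("tweets.json", "w") as f:
        json.dump(all_tweets, f)
    # tweets_dict.json: each Tweet keyed by its id_str.
    with open("tweets_dict.json", "w") as f:
        json.dump({t["id_str"]: t for t in all_tweets}, f)
```

Run it from the directory containing /tweets, e.g. monthly_js_to_json().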