#!/bin/sh
#
# update_storyboard_strings.sh - automatically extract translatable strings from storyboards and update strings files
# Based on http://forums.macrumors.com/showpost.php?p=16060008&postcount=4 by mikezang
storyboardExt=".storyboard"
stringsExt=".strings"
newStringsExt=".strings.new"
oldStringsExt=".strings.old"
localeDirExt=".lproj"
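
The excerpt above only shows the filename-extension variables; the actual string extraction typically goes through Apple's ibtool. A minimal Python sketch of that step (not part of the gist), with hypothetical paths and assuming Xcode's command-line tools are installed:

#!/usr/bin/env python
# Illustrative sketch, not from the gist: regenerate a .strings file from a
# storyboard via ibtool. Paths here are hypothetical examples.
import subprocess

storyboard = 'Base.lproj/Main.storyboard'   # hypothetical input
new_strings = 'en.lproj/Main.strings.new'   # hypothetical output
subprocess.check_call(['ibtool', '--generate-strings-file', new_strings, storyboard])
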
flibbertigibbet / FeedFetcherMA.py
Last active August 29, 2015 14:10
Fetch updated GTFS feeds for Massachusetts
#!/usr/bin/python
import requests, os, pickle, datetime, zipfile, subprocess, csv
class FeedFetcher():
    def __init__(self, ddir=os.getcwd()):
        self.ddir = ddir
        self.tc = {}  # time checks for GTFS fetches
        self.new_use = []  # list of new feeds successfully downloaded and validated
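
The excerpt stops at the constructor, but the tc dict of "time checks" suggests the fetcher skips feeds whose Last-Modified header has not changed since the last run. A rough sketch of that check (not from the gist; the MBTA feed URL and the pickle filename are assumptions):

#!/usr/bin/env python
# Illustrative sketch: decide whether a GTFS feed needs to be re-downloaded by
# comparing its Last-Modified header against a pickled value from the last run.
import os, pickle, requests

FEED_URL = 'http://www.mbta.com/uploadedfiles/MBTA_GTFS.zip'  # assumed URL
TIMECHECK_FILE = 'time_checks.p'  # hypothetical pickle of {url: last_modified}

def needs_fetch(url):
    tc = {}
    if os.path.exists(TIMECHECK_FILE):
        with open(TIMECHECK_FILE, 'rb') as f:
            tc = pickle.load(f)
    # a HEAD request is enough to read the Last-Modified header
    last_mod = requests.head(url).headers.get('last-modified')
    return last_mod is None or tc.get(url) != last_mod
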
#!/usr/bin/env python
""" Parse JSON real-time bus data from SEPTA:
http://www3.septa.org/hackathon/
"""
from datetime import datetime, timedelta
import json
import requests
import sys
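
Only the imports survive in this excerpt. A minimal sketch of pulling one route's vehicle positions from the SEPTA hackathon API (the TransitView endpoint and the response field names below are assumptions, not taken from the gist):

#!/usr/bin/env python
# Illustrative sketch: fetch real-time positions for one bus route and work out
# when each position report was made from the 'Offset' minutes field (assumed).
from datetime import datetime, timedelta
import requests

route = '23'  # hypothetical route number
resp = requests.get('http://www3.septa.org/hackathon/TransitView/' + route)
resp.raise_for_status()

for bus in resp.json().get('bus', []):
    seen = datetime.now() - timedelta(minutes=int(bus.get('Offset', 0)))
    print(bus.get('label'), bus.get('lat'), bus.get('lng'), seen)
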
flibbertigibbet / google_viz.py
Last active December 17, 2015 16:19
Helper functions to use the Google Visualization API in Django. The module stores queries to use and creates a JSON representation of a Google Visualization DataTable for each by inspecting the query's model. Looks up any choices defined for the columns.
import gviz_api
from django.db.models.query import ValuesQuerySet
# Mapping of Django model field types to Google Visualization column types.
# Anything not in this map is treated as 'string'. Valid Google viz data types
# are: 'string', 'number', 'boolean', 'date', 'datetime', and 'timeofday'.
fieldmap = {'DateField': 'date', 'DateTimeField': 'datetime', 'BooleanField': 'boolean',
            'IntegerField': 'number', 'DecimalField': 'number', 'BigIntegerField': 'number',
            'FloatField': 'number', 'TimeField': 'timeofday', 'NullBooleanField': 'boolean'}
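
A hedged sketch of how the fieldmap above might feed gviz_api; the helper below is illustrative, not the module's actual function, and assumes the caller passes a Django model class:

def model_to_gviz_json(model):
    # build a DataTable description by inspecting the model's fields; any
    # field type missing from fieldmap falls back to 'string'
    description = [(f.name, fieldmap.get(f.get_internal_type(), 'string'))
                   for f in model._meta.fields]
    table = gviz_api.DataTable(description)
    # values_list() with no arguments yields tuples in declared field order,
    # matching the description built above
    table.LoadData(model.objects.values_list())
    return table.ToJSon()
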
flibbertigibbet / ned_fetcher.py
Last active January 1, 2016 04:29
Downloads GridFloat-format National Elevation Dataset (NED) data from USGS, converts it to GeoTIFF, then merges the tiles into a single tif. Expects a file 'ned_links.html' in the same directory containing the HTML contents of the email from USGS with the download links for the GridFloat zip files. Depends on BeautifulSoup and GDAL; expects to be on a unix-y system.
#!/usr/bin/env python
from bs4 import BeautifulSoup
import requests, os, subprocess
from time import sleep
download_dir = os.getcwd()
soup = BeautifulSoup(open('ned_links.html','rb'))
of = open('just_ned_links.txt', 'wb')
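
The convert-and-merge step the description mentions is not visible in this excerpt. A rough sketch of it using GDAL's command-line tools via subprocess (filenames are hypothetical and it assumes each downloaded zip has already been extracted to a .flt GridFloat raster):

#!/usr/bin/env python
# Illustrative sketch: convert GridFloat tiles to GeoTIFF with gdal_translate,
# then merge the results into one GeoTIFF with gdal_merge.py.
import glob, os, subprocess

tifs = []
for flt in glob.glob('*.flt'):
    tif = os.path.splitext(flt)[0] + '.tif'
    subprocess.check_call(['gdal_translate', '-of', 'GTiff', flt, tif])
    tifs.append(tif)

subprocess.check_call(['gdal_merge.py', '-o', 'ned_merged.tif'] + tifs)
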
flibbertigibbet / TorqueTalk.md
Last active April 9, 2016 00:46
How to Use Torque Outside of a Full CartoDB Setup

What is Torque?

Torque is an open-source JavaScript library from CartoDB for time series map visualization.

How are Torque Visualizations Created?

  • The simplest way to create a Torque time-lapse animation is to upload the data to a CartoDB account,

Chrome Developer Tools at I/O 2016

About the I/O presentation and demo app

The Chrome DevTools team gave this presentation at Google I/O 2016.

The code for the exact demo app used in their presentation apparently isn't public, but was based on the progressive web app CodeLabs demo, the code for which is here, in the 'final' directory.

Progressive web apps have a responsive design and define a manifest.json, which gives URLs for launcher icons and the start URL for the app. With the manifest.json in place, users can add the web app to their device home screen and launch it directly. Another common feature of progressive web apps is their use of service workers to cache resources, which allows them to load quickly, cut down on network use, and potentially work offline.

flibbertigibbet / index.html
Created December 10, 2016 15:41 — forked from DaisyDream/index.html
Portfolio
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.5.0/css/font-awesome.min.css" />
<body class="container-fluid">
<div class="topnav" id="topnav">
<ul>
<li><a href="#About">About</a></li>
<li><a href="#Portfolio">Portfolio</a></li>
<li><a href="#Contact">Contact</a></li>
</ul>
</div>
flibbertigibbet / FeedFetcher.py
Last active June 14, 2017 02:25
Checks for new GTFS feeds, then downloads and validates them. Fetches feeds for SEPTA, NYC MTA, NJ TRANSIT, CTTRANSIT, Delaware First State, NJ/NY PATH, and PATCO. Dependencies: Python requests, BeautifulSoup, git, and Google's transit feed validator. NJ TRANSIT developer login credentials required to download from that agency. Cannot check for …
#!/usr/bin/python
import requests, os, pickle, datetime, zipfile, subprocess, csv
from bs4 import BeautifulSoup
class FeedFetcher():
    def __init__(self, ddir=os.getcwd(), get_nj=True, nj_username='', nj_pass=''):
        self.ddir = ddir
        self.get_nj = get_nj  # whether to fetch from NJ TRANSIT or not
        self.tc = {}  # time checks for GTFS fetches
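
The validation step named in the description is not shown in this excerpt. A minimal sketch of shelling out to Google's transit feed validator (feedvalidator.py from the transitfeed package); the -o output flag and the zero-exit-status convention are assumptions about that tool:

#!/usr/bin/env python
# Illustrative sketch: run feedvalidator.py on a downloaded GTFS zip and treat
# a zero exit status as a valid feed. Paths are hypothetical.
import subprocess

def validate_feed(feed_path, report_path='validation-results.html'):
    status = subprocess.call(['feedvalidator.py', '-o', report_path, feed_path])
    return status == 0
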
flibbertigibbet / census_fetcher.py
Created December 23, 2013 06:20
Download TIGER/Line data from the Census FTP server based on state FIPS code. Here, pulls EDGE data for four states.
#!/usr/bin/env python
from os import walk
from ftplib import FTP
from time import sleep
# 10 - DE
# 34 - NJ
# 36 - NY
# 42 - PA
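
A hedged sketch of the per-state download loop implied by the FIPS comments above (the server path, the 2013 vintage, and the filename pattern are assumptions; EDGES files are published per county, so the two-digit state code is matched as a prefix of the county FIPS in each filename):

#!/usr/bin/env python
# Illustrative sketch: download TIGER/Line EDGES zips for one state over
# anonymous FTP from the Census server.
from ftplib import FTP

state_fips = '42'  # PA
ftp = FTP('ftp2.census.gov')
ftp.login()  # anonymous
ftp.cwd('/geo/tiger/TIGER2013/EDGES')  # assumed path for the 2013 files

for name in ftp.nlst():
    # filenames look like tl_2013_42101_edges.zip (assumed pattern)
    if name.startswith('tl_2013_' + state_fips):
        with open(name, 'wb') as out:
            ftp.retrbinary('RETR ' + name, out.write)

ftp.quit()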