
James Baker drjwbaker

@drjwbaker
drjwbaker / tmux.md
Created Dec 8, 2015 — forked from andreyvit/tmux.md
tmux cheatsheet

tmux cheat sheet

(C-x means ctrl+x, M-x means alt+x)

Prefix key

The default prefix is C-b. If you (or your muscle memory) prefer C-a, you need to add this to ~/.tmux.conf:

# remap prefix to Control + a
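The gist preview cuts off after the comment. The lines that conventionally follow it in ~/.tmux.conf (an assumption about this gist's truncated content, but the standard remap) are:

```
# remap prefix to Control + a (assumed continuation of the truncated gist)
unbind C-b
set -g prefix C-a
bind C-a send-prefix
```

After editing the file, reload it with `tmux source-file ~/.tmux.conf` or restart tmux.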
drjwbaker / OpenRefine Nominatim Geocode
Last active Nov 26, 2015 — forked from pdbartsch/OpenRefine Nominatim Geocode
open-refine geocoding using OpenStreetMap's Nominatim service. (previously called google-refine)
Step One - Starting with a single address field
Edit Column > Add Column by Fetching URLs
Nominatim has a limit of 1 geocode per second, so make sure to set the throttle delay to greater than 1000 milliseconds
Fetch URL based on column (quotes needed):
"http://nominatim.openstreetmap.org/search?format=json&email=[YOUR_EMAIL_HERE].com&app=google-refine&q=" + escape(value, 'url')
------------------------------------------------------------------------------------
Step Two - Extract lat/lon from newly created JSON blobs
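The step-two preview ends here. Nominatim returns a JSON array of candidate matches, each carrying string-valued "lat" and "lon" fields, so extracting coordinates means reading the first element of that array. A minimal Python sketch (the sample blob below is hypothetical and trimmed to the fields involved):

```python
import json

def extract_lat_lon(blob):
    """Pull lat/lon out of a Nominatim JSON response (a list of matches)."""
    matches = json.loads(blob)
    if not matches:
        return None  # no geocoding result for this address
    # Nominatim orders matches by relevance; take the first
    return matches[0]["lat"], matches[0]["lon"]

# Hypothetical response, trimmed to the fields we need
sample = '[{"lat": "51.5194", "lon": "-0.1270", "display_name": "British Museum"}]'
print(extract_lat_lon(sample))
```

Inside OpenRefine itself the equivalent extraction would be done with a GREL expression on the fetched column rather than external Python.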
drjwbaker / splitPerYear
Last active Aug 29, 2015 — forked from melvinwevers/splitPerYear
SplitperYear
import csv
import collections
import pprint

src_path = ""   # here you need to input the directory that contains the file
main_file = ""  # here you need to input the name of the file

with open(main_file, newline='') as fp:  # the original's "rb" is Python 2 style
    root = csv.reader(fp, delimiter='\t')
    result = collections.defaultdict(list)
    for row in root:
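The preview is truncated mid-loop. Given the gist's name and the defaultdict of lists, the loop presumably buckets rows by year; a self-contained sketch of that grouping step, assuming the year sits in the first tab-separated column (an assumption, since the real loop body is cut off):

```python
import collections
import csv
import io

# In-memory stand-in for the tab-separated source file
tsv = "1914\talpha\n1915\tbeta\n1914\tgamma\n"

result = collections.defaultdict(list)
for row in csv.reader(io.StringIO(tsv), delimiter='\t'):
    result[row[0]].append(row)  # key each row by its year column
```

Each key of `result` then maps a year to every row from that year, ready to be written out as a per-year file.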
baker.py
#!/usr/bin/env python
import re

output = []

# use a "with" block to automatically close I/O streams
with open('mylist.txt') as word_list:
    # read the contents of mylist.txt into the words list using list comprehension
    words = [word.strip().lower() for word in word_list]
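The preview stops after the word list is built, but the `re` import and the `output` list suggest the words are then matched against some text. A hedged sketch of that kind of step (the sample `words` and `text` below are hypothetical, standing in for mylist.txt's contents):

```python
import re

# Hypothetical stand-ins for mylist.txt's contents and a text to scan
words = ["baker", "james"]
text = "Dr James Baker writes about digital history."

# Collect every listed word that appears as a whole word in the text
output = [w for w in words
          if re.search(r'\b' + re.escape(w) + r'\b', text, re.IGNORECASE)]
```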
gist:713a8bfc5afb91017503
PREFIX crm: <http://erlangen-crm.org/current/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX thes: <http://collection.britishmuseum.org/id/thesauri/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX bmo: <http://collection.britishmuseum.org/id/ontology/>
PREFIX thesIdentifier: <http://collection.britishmuseum.org/id/>
SELECT DISTINCT ?id
  (GROUP_CONCAT(?title; SEPARATOR = "|") AS ?titles)
  (GROUP_CONCAT(?name; SEPARATOR = "|") AS ?names)
  (GROUP_CONCAT(?desc; SEPARATOR = "|") AS ?descs)
  (GROUP_CONCAT(?date; SEPARATOR = "|") AS ?dates)
{
?object crm:P70i_is_documented_in <http://collection.britishmuseum.org/id/bibliography/294> .
OPTIONAL {
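The query relies on GROUP_CONCAT to fold an object's multiple titles, names, descriptions, and dates into single pipe-delimited cells, one row per ?id. The same folding expressed in plain Python (the rows below are illustrative, not data from the British Museum endpoint):

```python
# Illustrative (object, title) pairs, as a SELECT without grouping would return them
rows = [
    ("obj1", "Title A"),
    ("obj1", "Title B"),
    ("obj2", "Title C"),
]

# Group values per object, then join each group with "|",
# mirroring GROUP_CONCAT(?title; SEPARATOR = "|")
grouped = {}
for obj, title in rows:
    grouped.setdefault(obj, []).append(title)
titles = {obj: "|".join(vals) for obj, vals in grouped.items()}
```

Without the GROUP_CONCAT (and a matching GROUP BY), an object with two titles would instead appear as two result rows.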
add_numbers.py
import csv
import json

INPUTFILE = "History_Journal_Articles_KW.csv"
OUTPUTFILE = INPUTFILE[:-4] + "_numbered.csv"

in_file = open(INPUTFILE, "r")    # "r" == open file for reading
out_file = open(OUTPUTFILE, "w")  # "w" for writing
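The preview ends once the files are open. Judging by the script's name, the remaining work is copying rows across while prepending a running number; a sketch of that step using in-memory stand-ins for the two files (the column names are hypothetical, since the real CSV's header is not shown):

```python
import csv
import io

# In-memory stand-ins for in_file and out_file
src = io.StringIO("Title,Keywords\nA,history\nB,archives\n")
dst = io.StringIO()

reader = csv.reader(src)
writer = csv.writer(dst)

# Copy the header with a new leading column, then number the data rows
header = next(reader)
writer.writerow(["Number"] + header)
for n, row in enumerate(reader, start=1):
    writer.writerow([n] + row)
```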