Steve Baskauf baskaufs

Block or report user

Report or block baskaufs

Hide content and notifications from this user.

Learn more about blocking users

Contact Support about this user’s behavior.

Learn more about reporting abuse

Report abuse
View GitHub Profile
baskaufs / file.txt
Created Nov 20, 2019
A file for class
This is my file
baskaufs / vandy_schools.json
Created Oct 29, 2019
Schools and colleges of Vanderbilt University and their Wikidata IDs
"label":"Blair School of Music",
"label":"College of Arts & Science",
baskaufs / gis.txt
Creating & editing a new vector layer
Code snippets for following along
from qgis.PyQt.QtCore import QVariant
from qgis.core import QgsVectorLayer, QgsField

# create an empty in-memory point layer and give it a string "name" field
vl = QgsVectorLayer("Point", "temp", "memory")
pr = vl.dataProvider()
pr.addAttributes([QgsField("name", QVariant.String)])
vl.updateFields()
baskaufs /
Created Sep 9, 2019
Python script to use Wikidata Query Service to find Vanderbilt employees who have created works
import requests

def getWikidata(query):
    endpointUrl = ''
    # The endpoint defaults to returning XML, so the Accept: header is required
    r = requests.get(endpointUrl, params={'query': query}, headers={'Accept': 'application/sparql-results+json'})
    data = r.json()
    statements = data['results']['bindings']
    return statements
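getWikidata() returns the "bindings" list of a SPARQL JSON response: one dict per result row, with each query variable bound to a dict whose 'value' key holds the actual value. A sketch of unpacking such a list, using a hypothetical response in place of a live query (the variable names 'person' and 'name' are illustrative, not from the script):

```python
# Hypothetical response, shaped like SPARQL 1.1 JSON results
sample = {
    'results': {
        'bindings': [
            {'person': {'type': 'uri', 'value': 'http://www.wikidata.org/entity/Q1'},
             'name': {'type': 'literal', 'value': 'Alice'}},
            {'person': {'type': 'uri', 'value': 'http://www.wikidata.org/entity/Q2'},
             'name': {'type': 'literal', 'value': 'Bob'}},
        ]
    }
}

statements = sample['results']['bindings']
# each binding maps a variable name to a dict whose 'value' key holds the result
names = [row['name']['value'] for row in statements]
print(names)  # ['Alice', 'Bob']
```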
baskaufs / query.url
Created Sep 8, 2019
Query Microsoft Academic Graph for papers by Gary Sulikowski
baskaufs /
Created Feb 19, 2019
convert tape codes into database codes
import csv

monthDict = {'jan': '01', 'feb': '02', 'mar': '03', 'apr': '04', 'may': '05', 'jun': '06',
             'jul': '07', 'aug': '08', 'sep': '09', 'oct': '10', 'nov': '11', 'dec': '12'}

def convertTapeCode(tape):
    # has problems when the day doesn't have two digits
    year = tape[7:11]
    monthAbbrev = tape[0:3]
    monthNumber = monthDict[monthAbbrev.lower()]
    day = tape[4:6]
    network = tape[12:15]
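The fixed-position slices above break when the day has only one digit, as the comment notes. A sketch of a split-based variant that tolerates that, assuming tape codes look like "nov 20 2019 cnn" (month abbreviation, day, year, network) — the yyyy-mm-dd_network output is an assumed database-code format, not one stated in the gist:

```python
month_dict = {'jan': '01', 'feb': '02', 'mar': '03', 'apr': '04',
              'may': '05', 'jun': '06', 'jul': '07', 'aug': '08',
              'sep': '09', 'oct': '10', 'nov': '11', 'dec': '12'}

def convert_tape_code(tape):
    # split on whitespace so one- and two-digit days both parse
    month_abbrev, day, year, network = tape.split()
    month_number = month_dict[month_abbrev.lower()]
    # zero-pad the day and assemble an assumed yyyy-mm-dd_network code
    return year + '-' + month_number + '-' + day.zfill(2) + '_' + network

print(convert_tape_code('nov 20 2019 cnn'))  # 2019-11-20_cnn
print(convert_tape_code('feb 5 2019 npr'))   # 2019-02-05_npr
```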
baskaufs /
Last active Jan 21, 2019
Python script to scrape
import requests # best library to manage HTTP transactions
import csv # library to read/write/parse CSV files
from bs4 import BeautifulSoup # web-scraping library
acceptMime = 'text/html'
searchTerm = 'pollution'
baseUri = ''+searchTerm
r = requests.get(baseUri, headers={'Accept' : acceptMime})
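Once the page is fetched, BeautifulSoup turns the HTML into a searchable tree. A minimal sketch using a small inline document in place of the live response — the tag names and class attribute here are illustrative, not the scraped site's actual markup:

```python
from bs4 import BeautifulSoup  # web-scraping library

# stand-in for r.text; the "result" class is a made-up example
html = '''<html><body>
<a class="result" href="/doc/1">Air pollution</a>
<a class="result" href="/doc/2">Water pollution</a>
</body></html>'''

soup = BeautifulSoup(html, 'html.parser')
# find every link flagged as a search result and pull out its text and href
for link in soup.find_all('a', class_='result'):
    print(link.get_text(), link['href'])
```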
baskaufs / gist:78ba7892c8e4a8525be156944756fa22
PREFIX dcterms: <>
PREFIX skos: <>
PREFIX lawd: <>
PREFIX geo: <>

{SELECT DISTINCT (COUNT(?place) AS ?count1)
  WHERE {
    GRAPH <http://pelagios> {
      ?place a lawd:Place.
baskaufs / 423025-brief.ttl
<> a <>;
dcterms:title "Roma";
dcterms:description "The capital of the Roman Republic and Empire.";
skos:altLabel "Roma"@la,
skos:inScheme <>;
dcterms:bibliographicCitation "",
"Alföldi 1962 ",
"Carandini and Carafa 2017 ",
baskaufs / alb-brief.ttl
<> a skos:Concept ;
skos:prefLabel "Albanisch"@de,
"albanais"@fr ;
skos:altLabel "Albanian",
"albanais" ;
skos:inScheme <> ;
skos:related <> ;
skos:exactMatch <>,