View cover.php
<?php
// this proxies ByWater Solutions' "COCE" cover image service
// which does not work over HTTPS, so we list our libraries.cca.edu
// server as the COCE server & this script intercepts requests,
// relaying responses from ByWater's COCE server
// we're sending JS
header('Content-Type: application/javascript; charset=utf-8');
// requests look like
View remove-leading-zeroes.fish
#!/usr/bin/env fish
# remove leading zeroes from JPG file names
# e.g. page001.jpg => page1.jpg
set start (pwd)
# glob on */ so we only loop over directories, not loose files
for dir in */
    echo "About to rename files in $dir"
    # optional pause, makes me less afraid when I step through one folder at a time
    read
View sum-majors.py
#!/usr/bin/env python
# usage:
# sum-majors.py "LI - Library students per term.csv" > "YEAR majors total.csv"
import csv
import fileinput
import sys
majors = csv.DictReader(fileinput.input(mode='rb'))
# mapping of degree codes to majors will change over time
# as will the "totals" dict below listing our majors
View randpw.js
#!/usr/bin/env node
// usage:
// > randpw
// IKS1L2H1AMOx
// > randpw --length 22
// IKS1L2H1AMOxBs4d8qxDXY
// > randpw | pbcopy # pipe to Mac clipboard
var Chance = require('chance')
var chance = new Chance()
var args = require('minimist')(process.argv.slice(2))
var pool = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890'
View wp-edits.js
// see also: https://gist.github.com/phette23/5575987
// used to count edits at California College of the Arts
// during Art+Feminism edit-a-thon on March 7, 2015
var uns = [
"Phette23",
"Circa73",
"Flyingpanther",
"Tericlaude",
"Berylbev",
"Cd heaven",
View unt-json-to-equella-taxonomy.js
// take http://digital2.library.unt.edu/vocabularies/agent-qualifiers/json/
// and insert into an EQUELLA taxonomy, preserving URL & definition info
// EQUELLA taxo format is CSV-like:
// term1/term/term2,key,value,key2,value2…
// can then upload with their included Python scripts or write your own
var fs = require('fs')
var data = JSON.parse(fs.readFileSync('agent-qualifiers.json'))
var terms = data.terms
var getDescription = function(term) {
View a-unix-use-case.mdown

Almost immediately after declaring a hiatus seems like a great time for a blog post.

Inspired by nina de jesus and Ruth Tillman's libtech level up project, here's something on the value of command-line text processing. Common UNIX tools that have been around since the 1970s are great for the sort of data wrangling many librarians find themselves doing, whether their responsibilities lie with systems, the web, metadata, or other areas. But the command prompt has a learning curve, and if you already accomplish these tasks with text editor tools, it can be tough to see why the investment is worth it. Here's one case I've found.

Scenario: our digital repository needs to maintain several vocabularies of the faculty who teach in different departments. That information lives, of course, in a siloed vendor product with no viable APIs. All I'm able to do is export CSVs that look like this:

"Namerer, Name","username"

"Othern

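For rows shaped like the sample above, the kind of one-liner the post is building toward might look like this. It is a minimal sketch, not the post's actual solution: the sample rows and the assumption that the username is always the last double-quoted field on the line are mine.

```shell
# sample rows in the shape of the vendor export (names may contain commas)
printf '%s\n' '"Namerer, Name","username"' '"Othern, Ann","ann2"' |
    # grab the last double-quoted field (the username); commas inside
    # the name field don't matter because we anchor on the line's end
    sed 's/.*,"\([^"]*\)"$/\1/' |
    sort -u
```

Anchoring the pattern at end-of-line sidesteps the classic pitfall of splitting quoted CSV on commas with `cut`, which would break on "Lastname, Firstname" values.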
View wrapper.ftl
<#-- NOTE this style should be removed for home page portlet
also should really use an ID rather than hide all portlet headings -->
<style>
/* hide header */
.portlet_freemarker_content .box_title_wrapper h3 {
display: none !important;
}
</style>
<#-- these role IDs will need to be researched & changed -->
<#if user.hasRole('490b1b93-10cd-b8fa-3291-93c357efe57b')>
View tif2jpg.fish
for f in *.tif
convert $f (basename -s .tif $f).jpg
end
View csv2patron-marc.py
#!/usr/bin/env python
# note final, more robust version at https://github.com/cca/millennium_patron_import
import sys
import csv # https://docs.python.org/2/library/csv.html
# PCODE mappings, etc.
from mapping import *
import datetime
import re