

Dave Raffensperger draffensperger

draffensperger / gist:54ff580bef6b4f8bd13697944121e490
// Here's how the hints work:
// First 20 characters are used as single-character hints, and then it
// switches to 2-character hints using the next characters. I wanted to make
// sure that the two character hints were easy to push.
// 000000000011111111112
// 012345678901234567890
Hints.characters = 'dkls;aie,cow.xpq/zvmrufj';
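The scheme the comment describes can be sketched as follows. This is a simplification, not Vimium's actual hint-generation algorithm: the first 20 characters become single-character hints, and further hints are two-character combinations of the remaining four.

```javascript
// Sketch of the hint scheme described above (illustrative only):
// 20 single-character hints, then pairs built from the leftover characters.
const characters = 'dkls;aie,cow.xpq/zvmrufj';

function hintLabels(count) {
  const singles = characters.slice(0, 20).split('');
  const pairChars = characters.slice(20).split(''); // 'r', 'u', 'f', 'j'
  const labels = [...singles];
  for (const a of pairChars) {
    for (const b of pairChars) {
      labels.push(a + b); // 16 two-character hints
    }
  }
  return labels.slice(0, count);
}
```

With this 24-character set the scheme yields at most 36 hints (20 singles plus 16 pairs), which is why the two-character tail is chosen from keys that are easy to press together.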
draffensperger / index.js
Last active Jan 1, 2020
Record Event to Google Sheets via Cloud Function
// Records an event with a given timestamp to a Google Sheet
const crypto = require("crypto");
const { google } = require("googleapis");

exports.recordEvent = async (req, res) => {
  if (!crypto.timingSafeEqual(Buffer.from(req.body.key),
                              Buffer.from(process.env.KEY))) {
draffensperger /
Last active Dec 7, 2018
Tsickle intersection type unknown this

We have some code that monkey-patches a library function (maybe a bad design, but there are likely better examples of a similar pattern). The structure of the code looks roughly like this:

// This is code in a library
class Printer {
  value = 'b';

  print() {
    console.log(`value is ${this.value}`);
  }
}
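The monkey-patch half of the pattern can be sketched as below. The class is restated so the example runs on its own, and the extra behavior is illustrative: the application saves the original method and replaces it with a wrapper that delegates back to it.

```javascript
// "Library" code, restated from above.
class Printer {
  value = 'b';

  print() {
    console.log(`value is ${this.value}`);
  }
}

// Application code patches the library method in place.
const originalPrint = Printer.prototype.print;
Printer.prototype.print = function () {
  console.log('before print'); // behavior added by the patch
  originalPrint.call(this);    // delegate, keeping `this` bound
};
```

The `originalPrint.call(this)` step is the part where the type of `this` gets interesting for a type-aware compiler like tsickle.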
draffensperger / WORKSPACE
Created Dec 5, 2018
Minimal example of Go import conflicts for `go_proto_library`
workspace(name = "test_proto")
draffensperger / google_sheets_task_scheduler.js
Created Apr 19, 2016
Task scheduler code for Google sheets
function onInstall(e) {
function onOpen(e) {
.addItem('Recalculate schedule', 'use')
draffensperger /
Last active Oct 9, 2015
Why find_each(batch_size: 1) is helpful if objects reference lots of memory and operations are long.

In MPDX, the Google Contacts sync job takes a long time, and the loop that syncs each Google account could benefit from find_each(batch_size: 1). Basically, it seems like find_each pulls in the records in batches and holds each batch in an array to enumerate through. Here's a comparison of the memory results with different batch_sizes that I did using a similar but contrived MemHungry model. To set up, first run 6.times { MemHungry.create }.

Using the default batch_size of 1000, the memory at the end reflects all 6 objects and the RAM they hold onto:

[1] pry(main)> MemHungry.all.find_each { |m| puts; m.eat_memory; }
  MemHungry Load (0.5ms)  SELECT  "mem_hungries".* FROM "mem_hungries"   ORDER BY "mem_hungries"."id" ASC LIMIT 1000
Memory before GC: 112.63671875
Memory before allocation: 159.90234375
Memory after allocation: 1759.90234375
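The batching behavior described above can be sketched in JavaScript (the original is Rails' find_each, which actually paginates by primary key rather than OFFSET): only the current batch is referenced while iterating, so batch_size bounds how many heavy objects stay live at once.

```javascript
// fetchBatch is a hypothetical stand-in for a database query such as
// SELECT ... ORDER BY id LIMIT batchSize OFFSET offset.
function* findEach(fetchBatch, batchSize) {
  let offset = 0;
  while (true) {
    const batch = fetchBatch(offset, batchSize);
    if (batch.length === 0) return;
    yield* batch; // only this batch's records are referenced here
    offset += batch.length;
  }
}
```

With batchSize of 1, at most one record's memory is pinned by the enumerator at a time; with the default of 1000, all six MemHungry records sit in the batch array until the loop finishes.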
draffensperger / job_duplicate_checker.rb
Created Mar 31, 2015
Sidekiq job duplicate checker
module JobDuplicateChecker
  def duplicate_job?(*args)
    job_in_retries?(args) || older_job_running?(args)
  end

  def older_job_running?(args)
    workers =
    self_worker = workers.find { |_, _, work| work['payload']['jid'] == jid }
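The shape of the check can be sketched language-agnostically (in JavaScript here; the original is a Ruby mixin over Sidekiq's worker API, and the jid/args/runAt field names are illustrative): a job is a duplicate if some other running job has the same arguments and started earlier.

```javascript
// currentJob and each entry of runningJobs look like:
//   { jid: '…', args: […], runAt: <epoch seconds> }
function duplicateJob(currentJob, runningJobs) {
  return runningJobs.some(
    (job) =>
      job.jid !== currentJob.jid &&                           // not ourselves
      JSON.stringify(job.args) === JSON.stringify(currentJob.args) && // same work
      job.runAt < currentJob.runAt                            // started earlier
  );
}
```

Comparing start times breaks the tie so that exactly one of two identical jobs considers itself the duplicate and can exit early.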
draffensperger /
Last active Aug 29, 2015
GitHub verification gist

Keybase proof

I hereby claim:

  • I am draffensperger on github.
  • I am draffensperger ( on keybase.
  • I have a public key whose fingerprint is 11AD 4270 6B18 7B21 BDCF 7A62 66E0 7480 D06A 9FE6

To claim this, I am signing this object:

draffensperger / s3_deply_snippet.rb
Created Jan 26, 2014
S3 Deployment Code (initial start)
def each_file
  Dir.glob("**/*") do |file|
    yield file unless file

def file_md5_name(file)
  ext = File.extname file
  digest = Digest::MD5.base64digest file
draffensperger / Rakefile
Last active Jan 2, 2016
The Octopress Rakefile I use for my blog; it serves CSS Source Maps under the :preview task.
require "rubygems"
require "bundler/setup"
require "stringex"
## -- Rsync Deploy config -- ##
# Be sure your public key is listed in your server's ~/.ssh/authorized_keys file
ssh_user = ""
ssh_port = "22"
document_root = "~/"
rsync_delete = false
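The config above feeds Octopress's rsync deploy task; roughly, it drives a command like the following sketch (exact flags vary by Octopress version, SSH_USER stands in for the blank ssh_user value, and --delete would only be added if rsync_delete were true):

```shell
# Push the generated site in public/ to the server's document root over SSH.
rsync -avz -e "ssh -p 22" public/ "$SSH_USER:~/"
```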