Collin Donahue-Oponski colllin

colllin /
Last active Oct 12, 2019
FaunaDB User Token Expiration (for ABAC)

Auth0 + FaunaDB ABAC integration: How to expire Fauna user secrets.

Fauna doesn't yet provide expiration/TTL for ABAC tokens, so we need to implement it ourselves.

What's in the box?

3 JavaScript functions, each of which can be imported into your project or run from the command line using node path/to/script.js arg1 arg2 ... argN:

  1. deploy-schema.js: a JavaScript function that creates the supporting collections and indexes in your Fauna database.
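Since Fauna doesn't enforce a TTL on these secrets, the expiration check has to live in your own code (or in an ABAC role predicate). A minimal sketch of that check in Python; the `issued_at` and `ttl_seconds` names are assumptions for illustration, not part of Fauna's schema:

```python
import time

def is_token_expired(token_doc, now=None, ttl_seconds=3600):
    """Return True if a token document (with a hypothetical 'issued_at'
    epoch-seconds field) is past its time-to-live."""
    now = time.time() if now is None else now
    return (now - token_doc['issued_at']) > ttl_seconds

# A token issued 2 hours ago with a 1-hour TTL is expired:
token = {'issued_at': 1_000_000}
assert is_token_expired(token, now=1_000_000 + 7200, ttl_seconds=3600)
assert not is_token_expired(token, now=1_000_000 + 60, ttl_seconds=3600)
```

The same predicate can be evaluated server-side in an ABAC role so that an expired secret is denied access even before it is cleaned up.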
colllin /
Last active Oct 2, 2019
Auth0 + FaunaDB integration strategy



At the very least, we need two pieces of functionality:

  1. Create a user document in Fauna to represent each Auth0 user.
  2. Exchange an Auth0 JWT for a FaunaDB user secret.
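The shape of that two-step exchange can be sketched as pure data transformations in Python; the JWT verification and the Fauna call are stubbed out, and every field name here is an assumption rather than Auth0's or Fauna's actual schema:

```python
def auth0_claims_to_user_doc(claims):
    """Step 1: map (already-verified) Auth0 JWT claims to a Fauna user
    document. The 'sub' claim is Auth0's stable user id; the other
    fields are illustrative assumptions."""
    return {
        'auth0_id': claims['sub'],
        'email': claims.get('email'),
    }

def exchange_jwt_for_secret(claims, create_token):
    """Step 2: given verified claims and a callable that mints a Fauna
    user secret for a user document (stubbed here), return the secret."""
    user_doc = auth0_claims_to_user_doc(claims)
    return create_token(user_doc)

# Example with a stubbed token-minting function:
claims = {'sub': 'auth0|12345', 'email': 'user@example.com'}
secret = exchange_jwt_for_secret(claims, lambda doc: f"secret-for-{doc['auth0_id']}")
assert secret == 'secret-for-auth0|12345'
```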
colllin / Find Learning Rate.ipynb
Last active May 22, 2019
Learning Rate Finder in PyTorch
colllin /
Last active May 15, 2019
Utility for logging system profile to tensorboardx during pytorch training.
import torch
import psutil
import numpy as np

def log_profile(summaryWriter, step, scope='profile', cpu=True, mem=True,
                gpu=torch.cuda.is_available(), disk=['read_time', 'write_time'], network=False):
    if cpu:
        # Per-core CPU utilization, summarized as min/avg/max across cores
        cpu_usage = np.array(psutil.cpu_percent(percpu=True))
        summaryWriter.add_scalars(f'{scope}/cpu/percent', {
            'min': cpu_usage.min(),
            'avg': cpu_usage.mean(),
            'max': cpu_usage.max(),
        }, step)
colllin / Install NVIDIA Driver and
Created Mar 24, 2019 — forked from wangruohui/Install NVIDIA Driver and
Install NVIDIA Driver and CUDA on Ubuntu / CentOS / Fedora Linux OS
colllin /
Last active Mar 14, 2019
Example startup script / boot script "user data" for running machine learning experiments on EC2 Spot Instances with git & dvc


  • Write your training script so that it can be killed and then automatically resume from the beginning of the current epoch when restarted. (See … for an example training loop incorporating these recommendations.)
    • Save checkpoints at every epoch... (See … for the save_training_state helper function.)
      • model(s)
      • optimizer(s)
      • any hyperparameter schedules (I usually write the epoch number to a JSON file and compute the hyperparameter schedules as a function of the epoch number)
    • At the beginning of training, check for any saved training checkpoints and load all relevant info (models, optimizers, hyperparameter schedules). (See … for the load_training_state helper function.)
    • Consider using smaller epochs by limiting the number of batches pulled from your (shuffled) dataloader during each epoch.
      • This will cause your training…
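The "epoch number in a JSON file" trick above can be sketched as follows; the file name and the decay schedule are made-up examples, not the gist's actual helpers:

```python
import json, os, tempfile

def save_epoch(path, epoch):
    # Persist the current epoch so a restarted spot instance can resume.
    with open(path, 'w') as f:
        json.dump({'epoch': epoch}, f)

def load_epoch(path):
    # Returns 0 when no checkpoint exists (fresh start).
    if not os.path.exists(path):
        return 0
    with open(path) as f:
        return json.load(f)['epoch']

def lr_for_epoch(epoch, base_lr=0.1, decay=0.5, step_every=10):
    # Hyperparameter schedule computed as a pure function of the epoch
    # number, so it survives interruption without any extra state.
    return base_lr * decay ** (epoch // step_every)

path = os.path.join(tempfile.mkdtemp(), 'training_state.json')
assert load_epoch(path) == 0      # fresh start
save_epoch(path, 25)
assert load_epoch(path) == 25     # resumed after a spot interruption
assert lr_for_epoch(25) == 0.1 * 0.5 ** 2
```

Because the schedule is a function of the stored epoch number, a killed-and-restarted run picks up the correct learning rate without serializing the scheduler itself.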
colllin /
Created Dec 31, 2018
Deep Learning Base AMI setup script

Development Setup

$ sudo add-apt-repository ppa:jonathonf/python-3.6
$ sudo apt update
$ sudo apt install python3.6 python3.6-dev
$ wget
$ python3.6
$ rm
$ sudo pip3.6 install pipenv
colllin /
Last active Oct 29, 2018
Setting up raid on EC2 instance
# xvdg1 and xvdh1 are 2 attached volumes; md0 will be the (virtual) raid volume
mdadm -C /dev/md0 -l raid0 -c 64 -n 2 /dev/xvdg1 /dev/xvdh1   # create a 2-device raid0 array with 64K chunks
mdadm -E /dev/xvd[g-h]1    # examine the member volumes
mdadm --detail /dev/md0    # confirm the array came up
mkfs.ext4 /dev/md0         # format the array
df -h                      # check current mounts
mount /dev/md0 /mnt/       # mount the array
colllin /
Last active May 28, 2019
Install NVIDIA drivers & CUDA
  1. Install NVIDIA drivers

    1. Find NVIDIA driver download link for your system at

    2. wget -P ~/Downloads/

    3. sudo rm /etc/X11/xorg.conf # It's ok if this doesn't exist

    4. The NVIDIA driver clashes with the open-source nouveau driver, so deactivate nouveau:

      $ sudo vim /etc/modprobe.d/blacklist-nouveau.conf
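The standard contents of that blacklist file (as commonly documented for NVIDIA installs) are:

```
blacklist nouveau
options nouveau modeset=0
```

On Ubuntu, follow this with `sudo update-initramfs -u` and a reboot so the blacklist takes effect before the driver install.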
colllin /
Last active Sep 16, 2019
PyTorch AdamW optimizer
# Based on
import torch
import math
class AdamW(torch.optim.Optimizer):
"""Implements AdamW algorithm.
It has been proposed in `Fixing Weight Decay Regularization in Adam`_.
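The key idea from that paper, decoupling the weight decay from the gradient-based Adam update, can be illustrated with a single-scalar version in plain Python (no torch); the hyperparameter values are arbitrary examples:

```python
import math

def adamw_step(p, grad, state, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
               weight_decay=1e-2):
    """One AdamW update on a scalar parameter p. Unlike L2 regularization,
    the decay is applied directly to p rather than mixed into the gradient."""
    state['step'] = state.get('step', 0) + 1
    state['m'] = betas[0] * state.get('m', 0.0) + (1 - betas[0]) * grad
    state['v'] = betas[1] * state.get('v', 0.0) + (1 - betas[1]) * grad * grad
    m_hat = state['m'] / (1 - betas[0] ** state['step'])   # bias correction
    v_hat = state['v'] / (1 - betas[1] ** state['step'])
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)          # Adam update
    p = p - lr * weight_decay * p                          # decoupled decay
    return p, state

p, state = 1.0, {}
p, state = adamw_step(p, grad=0.5, state=state)
assert p < 1.0  # both the gradient step and the decay pull p toward zero
```

In the decoupled form, the decay strength no longer depends on the adaptive per-parameter step size, which is the regularization fix the paper argues for.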