Collin Donahue-Oponski (colllin)

colllin / flatten_action.py
Created Aug 19, 2020
OpenAI Gym FlattenAction wrapper
import gym

class FlattenAction(gym.ActionWrapper):
    """Action wrapper that flattens the action."""
    def __init__(self, env):
        super(FlattenAction, self).__init__(env)
        self.action_space = gym.spaces.utils.flatten_space(self.env.action_space)

    def action(self, action):
        return gym.spaces.utils.unflatten(self.env.action_space, action)
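As a usage sketch (assuming the gym package; ToyEnv and its Dict action space are hypothetical, purely for illustration), the wrapper exposes a flat Box to the agent and unflattens actions on the way back to the inner env:

```python
import gym
import numpy as np

# FlattenAction as in the gist above.
class FlattenAction(gym.ActionWrapper):
    """Action wrapper that flattens the action."""
    def __init__(self, env):
        super(FlattenAction, self).__init__(env)
        self.action_space = gym.spaces.utils.flatten_space(self.env.action_space)

    def action(self, action):
        return gym.spaces.utils.unflatten(self.env.action_space, action)

# Hypothetical env with a structured (Dict) action space, for illustration only.
class ToyEnv(gym.Env):
    def __init__(self):
        self.action_space = gym.spaces.Dict({
            'force': gym.spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32),
            'mode': gym.spaces.Discrete(3),
        })
        self.observation_space = gym.spaces.Discrete(1)

env = FlattenAction(ToyEnv())
# The wrapped action space is a flat Box: 2 (force) + 3 (one-hot mode) = 5 dims.
flat_sample = env.action_space.sample()
# action() recovers the original structured action before it reaches the inner env.
structured = env.action(flat_sample)
```

This lets agents that only emit flat vectors (e.g. most policy-gradient implementations) drive envs with Dict or Tuple action spaces.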
CreateIndex({
    name: 'users_by_messageSentAt_desc',
    source: Collection('messages'),
    values: [
        {field: ['data','sentAt'], reverse: true},
        {field: ['data','fromUser']}
    ]
})
colllin / query-mailgun-emails-by-custom-vars.js
Created Jan 3, 2020
Query Mailgun emails by custom variables / custom data
const util = require('util');
const _ = require('lodash');
const mailgun = require("mailgun-js");

async function queryByCustomData(data) {
    let mg = mailgun({apiKey: process.env.MAILGUN_API_KEY});
    let asyncGet = util.promisify(_.bind(mg.get, mg));
    let response = await asyncGet(`/YOUR_MAIL_DOMAIN/events`, {
        // Mailgun returns events whose custom variables match these key/value pairs.
        "user-variables": JSON.stringify(data),
    });
    return response;
}
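For reference, the same query can be sketched in stdlib-only Python against the Mailgun Events API (the domain argument and MAILGUN_API_KEY environment variable are placeholders, as in the gist; treat the endpoint details as an assumption to verify against Mailgun's docs):

```python
import base64
import json
import os
import urllib.parse
import urllib.request

def build_event_query(data):
    """Encode the custom-variable filter as a query string."""
    return urllib.parse.urlencode({'user-variables': json.dumps(data)})

def query_by_custom_data(domain, data):
    """GET /v3/<domain>/events, filtered by custom user variables."""
    url = f'https://api.mailgun.net/v3/{domain}/events?' + build_event_query(data)
    request = urllib.request.Request(url)
    # Mailgun uses HTTP basic auth with username 'api' and your API key.
    token = base64.b64encode(f"api:{os.environ['MAILGUN_API_KEY']}".encode()).decode()
    request.add_header('Authorization', f'Basic {token}')
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```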
colllin / mount-ec2-volume.md
Last active Nov 5, 2019
Mount an EC2 volume
  • Create & attach the volume in the AWS console (it must be in the same availability zone as the instance).
  • From your EC2 instance, list attached volumes:
    sudo fdisk -l
    
  • Find the disk in the list and copy its identifier to your clipboard, e.g. /dev/nvme1n1.
  • If it's a brand-new volume, you probably need to format it:
    Don't do this to a volume with data on it, or the data will be erased. You can skip this step and come back to it if you get a "wrong fs type" error when trying to mount the disk.
colllin / Readme.md
Last active Dec 31, 2020
FaunaDB User Token Expiration (for ABAC)

Auth0 + FaunaDB ABAC integration: How to expire Fauna user secrets.

Fauna doesn't (yet?) provide guaranteed expiration/TTL for ABAC tokens, so we need to implement it ourselves if we care about it.
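Since expiration has to be enforced in application code, the core bookkeeping is just comparing a token's stored creation timestamp against a TTL. A minimal sketch (the TTL value and function name are illustrative, not from the gist):

```python
from datetime import datetime, timedelta, timezone

TOKEN_TTL = timedelta(hours=8)  # illustrative TTL, not from the gist

def token_is_expired(created_at, now=None, ttl=TOKEN_TTL):
    """Return True if a user token created at `created_at` has outlived its TTL."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > ttl
```

In practice you would store the creation timestamp alongside the token document in Fauna and delete tokens that fail this check.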

What's in the box?

Three JavaScript functions, each of which can be imported into your project or run from the command line using node path/to/script.js arg1 arg2 ... argN:

  1. deploy-schema.js: a javascript function for creating supporting collections and indexes in your Fauna database.
colllin / Readme.md
Last active Aug 11, 2020
Auth0 + FaunaDB integration strategy

Goal

Solutions

At the very least, we need two pieces of functionality:

  1. Create a user document in Fauna to represent each Auth0 user.
  2. Exchange an Auth0 JWT for a FaunaDB user secret.
colllin / Find Learning Rate.ipynb
Last active Mar 2, 2020
Learning Rate Finder in PyTorch
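The core of a learning-rate range test is sweeping the learning rate geometrically between two bounds while recording the loss. A sketch of the schedule (bounds and step count are illustrative, not taken from the notebook):

```python
import numpy as np

def lr_schedule(lr_min=1e-7, lr_max=10.0, num_steps=100):
    """Geometric sweep from lr_min to lr_max, one value per batch."""
    return lr_min * (lr_max / lr_min) ** (np.arange(num_steps) / (num_steps - 1))

lrs = lr_schedule()
# During training you'd set optimizer.param_groups[n]['lr'] = lrs[i] at batch i,
# record the loss, and pick an LR somewhat below where the loss starts diverging.
```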
colllin / log_profile.py
Last active May 15, 2019
Utility for logging system profile to tensorboardx during pytorch training.
import torch
import psutil
import numpy as np

def log_profile(summaryWriter, step, scope='profile', cpu=True, mem=True,
                gpu=torch.cuda.is_available(), disk=['read_time', 'write_time'], network=False):
    if cpu:
        # Per-core utilization, reduced to min/avg/max for the dashboard.
        cpu_usage = np.array(psutil.cpu_percent(percpu=True))
        summaryWriter.add_scalars(f'{scope}/cpu/percent', {
            'min': cpu_usage.min(),
            'avg': cpu_usage.mean(),
            'max': cpu_usage.max(),
        }, step)
colllin / Install NVIDIA Driver and CUDA.md
Last active Nov 2, 2019 — forked from wangruohui/Install NVIDIA Driver and CUDA.md
Install NVIDIA Driver and CUDA on Ubuntu / CentOS / Fedora Linux OS
colllin / Readme.md
Last active Mar 14, 2019
Example startup script / boot script "user data" for running machine learning experiments on EC2 Spot Instances with git & dvc

Prerequisites

  • Write your training script so that it can be killed and then automatically resume from the beginning of the current epoch when restarted. (See train-example.py for an example training loop incorporating these recommendations.)
    • Save checkpoints at every epoch... (See utils.py for the save_training_state helper function.)
      • model(s)
      • optimizer(s)
      • any hyperparameter schedules — I usually write the epoch number to a JSON file and compute the hyperparameter schedules as a function of the epoch number.
    • At the beginning of training, check for any saved training checkpoints and load all relevant info (models, optimizers, hyperparameter schedules). (See utils.py for the load_training_state helper function.)
    • Consider using smaller epochs by limiting the number of batches pulled from your (shuffled) dataloader during each epoch.
      • This will cause your trai
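The checkpoint-and-resume recommendations above can be sketched with a JSON-backed training state (the file name, function names, and schedule are illustrative stand-ins for the gist's utils.py helpers; in practice you'd also torch.save the model and optimizer state dicts):

```python
import json
import os

STATE_PATH = 'training_state.json'  # illustrative path

def save_training_state(epoch, path=STATE_PATH):
    """Persist the epoch counter so a restarted run can resume its schedules."""
    with open(path, 'w') as f:
        json.dump({'epoch': epoch}, f)

def load_training_state(path=STATE_PATH):
    """Return the last saved epoch, or 0 for a fresh run."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)['epoch']
    return 0

def learning_rate(epoch, base_lr=0.1, decay=0.5, step_every=10):
    """Hyperparameter schedule computed purely from the saved epoch number."""
    return base_lr * decay ** (epoch // step_every)
```

Because the schedule is a pure function of the epoch number, a spot-instance restart only needs the JSON file to pick up exactly where the hyperparameters left off.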