# Idioms and their meanings are from https://en.wikipedia.org/wiki/English-language_idioms
# You'll need Hugging Face transformers, numpy, pytorch, and matplotlib for this demo.
import argparse

import numpy as np
import matplotlib.pyplot as plt
from transformers import pipeline, AutoTokenizer, AutoModel
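The preview above shows only the demo's imports. As a rough sketch of what an idiom-embedding demo along these lines might do (the idiom list, the model choice, and the cosine-similarity comparison below are assumptions for illustration, not the gist's actual code):

import numpy as np
from transformers import pipeline

# A few idioms from the Wikipedia list referenced above (subset assumed).
idioms = ["break a leg", "piece of cake", "under the weather"]

# Mean-pool token embeddings from a feature-extraction pipeline
# (model choice is an assumption).
extractor = pipeline("feature-extraction", model="bert-base-uncased")
embeddings = np.array([np.mean(extractor(s)[0], axis=0) for s in idioms])

# Cosine similarity between each pair of idiom embeddings.
unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
print(unit @ unit.T)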
@riveSunder
riveSunder / gol_pytorch_benchmarks.ipynb
Created November 5, 2020 04:55
Speeding Up Conway's Game of Life in PyTorch.
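The notebook itself can't be previewed in this listing. As a generic sketch of the trick the title points at (updating the whole Game of Life grid in one vectorized step by counting neighbors with a 3x3 convolution; this is an illustrative version, not the notebook's benchmarked code):

import torch
import torch.nn.functional as F

def gol_step(grid):
    # Count each cell's eight neighbors with a single 3x3 convolution.
    kernel = torch.ones(1, 1, 3, 3)
    kernel[0, 0, 1, 1] = 0.0
    neighbors = F.conv2d(grid, kernel, padding=1)
    # Conway's rules: a cell lives if it has exactly 3 live neighbors,
    # or if it is currently alive and has exactly 2.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).float()

grid = (torch.rand(1, 1, 64, 64) < 0.5).float()
for _ in range(100):
    grid = gol_step(grid)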
@riveSunder
riveSunder / adam_update.py
Created September 19, 2020 17:39
Adam Update
"""
Just an example of computing the adam update for a list of parameter gradients.
"""
def adam_update(l_grad, l_m=None, l_v=None):
# l_n = list of running exponential average of first moment of gradient
# l_v = list of running exponential average of second moment of gradient
# l_grad = list of gradients of current batch
β1 = 0.9
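The preview cuts off after β1. For reference, a self-contained version of the same update (the learning rate, default hyperparameters, and return convention are assumptions, not necessarily the gist's):

import numpy as np

def adam_update(l_grad, l_m=None, l_v=None, lr=1e-3, t=1):
    # Keep exponential moving averages of the first and second moments of
    # the gradient, bias-correct them, and scale each parameter's step.
    # The caller subtracts the returned updates from the parameters.
    β1, β2, eps = 0.9, 0.999, 1e-8
    if l_m is None:
        l_m = [np.zeros_like(g) for g in l_grad]
    if l_v is None:
        l_v = [np.zeros_like(g) for g in l_grad]
    l_update = []
    for i, g in enumerate(l_grad):
        l_m[i] = β1 * l_m[i] + (1 - β1) * g
        l_v[i] = β2 * l_v[i] + (1 - β2) * g * g
        m_hat = l_m[i] / (1 - β1 ** t)  # bias-corrected first moment
        v_hat = l_v[i] / (1 - β2 ** t)  # bias-corrected second moment
        l_update.append(lr * m_hat / (np.sqrt(v_hat) + eps))
    return l_update, l_m, l_v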
@riveSunder
riveSunder / example_results.txt
Created September 3, 2020 20:12
Benchmarks comparing Autograd, JAX, and PyTorch for matmuls and convolutions, with emphasis on RL agent mock rollouts.
matmul timings (all times in seconds; steps = 1000 throughout):

 n   dim_x   autograd   JAX (jit)   JAX (no jit)
 1      64   1.96e-03    7.68e-01       1.07e-01
 1     256   1.98e-03    2.89e-01       1.09e-01
 1    1024   2.16e-03    2.96e-01       1.13e-01
 1    4096   2.71e-03    2.97e-01       1.14e-01
 1    8192   3.44e-03    3.51e-01       1.14e-01
32      64   2.23e-03    2.43e-01       1.07e-01
32     256   2.94e-03    1.83e-01       1.09e-01
32    1024   7.45e-03    2.41e-01       1.31e-01
32    4096   1.27e-02    2.42e-01       1.11e-01
32    8192   2.21e-02    2.46e-01       1.45e-01
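For context, a minimal version of how one of these matmul timings could be taken (a generic sketch, not the gist's harness, which also covers convolutions, PyTorch, and mock RL rollouts):

import time
import numpy as np
import jax
import jax.numpy as jnp

def time_matmuls(n=32, dim_x=256, steps=1000):
    x = np.random.randn(n, dim_x, dim_x).astype(np.float32)

    # NumPy baseline (Autograd wraps numpy, so its forward pass is similar).
    t0 = time.perf_counter()
    for _ in range(steps):
        _ = x @ x
    t_np = time.perf_counter() - t0

    # JAX, with and without jit. block_until_ready() forces the async
    # dispatch to finish so the timing is honest.
    jx = jnp.asarray(x)
    matmul = lambda a: a @ a
    jitted = jax.jit(matmul)
    _ = jitted(jx).block_until_ready()  # compile outside the timed loop

    t0 = time.perf_counter()
    for _ in range(steps):
        _ = jitted(jx).block_until_ready()
    t_jit = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(steps):
        _ = matmul(jx).block_until_ready()
    t_nojit = time.perf_counter() - t0

    print(f"matmuls np: {t_np:.2e} s, JAX: {t_jit:.2e}, JAX no jit: {t_nojit:.2e}."
          f" n = {n}, dim_x = {dim_x}, steps = {steps}")

time_matmuls()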
@riveSunder
riveSunder / call_train.jl
Created August 30, 2020 21:47
Code Snippets for XOR Tutorial
# hyperparameters for the XOR demo
dim_x = 3          # input dimension
dim_h = 4          # hidden layer dimension
dim_y = 1          # output dimension
l2_reg = 1e-4      # L2 regularization coefficient
lr = 1e-2          # learning rate
max_steps = 1400000

# initialize weights and generate 1024 XOR samples
θ = init_weights(dim_x, dim_y, dim_h)
x, y = get_xor(1024, dim_x)
println(size(x))

# violin plot of the input-to-hidden weights
plt = violin([" "], reshape(θ[:wxh], dim_x * dim_h), label="wxh", title="Weights", alpha=0.5)
@riveSunder
riveSunder / dungeon_gpt3_2.md
Last active November 10, 2022 13:56
Prompt experiments with GPT-3 on AIDungeon

Dragon (GPT-3):

Model prompts are shown in bold; multiple outputs for the same input are denoted by ...

You are an artificial intelligence enthusiast working on an article highlighting the capabilities of a massive new language model called GPT-3, especially as compared to its smaller predecessor GPT-2. GPT-3 has increased the number of parameters more than 100-fold over GPT-2, from 1.5 billion to 175 billion parameters. As a result, the new model can generate text that reads eerily like a human. For example, prompting GPT-3 with the text "One way to fight the climate crisis is to cryogenically preserve half of all humanity, indefinitely", GPT-3 generates:

"To stop global warming we must create a cryogenic storage facility for humans and other life forms."

The article you are writing about is going to be based around this new technology, so you have been spending a lot of time playing around with it. You have also been using your own brain to test out the new models, which is something no one else in the world

@riveSunder
riveSunder / onn_smiley.py
Created July 2, 2020 21:02
Training a multi-slice optical system to produce a target output image with gradient descent.
import autograd.numpy as np
from autograd import grad
import matplotlib.pyplot as plt
import time
import skimage
import skimage.io as sio
def asm_prop(wavefront, length=32.e-3, wavelength=550.e-9, distance=10.e-3):
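The preview stops at the signature of asm_prop. For orientation, here is a generic angular spectrum propagation step with that signature (a standard textbook construction, assumed rather than recovered from the gist; the gist presumably uses autograd.numpy so gradients can flow through the FFTs):

import numpy as np

def asm_prop(wavefront, length=32.e-3, wavelength=550.e-9, distance=10.e-3):
    # Angular spectrum method: transform the field to spatial frequencies,
    # multiply by the free-space transfer function exp(i * kz * distance),
    # and transform back.
    n = wavefront.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=length / n)            # spatial frequencies (1/m)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = k ** 2 - (2 * np.pi * fxx) ** 2 - (2 * np.pi * fyy) ** 2
    kz = np.sqrt(np.maximum(arg, 0.0))              # zero out evanescent terms
    h = np.exp(1.0j * kz * distance)
    return np.fft.ifft2(np.fft.fft2(wavefront) * h)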
@riveSunder
riveSunder / autograd_mlp.py
Created June 26, 2020 19:20
Simple MLP demo using [autograd](https://github.com/HIPS/autograd)
"""
Simple MLP demo using [autograd](https://github.com/HIPS/autograd)
With l1 and l2 regularization.
Depends on autograd and scikit-learn (the latter for the mini digits dataset)
pip install autograd scikit-learn
"""
from autograd import numpy as np
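The preview ends at the import. A minimal sketch of an autograd MLP with l1 and l2 penalties (random data stands in for the scikit-learn digits the gist actually uses; layer sizes and hyperparameters are assumptions):

from autograd import grad
from autograd import numpy as np

def mlp_loss(weights, x, y, l1=1e-4, l2=1e-4):
    # Two-layer MLP with a tanh hidden layer and mean squared error,
    # plus l1 and l2 penalties on all weights.
    w0, w1 = weights
    h = np.tanh(np.dot(x, w0))
    pred = np.dot(h, w1)
    mse = np.mean((pred - y) ** 2)
    reg = sum(l1 * np.sum(np.abs(w)) + l2 * np.sum(w * w) for w in weights)
    return mse + reg

loss_grad = grad(mlp_loss)  # gradient with respect to the weights (argument 0)

x = np.random.randn(64, 8)
y = np.random.randn(64, 1)
weights = [np.random.randn(8, 16) * 0.1, np.random.randn(16, 1) * 0.1]

# plain gradient descent on the regularized loss
for step in range(1000):
    g = loss_grad(weights, x, y)
    weights = [w - 1e-2 * dw for w, dw in zip(weights, g)]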
@riveSunder
riveSunder / 40-libinput.conf
Created April 21, 2020 20:01
Enable scroll for Logitech Trackman Marble trackball on Ubuntu (tested on 18.04 on 2020-04-21) for right-handed use
# Credit goes to adrianp at https://gist.github.com/adrianp/a5458eeb2038a6240282b0d898e0391d for the solution.
# adrianp further credits https://wiki.archlinux.org/index.php/Logitech_Marble_Mouse#Using_libinput
# This version has the buttons arranged for righties.
# Big left button is left click, big right button is right click, and small left button is scroll
# for left-handed use (scroll with small right button) change '"ScrollButton" "8"' to '"ScrollButton" "9"'
# and also change 'Option "ButtonMapping" "3 9 1 4 5 6 7 2 2"' if you want the right and left-click buttons reversed
# add the following to /usr/share/X11/xorg.conf.d/40-libinput.conf
Section "InputClass"
Identifier "Marble Mouse"
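# (The preview is truncated here. Based on the Arch wiki page credited above,
# the rest of the stanza is roughly the following -- these option values are
# taken from that reference, not recovered from this gist.)
MatchProduct "Logitech USB Trackball"
Driver "libinput"
Option "ScrollMethod" "button"
Option "ScrollButton" "8"
EndSection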
"""
A modification of craffel's `draw_neural_net` https://gist.github.com/craffel/2d727968c3aaebd10359
Changes:
- updated for Python 3 (xrange --> range)
- the function now takes the layers themselves as input instead of the layer dimensions
- the function draws connections according to the weights: negative weights are colored
  red (positive are black), and the width of each connection is determined by the
  weight magnitude
"""
import matplotlib.pyplot as plt