@redwrasse
redwrasse / toy_stwalk.py
Last active May 20, 2021 23:59
toy implementation of 'Space-Time Correspondence as a Contrastive Random Walk'
# Pseudocode implementation.
# Algorithm 1: pseudocode in a PyTorch-like style.
#
# for x in loader:  # x: batch with B sequences
#     # Split image into patches
#     # B x C x T x H x W -> B x C x T x N x h x w
#     x = unfold(x, (patch_size, patch_size))
#     x = spatial_jitter(x)
#     # Embed patches (B x C x T x N)
#     v = l2_norm(resnet(x))
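The preview cuts off right after the patch embeddings. A minimal sketch of the step the pseudocode is building toward, under assumptions about the gist's code (the name walk_loss and the temperature value are made up here): adjacent-frame affinities become softmax transition matrices, the walk runs forward to the last frame and back, and the loss is cycle consistency, i.e. the round trip should return each patch to itself.

import torch
import torch.nn.functional as F

def walk_loss(v, temperature=0.07):
    # v: B x T x N x D, l2-normalized patch embeddings per frame
    B, T, N, D = v.shape
    sims = [v[:, t] @ v[:, t + 1].transpose(1, 2) / temperature
            for t in range(T - 1)]
    fwd = [F.softmax(s, dim=-1) for s in sims]                   # t -> t+1
    bwd = [F.softmax(s.transpose(1, 2), dim=-1) for s in sims]   # t+1 -> t
    # Palindrome walk: chain transitions out to the last frame and back
    P = torch.eye(N, device=v.device).expand(B, N, N)
    for A in fwd:
        P = P @ A
    for A in reversed(bwd):
        P = P @ A
    # Cycle consistency: each patch should walk back to itself
    logp = (P + 1e-8).log().reshape(B * N, N)
    targets = torch.arange(N, device=v.device).repeat(B)
    return F.nll_loss(logp, targets)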
@redwrasse
redwrasse / heteroscedastic.py
Last active January 8, 2021 21:23
heteroscedastic model: input-dependent variance sigma^2(x) in a discriminative Gaussian
# -*- coding: utf-8 -*-
"""
Standard discriminative Gaussian
    y ~ N(f(x), sigma^2)
as well as the heteroscedastic model
    y ~ N(f(x), sigma^2(x)),
trained on a dataset that requires the heteroscedastic model:
    x in R, y in R^2
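The docstring is cut off by the preview. A minimal sketch of the heteroscedastic objective it describes (the two-headed model interface is an assumption, not the gist's actual code): the negative log-likelihood both penalizes large predicted variance and down-weights the squared error where the variance is large.

import torch

def heteroscedastic_nll(y, mean, log_var):
    # -log N(y; mean, exp(log_var)) per example, dropping the log sqrt(2 pi) constant
    return (0.5 * log_var + (y - mean) ** 2 / (2 * torch.exp(log_var))).mean()

# Usage sketch: a trunk with a mean head f(x) and a log-variance head log sigma^2(x)
# mean, log_var = model(x)
# loss = heteroscedastic_nll(y, mean, log_var)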
@redwrasse
redwrasse / gmm_gd.py
Created November 21, 2020 00:04
Attempted direct gradient descent on a two-component Gaussian mixture model
# gmm_gd.py
"""
Direct gradient descent on a two-component Gaussian mixture model.
Not the best way to fit a GMM: the EM algorithm is typically used instead,
and training here is highly unstable.
Model:
    p(x) = pi * phi_1 + (1 - pi) * phi_2
    phi_1, phi_2 normal densities
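A minimal sketch of what direct gradient descent on this model can look like (the synthetic data and the sigmoid/exp parameterizations are assumptions, not the gist's code): keep pi in (0, 1) via a sigmoid and the scales positive via exp, then minimize the negative log-likelihood directly.

import math
import torch

def gmm_nll(x, logit_pi, mu, log_sigma):
    # p(x) = pi * N(x; mu[0], sigma[0]^2) + (1 - pi) * N(x; mu[1], sigma[1]^2)
    pi = torch.sigmoid(logit_pi)
    sigma = torch.exp(log_sigma)
    comp = torch.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
           / (sigma * math.sqrt(2 * math.pi))
    p = pi * comp[:, 0] + (1 - pi) * comp[:, 1]
    return -(p + 1e-12).log().mean()

x = torch.cat([torch.randn(500) - 2.0, torch.randn(500) + 2.0])
logit_pi = torch.zeros((), requires_grad=True)
mu = torch.tensor([-1.0, 1.0], requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
opt = torch.optim.SGD([logit_pi, mu, log_sigma], lr=0.05)
for step in range(2000):
    opt.zero_grad()
    loss = gmm_nll(x, logit_pi, mu, log_sigma)
    loss.backward()
    opt.step()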
@redwrasse
redwrasse / gd_gaussian.py
Last active November 20, 2020 23:03
gradient descent solving for mu, sigma in a generative Gaussian
# -*- coding: utf-8 -*-
"""
Generative Gaussian model: minimize <-log p>_data w.r.t. mu, sigma
by gradient descent.
    -log p = log sigma + log sqrt(2 pi) + (x - mu)^2 / (2 sigma^2)
so
    grad_mu (-log p)    = -(x - mu) / sigma^2
    grad_sigma (-log p) = 1 / sigma - (x - mu)^2 / sigma^3
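A minimal numpy sketch applying exactly these two gradient formulas, averaged over a sample (the data and learning rate are made up):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.5, size=1000)

mu, sigma, lr = 0.0, 1.0, 0.1
for step in range(500):
    grad_mu = np.mean(-(x - mu) / sigma**2)
    grad_sigma = np.mean(1.0 / sigma - (x - mu)**2 / sigma**3)
    mu -= lr * grad_mu
    sigma -= lr * grad_sigma
# mu, sigma now approach the sample mean and standard deviation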
@redwrasse
redwrasse / array_reinforcement_learning.py
Created November 13, 2020 17:46
Toy reinforcement learning on an array
# array_reinforcement_learning.py
"""
array_reinforcement_learning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Reinforcement learning is performed on a 1-dimensional
finite state space ("array") of k elements:
S = {1,...,k}
There are two possible actions: move right (a = 1), or move left (a = -1),
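The preview ends before the reward and update rule, so here is one plausible completion rather than the gist's actual algorithm: tabular Q-learning on S = {1,...,k} with a reward for reaching the right end (the reward placement and hyperparameters are assumptions).

import random

k = 10                       # states S = {1, ..., k}
actions = (1, -1)            # move right / move left
Q = {(s, a): 0.0 for s in range(1, k + 1) for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(500):
    s = random.randint(1, k - 1)
    while s != k:            # episode ends at the right boundary
        if random.random() < eps:
            a = random.choice(actions)                  # explore
        else:
            a = max(actions, key=lambda b: Q[(s, b)])   # exploit
        s_next = min(max(s + a, 1), k)                  # clip at the array ends
        r = 1.0 if s_next == k else 0.0                 # reward only at the goal
        target = r + gamma * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next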
@redwrasse
redwrasse / gist:221f0d2bb566c616697d3e509e31d784
Created November 12, 2020 21:51
learning query completion as a product of next-character models
"""
Attempting to learn query completion as a product of next-character models:
    P(x_c | x_q) = prod_i P(x_i | x_{1:i-1})
This does not scale well for reasonably sized text documents; it needs
shorter length distributions or approximations.
"""
import numpy as np
import random
import tensorflow.keras as keras
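Using the imports above, a minimal sketch of the factorization in the docstring (the toy vocabulary, model size, and helper name completion_log_prob are assumptions, in 2020-era tf.keras style): a single next-character model scores a completion as the sum of per-character log conditionals.

vocab = sorted(set("abcdefgh "))         # toy character vocabulary
char_to_ix = {c: i for i, c in enumerate(vocab)}
maxlen = 16

# Toy next-character model: embed the (padded) prefix, predict the next character
model = keras.Sequential([
    keras.layers.Embedding(len(vocab), 8, input_length=maxlen),
    keras.layers.LSTM(32),
    keras.layers.Dense(len(vocab), activation="softmax"),
])

def completion_log_prob(query, completion):
    # log P(x_c | x_q) = sum_i log P(x_i | x_{1:i-1})
    logp, prefix = 0.0, query
    for ch in completion:
        ids = [char_to_ix[c] for c in prefix[-maxlen:]]
        ids = [0] * (maxlen - len(ids)) + ids          # left-pad to maxlen
        probs = model.predict(np.array([ids]), verbose=0)[0]
        logp += np.log(probs[char_to_ix[ch]] + 1e-12)
        prefix += ch
    return logp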
@redwrasse
redwrasse / backprop_ex.py
Last active February 11, 2021 05:51
example backprop
"""
backprop algorithm on F[a] = xa^2 + a
For each node function f_i
- method to calculate value on input, f_i(x_i)
- method to calculate derivative value on input, K_ij
- method to calculate parameter derivative on input, xi_i
Suppose a single input x in R. Suppose the functional to be
optimized is F[a] = xa^2 + a
"""
Currently trains with decreasing loss
*** epoch: 0 epoch loss: 276.47448682785034
*** epoch: 1 epoch loss: 216.9058997631073
*** epoch: 2 epoch loss: 190.01888144016266
*** epoch: 3 epoch loss: 171.68642991781235
*** epoch: 4 epoch loss: 157.7317717075348
*** epoch: 5 epoch loss: 145.89844578504562
...
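For this functional the chain of node derivatives collapses to a single line, dF/da = 2 x a + 1, which gives a quick way to sanity-check the backprop machinery (a minimal sketch with made-up data; the gist itself builds the gradient from per-node K_ij and xi_i values):

import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(1.0, 2.0, size=100)    # made-up inputs

a, lr = 5.0, 0.01
for epoch in range(6):
    epoch_loss = 0.0
    for x in xs:
        epoch_loss += x * a**2 + a      # F[a] = x a^2 + a
        a -= lr * (2 * x * a + 1)       # dF/da = 2 x a + 1
    print(f"*** epoch: {epoch} epoch loss: {epoch_loss}")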
@redwrasse
redwrasse / conv_ar.py
Last active August 11, 2020 19:08
demonstrating an auto-regressive model (a step toward a full generative model) as a trained convolutional layer
"""
---------------------------------------------------
Output:
epoch loss: 78.85499735287158
epoch loss: 0.0008048483715437094
epoch loss: 7.917497569703835e-06
epoch loss: 7.784523854692527e-08
epoch loss: 1.082900831506084e-09
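The idea in the title fits in a few lines (PyTorch and the AR(2) coefficients below are assumptions about the setup, not the gist's code): an autoregressive process x_t = sum_k w_k x_{t-k} is exactly a length-p convolution over the history, so fitting a Conv1d by next-step regression recovers the AR coefficients.

import torch
import torch.nn as nn

# Synthesize an AR(2) series: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + noise
T, p = 2000, 2
x = torch.zeros(T)
for t in range(p, T):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + 0.05 * torch.randn(())

conv = nn.Conv1d(1, 1, kernel_size=p, bias=False)
opt = torch.optim.Adam(conv.parameters(), lr=0.01)
inp = x[:-1].view(1, 1, -1)    # full history except the last point
tgt = x[p:].view(1, 1, -1)     # next-step targets
for epoch in range(200):
    opt.zero_grad()
    loss = ((conv(inp) - tgt) ** 2).mean()
    loss.backward()
    opt.step()
print(conv.weight.view(-1))    # approaches [-0.3, 0.6] (cross-correlation order)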
@redwrasse
redwrasse / objectrackerstep1.html
Created July 17, 2020 01:41
object tracker step 1
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Object Tracking Service</title>
</head>
<body>
<h1>Object Tracker</h1>
<video controls width="1250">