Ben Bolte (codekansas)

🏠
Working from home
View GitHub Profile
codekansas / sparse_filtering.py
Last active February 20, 2021 18:12
Implementation of sparse filtering using TensorFlow
"""Implementation of sparse filtering using TensorFlow.
Original MATLAB code: https://github.com/jngiam/sparseFiltering
Paper: https://papers.nips.cc/paper/4334-sparse-filtering.pdf
"""
# For Python 3 compatibility.
from __future__ import print_function
# For building the algorithm.
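The preview above is cut off before the TensorFlow code. The objective it implements can be sketched in plain NumPy (this is my own sketch of the sparse filtering objective from the Ngiam et al. paper, not the gist's TensorFlow code; shapes and names are assumptions):

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Sparse filtering objective, NumPy sketch.

    W: (n_features, n_inputs) weight matrix.
    X: (n_inputs, n_examples) data matrix.
    """
    F = W @ X                       # linear features: (n_features, n_examples)
    Fs = np.sqrt(F ** 2 + eps)      # soft absolute value
    # Normalize each row (feature) across examples, then each column (example).
    Nf = Fs / np.linalg.norm(Fs, axis=1, keepdims=True)
    Nf = Nf / np.linalg.norm(Nf, axis=0, keepdims=True)
    return Nf.sum()                 # L1 penalty on the normalized features

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
X = rng.standard_normal((8, 16))
loss = sparse_filtering_objective(W, X)
```

Minimizing this quantity over `W` (e.g. with L-BFGS, as in the original MATLAB code) is what drives the learned features toward sparsity.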
codekansas / maximum_noise_entropy.py
Last active February 21, 2019 08:41
Maximum Noise Entropy implementation using TensorFlow
"""Implementation of maximum noise entropy using TensorFlow.
Paper: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002249
"""
# For Python 3 compatibility.
from __future__ import print_function
# For building the algorithm.
import tensorflow as tf
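The gist's TensorFlow code is truncated above. The model it fits, second-order maximum noise entropy, predicts spike probability as a logistic function of a quadratic form in the stimulus. A minimal NumPy sketch of that response function (parameter names and sign conventions are my assumptions):

```python
import numpy as np

def mne_response(s, a, h, J):
    """P(spike | stimulus s) under a second-order MNE model:
    a logistic function of a quadratic form in the stimulus."""
    q = a + h @ s + s @ J @ s
    return 1.0 / (1.0 + np.exp(-q))

rng = np.random.default_rng(0)
d = 5
s = rng.standard_normal(d)          # stimulus vector
a = 0.1                             # bias term
h = rng.standard_normal(d)          # linear filter
J = rng.standard_normal((d, d))
J = (J + J.T) / 2                   # symmetric second-order term
p = mne_response(s, a, h, J)
```

Fitting the model amounts to maximizing the likelihood of observed spikes over `a`, `h`, and `J`, which is where TensorFlow's automatic differentiation comes in.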
#!/usr/bin/env python
"""Example of building a model to solve an XOR problem in Keras."""
import keras
import numpy as np
# XOR data.
x = np.array([
[0, 1],
[1, 0],
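The Keras snippet above is cut off after the first XOR rows. The same problem can be solved end to end with a small hand-rolled NumPy network (the architecture and hyperparameters below are my own choices, not the gist's):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# 2-8-1 network: tanh hidden layer, sigmoid output, squared-error loss.
W1 = rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    d2 = (out - y) * out * (1 - out)   # grad w.r.t. output pre-activation
    d1 = (d2 @ W2.T) * (1 - h ** 2)    # grad w.r.t. hidden pre-activation
    W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(0)
    W1 -= lr * (x.T @ d1); b1 -= lr * d1.sum(0)

h = np.tanh(x @ W1 + b1)
preds = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
```

XOR is the classic example of a problem a single linear layer cannot solve; the hidden layer is what makes it learnable.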
codekansas / theano_two_layer.py
Created April 12, 2017 18:34
Two layer neural network in Theano.
import theano
import theano.tensor as T
import numpy as np
X = theano.shared(value=np.asarray([[1, 0], [0, 0], [0, 1], [1, 1]]), name='X')
y = theano.shared(value=np.asarray([[1], [0], [1], [0]]), name='y')
rng = np.random.RandomState(1234)
LEARNING_RATE = 0.01
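The Theano script is truncated before the network itself. Theano would build the forward pass symbolically and derive the updates with `T.grad`; the same forward pass and one gradient-descent step can be sketched in NumPy, reusing the data and learning rate from the snippet (the weight shapes and initialization are my assumptions):

```python
import numpy as np

X = np.asarray([[1, 0], [0, 0], [0, 1], [1, 1]], dtype=float)
y = np.asarray([[1], [0], [1], [0]], dtype=float)  # XOR targets
LEARNING_RATE = 0.01

rng = np.random.RandomState(1234)
W1 = rng.uniform(-1, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.uniform(-1, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads():
    """Mean squared error and its gradients (what T.grad would return)."""
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = ((out - y) ** 2).mean()
    d_out = (2.0 / len(X)) * (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    return loss, (X.T @ d_h, d_h.sum(0), h.T @ d_out, d_out.sum(0))

loss0, (gW1, gb1, gW2, gb2) = loss_and_grads()
W1 -= LEARNING_RATE * gW1; b1 -= LEARNING_RATE * gb1
W2 -= LEARNING_RATE * gW2; b2 -= LEARNING_RATE * gb2
loss1, _ = loss_and_grads()
```

In Theano the update tuples `(W1, W1 - LEARNING_RATE * gW1)` and so on would be passed as `updates` to `theano.function`, which applies them to the shared variables on each call.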
#!/usr/bin/env python
"""The training script for the DANN model."""
from __future__ import division
from __future__ import print_function
import csv
import os
import itertools
import sys
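The DANN training script is cut off after its imports. The distinctive piece of a domain-adversarial network is the gradient reversal layer between the feature extractor and the domain classifier: identity on the forward pass, gradient negated (and scaled by a factor λ) on the backward pass, so the features are trained to *confuse* the domain classifier. A minimal NumPy sketch of that rule (my own sketch; λ and the function names are assumptions):

```python
import numpy as np

def grad_reverse_forward(x):
    """Forward pass of a gradient reversal layer: identity."""
    return x

def grad_reverse_backward(grad_output, lam=1.0):
    """Backward pass: flip the gradient's sign and scale by lambda."""
    return -lam * grad_output

features = np.array([1.0, -2.0, 3.0])
upstream_grad = np.array([0.5, 0.5, 0.5])
out = grad_reverse_forward(features)
down = grad_reverse_backward(upstream_grad, lam=2.0)
```

In an autograd framework this is implemented as a custom op with exactly these forward/backward rules; everything else in DANN is an ordinary classifier.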
codekansas / factors.scm
Created September 18, 2017 19:52
Program for finding all the factors of a number in Scheme.
; Finds the prime factorization of a number (with multiplicity) in O(sqrt n) time.
(define (factors n)
(define (@factors n i a)
(cond ((= (modulo n i) 0) (@factors (quotient n i) i (cons i a)))
((>= (* i i) n) (if (= 1 n) a (cons n a)))
(else (@factors n (+ i 1) a))))
(@factors n 2 `()))
; Multiplies all the elements in a list.
(define (mult l)
  (if (null? l) 1 (* (car l) (mult (cdr l)))))
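The Scheme routine returns the prime factors with multiplicity (e.g. 12 yields 2, 2, 3). The same O(√n) trial-division loop translates directly to Python (the function name is mine):

```python
def prime_factors(n):
    """Prime factorization with multiplicity by trial division, O(sqrt n)."""
    factors, i = [], 2
    while i * i <= n:
        while n % i == 0:      # divide out each prime completely
            factors.append(i)
            n //= i
        i += 1
    if n > 1:
        factors.append(n)      # remaining cofactor is prime
    return factors
```

Multiplying the returned list back together (the role of `mult` above) recovers the original number.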
codekansas / binarized_nn_inference.cpp
Created November 1, 2017 02:25
Efficient binarized neural network inference
/* Binarized neural network inference example.
This is a simple C++ program for doing inference on binarized
neural networks. To do this efficiently, the code below uses the
"bitset" class, whose count() method compiles down to the "popcnt"
instruction, counting the 1's in a machine word in a single
instruction. A row-column dot product therefore costs O(B / w) for
bit-width B and word size w, so a matrix multiplication between an
(A, B) and a (B, C) matrix takes O(A * C * B / w) time; in other
words, each value in the output matrix is computed in effectively
constant time whenever B fits in a few machine words.
*/
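The same trick can be sketched in Python, with arbitrary-precision ints standing in for `std::bitset`: pack a ±1 vector into a bit mask, and the dot product of two such vectors of length n is n − 2·popcount(a XOR b), since XOR marks exactly the positions where the signs differ (names below are mine):

```python
def pack(v):
    """Pack a +/-1 vector into a bit mask (+1 -> 1, -1 -> 0)."""
    mask = 0
    for i, x in enumerate(v):
        if x > 0:
            mask |= 1 << i
    return mask

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed +/-1 vectors of length n via XOR + popcount."""
    differing = bin(a_bits ^ b_bits).count("1")  # positions with opposite signs
    return n - 2 * differing

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
d = binary_dot(pack(a), pack(b), len(a))
```

In the C++ version, `operator^` on bitsets plus `.count()` does this for a whole word at a time, which is where the speedup over float multiply-accumulate comes from.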
codekansas / bert_pytorch.py
Created November 1, 2018 10:25
Implementation of the transformer block used by BERT
#!/usr/bin/env python3
"""Implementation of the transformer block used by BERT.
I saw an excellent implementation of the complete BERT model here:
https://github.com/codertimo/BERT-pytorch
I rewrote a simplified version of the transformer block below. This was mainly
for my own understanding (to get a grasp of the dimensions and how the whole
attention mechanism works), but I tried to document it thoroughly so that other
people can understand it without having to dig too far into the original code.
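The core of each attention head in the transformer block is scaled dot-product attention, softmax(QKᵀ/√d_k)·V. A minimal single-head NumPy sketch (shapes are my own example values; the gist's PyTorch code wraps this in multi-head projections, residual connections, and layer norm):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the core of one attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                         # (seq_q, seq_k)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))   # 3 query positions, d_k = 4
K = rng.standard_normal((5, 4))   # 5 key positions
V = rng.standard_normal((5, 4))
out, attn = scaled_dot_product_attention(Q, K, V)
```

Each output row is a convex combination of the value rows, weighted by how strongly the corresponding query attends to each key.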
// Parametric OpenSCAD design for a laptop stand.
// Laptop dimensions.
lwid = 304.1;
lhei = 212.4;
ldep = 15;
// Stand long dimensions.
slen = 200;
sswid = 60;
thickness = 3;
padding = 1;
slat_size = 3;
short_length = 46.36;
long_length = 61.76;
height = 51.76;
first_indent = 15.92;
second_indent = 29.97;