Madison May (madisonmay)

@Newmu
Newmu / simple_gan.py
Created July 10, 2015 20:39
Simple Generative Adversarial Network Demo
import os
import numpy as np
from matplotlib import pyplot as plt
from time import time
# foxhound: the gist author's Theano-based ML utility library; these modules provide
# activation functions, parameter update rules, and weight initialisers, plus Theano helpers
from foxhound import activations
from foxhound import updates
from foxhound import inits
from foxhound.theano_utils import floatX, sharedX
@mangecoeur
mangecoeur / concurrent.futures-intro.md
Last active January 9, 2024 16:04
Easy parallel Python with concurrent.futures

As of version 3.3, Python includes the very promising concurrent.futures module, with elegant context managers for running tasks concurrently. Thanks to its simple and consistent interface, you can use both threads and processes with minimal effort.

For most CPU-bound tasks - anything that does heavy number crunching - you want your program to use all the CPUs in your PC. The simplest way to get a CPU-bound task to run in parallel is to use the ProcessPoolExecutor, which will create enough sub-processes to keep all your CPUs busy.

We use the context manager like so:

with concurrent.futures.ProcessPoolExecutor() as executor:
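
The preview cuts off at the with statement. A minimal sketch of how such a snippet is typically completed (the is_prime function and the numbers here are hypothetical, not taken from the gist):

import concurrent.futures
import math

def is_prime(n):
    # deliberately CPU-bound work, so the extra processes actually help
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

numbers = [112272535095293, 112582705942171, 115280095190773, 1099726899285419]

if __name__ == '__main__':
    # the __main__ guard matters for process pools on platforms that spawn fresh workers
    with concurrent.futures.ProcessPoolExecutor() as executor:
        # executor.map fans the calls out across worker processes and yields results in input order
        for n, prime in zip(numbers, executor.map(is_prime, numbers)):
            print(n, prime)

executor.map is a drop-in replacement for the built-in map, and swapping ProcessPoolExecutor for ThreadPoolExecutor gives you threads instead of processes with the same interface.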
@Slater-Victoroff
Slater-Victoroff / gist:8543588
Last active January 4, 2016 00:39
Simplest Possible Geometric Restricted Boltzmann Machine. Doesn't include training, just random generation and prediction for now.
import numpy as np

def transfer_function(x, y):
    # geometric-mean-style combination of the row products of x and the column products of y
    return np.power(np.prod(x, axis=1)[:, None] * np.prod(y, axis=0), 1. / x.shape[1])

def gnn(c):
    # random weight matrices for consecutive layer sizes in c; normalize is not shown in this preview
    return normalize([np.random.random(c[i] * c[i + 1]).reshape((c[i], c[i + 1])) for i in range(len(c) - 1)])

def predict(weights, input_vector, reverse=False):
    current_net = [input_vector] + weights
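
A quick usage sketch (not part of the gist) for the transfer_function defined above; the shapes are hypothetical:

import numpy as np

visible = np.random.random((4, 3))   # 4 samples, 3 visible units, values in (0, 1)
weights = np.random.random((3, 2))   # weight matrix mapping 3 visible units to 2 hidden units
hidden = transfer_function(visible, weights)
print(hidden.shape)                  # (4, 2): one geometric-mean-style activation per sample/hidden-unit pair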
@Slater-Victoroff
Slater-Victoroff / PyMarkov
Last active March 28, 2022 13:55
Arbitrary-ply Markov constructor in Python
# Python 2 code: cPickle and calling .decode() on str are Python 2 idioms
from collections import Counter
import cPickle as pickle
import random
import itertools
import string

def words(entry):
    # lowercase the tokens and drop any non-ASCII characters
    return [word.lower().decode('ascii', 'ignore') for word in entry.split()]

def letters(entry):
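
The preview stops at letters(). As a rough sketch of the same idea, an order-n ("arbitrary ply") Markov model written for Python 3; this is an assumption about the approach, not the gist's actual implementation:

from collections import Counter, defaultdict
import random

def build_markov(tokens, ply=2):
    # map every length-`ply` context to a Counter of the tokens that follow it
    model = defaultdict(Counter)
    for i in range(len(tokens) - ply):
        model[tuple(tokens[i:i + ply])][tokens[i + ply]] += 1
    return model

def generate(model, ply=2, length=20):
    # start from a random context, then repeatedly sample the next token by observed frequency
    context = random.choice(list(model))
    out = list(context)
    for _ in range(length):
        counter = model.get(tuple(out[-ply:]))
        if not counter:
            break
        out.append(random.choices(list(counter), weights=list(counter.values()))[0])
    return out

# usage sketch on a whitespace-tokenised string
model = build_markov("the quick brown fox jumps over the lazy dog the quick red fox".split())
print(" ".join(generate(model)))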
@mblondel
mblondel / sparse_multiclass_numba.py
Last active January 27, 2020 14:58
Sparse Multiclass Classification in Numba!
"""
(C) August 2013, Mathieu Blondel
# License: BSD 3 clause
This is a Numba-based reimplementation of the block coordinate descent solver
(without line search) described in the paper:
Block Coordinate Descent Algorithms for Large-scale Sparse Multiclass
Classification. Mathieu Blondel, Kazuhiro Seki, and Kuniaki Uehara.
Machine Learning, May 2013.
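
The file itself is not shown beyond this docstring. As a heavily simplified, hedged illustration of block coordinate descent on a multiclass linear model (squared loss with an L2 penalty instead of the paper's sparsity-inducing losses, and no Numba), a sketch might look like:

import numpy as np

def block_cd_ridge(X, Y, alpha=1.0, n_iters=10):
    # X: (n_samples, n_features), Y: one-hot targets (n_samples, n_classes).
    # Each "block" is the row of W tied to one feature; we minimise
    # 0.5 * ||Y - X W||^2 + 0.5 * alpha * ||W||^2 one block at a time.
    n_features = X.shape[1]
    W = np.zeros((n_features, Y.shape[1]))
    R = Y - X @ W                      # residual, kept in sync as blocks change
    col_sq = (X ** 2).sum(axis=0)      # per-feature squared norms
    for _ in range(n_iters):
        for j in range(n_features):
            x_j = X[:, j]
            # closed-form minimiser of the objective over row j, all other rows held fixed
            w_new = (x_j @ R + col_sq[j] * W[j]) / (col_sq[j] + alpha)
            R -= np.outer(x_j, w_new - W[j])
            W[j] = w_new
    return W

# usage sketch: 100 random samples, 20 features, 3 classes
W = block_cd_ridge(np.random.randn(100, 20), np.eye(3)[np.random.randint(0, 3, 100)])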