@PetrochukM
PetrochukM / hyperband.py
Last active April 11, 2023 06:39
Here we implement hyperband and successive halving adaptations. We found the original hyperband implementation messy and untested, and we also wanted to adapt it to support model reuse.
"""
We implement additional hyperparameter optimization methods not present in
https://scikit-optimize.github.io/.
Gist: https://gist.github.com/Deepblue129/2c5fae9daf0529ed589018c6353c9f7b
"""
import math
import logging
import random
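For reference, a minimal sketch of the successive-halving loop this gist adapts; the function and parameter names below are illustrative assumptions, not the gist's actual API.
def successive_halving(configs, evaluate, min_budget=1, eta=3):
    # Illustrative sketch, not the gist's implementation: keep the best 1/eta of
    # the configurations at each rung while multiplying the training budget by eta.
    budget = min_budget
    while len(configs) > 1:
        scores = [(evaluate(config, budget), config) for config in configs]
        scores.sort(key=lambda pair: pair[0])  # assumes a lower score is better
        configs = [config for _, config in scores[:max(1, len(configs) // eta)]]
        budget *= eta
    return configs[0]
Model reuse, as mentioned in the description, would correspond to warm-starting evaluate from the previous rung's checkpoint rather than retraining each surviving configuration from scratch.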
@ilblackdragon
ilblackdragon / seq2seq.py
Last active May 22, 2022 21:42
Example of Seq2Seq with Attention using all the latest APIs
import logging
import numpy as np
import tensorflow as tf
from tensorflow.contrib import layers
GO_TOKEN = 0
END_TOKEN = 1
UNK_TOKEN = 2
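These constants are the special token ids the decoder relies on; a small sketch of how such ids are commonly used to build decoder inputs and targets (an assumption about typical preprocessing, not code from this gist):
def make_decoder_arrays(token_ids, max_len):
    # Assumed preprocessing: the decoder input starts with GO, the target ends
    # with END, and both are padded to max_len with END.
    decoder_input = ([GO_TOKEN] + token_ids)[:max_len]
    decoder_target = (token_ids + [END_TOKEN])[:max_len]
    pad = lambda seq: seq + [END_TOKEN] * (max_len - len(seq))
    return np.array(pad(decoder_input)), np.array(pad(decoder_target))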
@teknikqa
teknikqa / lastfm_delete_loved.js
Created May 7, 2017 06:38
Bulk delete Last.FM scrobbles & loved tracks
// On the Last.FM website, go to the page that lists the tracks you have loved.
// Open Chrome DevTools (or the console in Firefox or any modern browser that has a built-in JavaScript console)
// and run the following command.
// It clicks all the delete buttons on the page and then reloads the page.
jQuery('.love-button--loved').each(function(_, b) {
b.click();
});
location.reload();
@noname01
noname01 / tf_seq2seq_single_str_inference.py
Created April 30, 2017 18:25
Quick hack for loading a seq2seq model and running inference via feed_dict.
from pydoc import locate
import tensorflow as tf
import numpy as np
from seq2seq import tasks, models
from seq2seq.training import utils as training_utils
from seq2seq.tasks.inference_task import InferenceTask, unbatch_dict
class DecodeOnce(InferenceTask):
'''
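The feed_dict part of the hack follows the standard TF1 inference pattern; a generic sketch (the placeholder and tensor names here are assumptions, not the gist's actual graph):
def infer_once(session, source_tokens_ph, source_len_ph, predictions_op, tokens):
    # Generic TF1 feed_dict inference; the placeholder/tensor arguments are
    # illustrative, not the names used by this gist.
    feed_dict = {
        source_tokens_ph: [tokens],      # batch of one tokenized source string
        source_len_ph: [len(tokens)],    # its length
    }
    return session.run(predictions_op, feed_dict=feed_dict)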
@spitis
spitis / bnlstm.py
Created February 2, 2017 03:05
Batch normalized LSTM Cell for Tensorflow
"""adapted from https://github.com/OlavHN/bnlstm to store separate population statistics per state"""
import tensorflow as tf, numpy as np
RNNCell = tf.nn.rnn_cell.RNNCell
class BNLSTMCell(RNNCell):
'''Batch normalized LSTM as described in arxiv.org/abs/1603.09025'''
def __init__(self, num_units, is_training_tensor, max_bn_steps, initial_scale=0.1, activation=tf.tanh, decay=0.95):
"""
* max_bn_steps is the maximum number of steps for which to store separate population statistics
"""
@greydanus
greydanus / dynamic_plotting.py
Created October 14, 2016 18:20
Dynamic plotting for matplotlib
"Dynamic plotting in matplotlib. Copy and paste into a Jupyter notebook."
# written October 2016 by Sam Greydanus
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
import time
def plt_dynamic(x, y, ax, colors=['b']):
for color in colors:
ax.plot(x, y, color)
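A minimal usage sketch for the helper above inside a notebook cell (the data, loop, and explicit canvas redraw below are illustrative additions; the preview is truncated):
fig, ax = plt.subplots()
ax.set_xlim(0, 100)
ax.set_ylim(-1.5, 1.5)
xs, ys = [], []
for i in range(100):
    xs.append(i)
    ys.append(np.sin(i / 10.0))
    plt_dynamic(xs, ys, ax)
    fig.canvas.draw()      # redraw so the notebook figure updates in place
    time.sleep(0.05)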
@danijar
danijar / blog_tensorflow_scope_decorator.py
Last active January 17, 2023 01:58
TensorFlow Scope Decorator
# Working example for my blog post at:
# https://danijar.github.io/structuring-your-tensorflow-models
import functools
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
def doublewrap(function):
"""
A decorator decorator, allowing the decorator to be used without
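The post pairs doublewrap with a lazy-property decorator so each model method builds its graph once inside its own variable scope; a condensed sketch of that pattern (simplified, without the optional-argument handling doublewrap adds):
def define_scope(function):
    # Condensed sketch of the blog post's pattern: cache the method's result and
    # build its ops inside a tf.variable_scope named after the method.
    attribute = '_cache_' + function.__name__

    @property
    @functools.wraps(function)
    def decorator(self):
        if not hasattr(self, attribute):
            with tf.variable_scope(function.__name__):
                setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return decorator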
@seominjoon
seominjoon / tensorflow-install-gpu-user.sh
Last active July 28, 2017 04:59
Installing TensorFlow on a GPU machine with user access only. Assumes the GPU driver, CUDA, and cuDNN are already installed.
# On Ubuntu 14.04 with Titan X (compute capability 5.2)
# Some parts are missing and must be filled in, and some files have to be downloaded manually from the web, so don't run this script as-is.
# Configure CUDA paths
echo export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64" >> ~/.bashrc
echo export CUDA_HOME="/usr/local/cuda" >> ~/.bashrc
source ~/.bashrc
# Set up java
# Dependencies for Bazel
# Download jdk 8
@shagunsodhani
shagunsodhani / Batch Normalization.md
Last active July 25, 2023 18:07
Notes on the paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift"

The Batch Normalization paper describes a method to address several issues that arise when training deep neural networks. It makes normalization a part of the architecture itself and reports a significant reduction in the number of iterations required to train the network.

Issues With Training Deep Neural Networks

Internal Covariate Shift

Covariate shift refers to a change in the input distribution of a learning system. In a deep network, the input to each layer is affected by the parameters of all the preceding layers, so even small changes to those parameters get amplified deeper in the network. The resulting change in the input distribution of the network's internal layers is known as internal covariate shift.

It is well established that networks converge faster if their inputs are whitened (i.e., zero mean, unit variance) and uncorrelated; internal covariate shift leads to just the opposite.
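The batch-norm transform itself is a per-feature normalization over the mini-batch followed by a learned scale and shift; a NumPy sketch of the forward pass (illustrative, with gamma and beta as the paper's learned parameters):
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # x has shape (batch, features): normalize each feature over the mini-batch,
    # then restore representational power with the learned gamma (scale) and beta (shift).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta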

@inverse
inverse / lastfmextra.user.js
Last active February 10, 2016 06:02
Last.fm Extra
// ==UserScript==
// @name Last.fm Extra
// @namespace https://malachisoord.com/
// @version 0.1.3
// @description Provide missing extra functionality to Last.fm
// @author Malachi Soord
// @match http://www.last.fm/*
// @require https://code.jquery.com/jquery-2.2.0.min.js
// @run-at document-idle
// ==/UserScript==