
Brandon Victor Multihuntr

  • La Trobe University
Multihuntr / accumulate_grads.py
Last active August 10, 2018 08:25
Accumulating gradients to reduce memory requirement per forward pass (using MNIST)
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

def simple_model(input):
    # Constant initializers ensure the model is instantiated identically
    # across runs, so the two training schemes can be compared directly.
    hidden_initializer = tf.constant_initializer(np.random.uniform(-0.025, 0.025, size=[784, 100]))
    hidden = tf.layers.dense(input, 100, kernel_initializer=hidden_initializer)
    out_initializer = tf.constant_initializer(np.random.uniform(-0.025, 0.025, size=[100, 10]))
    return tf.layers.dense(hidden, 10, kernel_initializer=out_initializer)
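The idea behind the gist can be sketched framework-agnostically: because gradients are linear in the loss terms, summing per-micro-batch gradients gives exactly the full-batch gradient, while only one micro-batch ever needs to be in memory. A minimal NumPy sketch (not the gist's TF code; `mse_grad` is a hypothetical helper for a linear least-squares model):

```python
import numpy as np

def mse_grad(w, x, y):
    # Gradient of 0.5 * ||x @ w - y||^2 with respect to w, summed over the batch.
    return x.T @ (x @ w - y)

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 5))
y = rng.normal(size=(64, 1))
w = rng.normal(size=(5, 1))

# Full-batch gradient in one pass.
full = mse_grad(w, x, y)

# Same gradient, accumulated over 4 micro-batches of 16 rows each.
acc = np.zeros_like(w)
for xb, yb in zip(np.split(x, 4), np.split(y, 4)):
    acc += mse_grad(w, xb, yb)

assert np.allclose(full, acc)  # identical update, smaller peak memory per pass
```

In TensorFlow the same effect is obtained by storing the summed gradients in non-trainable variables and applying them with the optimizer only every few steps.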
Multihuntr / minimal_nvvl_tests.py
Created June 11, 2018 03:45
Minimal examples of using NVIDIA/nvvl
# Simplest case (note: VideoDataset.__getitem__ returns frames on the CPU)
import nvvl
d = nvvl.VideoDataset(['prepared.mp4'], 3)
fr = d[0]
print(type(fr))
print(fr.shape)

# Custom processing (again, frames come back on the CPU)
import nvvl
d = nvvl.VideoDataset(['prepared.mp4'], 3, processing={'a': nvvl.ProcessDesc()})
Multihuntr / random_queue.py
Created October 20, 2017 07:55
Blocking Thread-safe Random Queue
import queue
import random

class RandomQueue(queue.Queue):
    """A queue.Queue whose get() returns items in uniformly random order.

    Overriding only _put is enough: queue.Queue handles all locking and
    blocking around it, so the subclass stays thread-safe.
    """
    def _put(self, item):
        # Insert at a random position; randint's upper bound is inclusive,
        # so the end of the deque is a valid slot too.
        i = random.randint(0, len(self.queue))
        self.queue.insert(i, item)
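A quick single-threaded sanity check (the class is repeated here so the snippet runs standalone): every item put in comes back out exactly once, just in shuffled order.

```python
import queue
import random

class RandomQueue(queue.Queue):
    # Same class as above: random insertion position, Queue handles locking.
    def _put(self, item):
        i = random.randint(0, len(self.queue))
        self.queue.insert(i, item)

q = RandomQueue()
for n in range(10):
    q.put(n)

drained = [q.get() for _ in range(10)]
print(sorted(drained))  # all ten items survive the shuffle
```

Because only `_put` changes, the blocking behaviour of `put(block=True)`/`get(block=True)` and the `maxsize` semantics are untouched.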