Sid Sampangi (ssampang), San Francisco, CA
>>> theano.test()
Theano version 0.7.0.dev-dab522df32a84085cb9f60beaf012ae010a07b82
theano is installed in /home/sid/.local/lib/python2.7/site-packages/theano
NumPy version 1.8.2
NumPy is installed in /usr/lib/python2.7/dist-packages/numpy
Python version 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2]
nose version 1.3.1
/home/sid/.local/lib/python2.7/site-packages/theano/misc/pycuda_init.py:34: UserWarning: PyCUDA import failed in theano.misc.pycuda_init
warnings.warn("PyCUDA import failed in theano.misc.pycuda_init")
Using gpu device 0: GeForce GTX TITAN X (CNMeM is disabled)
Training CNN with regularization lambda 0.001
Cross validation fold 1
Epoch 1 of 10 took 1.046s training loss: 0.656864 validation loss: 6.760141 validation accuracy:0.00 %
Epoch 2 of 10 took 1.015s training loss: 1.034537 validation loss: 1.555634 validation accuracy:0.00 %
Epoch 3 of 10 took 1.014s training loss: 0.873159 validation loss: 1.305256 validation accuracy:0.00 %
Epoch 4 of 10 took 1.015s training loss: 0.810607 validation loss: 1.400610 validation accuracy:0.00 %
Epoch 5 of 10 took 1.016s training loss: 0.811552 validation loss: 1.254042 validation accuracy:0.00 %
Epoch 6 of 10 took 1.015s training loss: 0.759900 validation loss: 1.320120 validation accuracy:0.00 %
Epoch 7 of 10 took 1.016s training loss: 0.752771 validation loss: 1.223576 validation accuracy:0.41 %
Epoch 8 of 10 took 1.017s training loss: 0.728501 validation loss: 1.097762 validation accuracy:0.86 %
ssampang / HBNet.py
Last active December 29, 2015 11:37
import cPickle, time, random, operator, os, math, numpy as np
import lasagne, theano, theano.tensor as T

#learning_rate = 0.01
#momentum = 0.9

class HBNet:
    batch_size = 256

    def reinit_params(self):  # method name is illustrative; the gist excerpt omits the def line
        # Re-draw every parameter with Glorot-uniform initialization.
        # GlorotUniform needs at least two dimensions, so 1-D parameters
        # (biases) are padded to (n, 1) for sampling, then squeezed back.
        old = lasagne.layers.get_all_param_values(self.output_layer)
        new = []
        for layer in old:
            shape = layer.shape
            if len(shape) < 2:
                shape = (shape[0], 1)
            W = lasagne.init.GlorotUniform()(shape)
            if W.shape != layer.shape:
                W = np.squeeze(W, axis=1)
            new.append(W)
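The shape-padding trick above can be sketched with numpy alone, assuming lasagne's Glorot-uniform formula (uniform on (-a, a) with a = sqrt(6 / (fan_in + fan_out))); `glorot_uniform` and `reinit_like` are illustrative names, not part of the gist:

```python
import numpy as np

def glorot_uniform(shape, rng=np.random):
    # Glorot/Xavier uniform for a 2-D shape: U(-a, a),
    # a = sqrt(6 / (fan_in + fan_out)), matching lasagne.init.GlorotUniform.
    fan_in, fan_out = shape[0], shape[1]
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=shape)

def reinit_like(params):
    # Re-draw each parameter array; pad 1-D biases to (n, 1) so the
    # fan-in/fan-out formula applies, then squeeze back to the original shape.
    new = []
    for p in params:
        shape = p.shape
        if len(shape) < 2:
            shape = (shape[0], 1)
        W = glorot_uniform(shape)
        if W.shape != p.shape:
            W = np.squeeze(W, axis=1)
        new.append(W)
    return new
```

Each re-drawn array keeps the shape of the parameter it replaces, so the result can be passed straight back to `lasagne.layers.set_all_param_values`.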
ssampang / VRClassReward.lua
Created April 18, 2016 00:05
The primary modification is lines 37-42.
------------------------------------------------------------------------
--[[ VRClassReward ]]--
-- Variance reduced classification reinforcement criterion.
-- input : {class prediction, baseline reward}
-- Reward is 1 for success, Reward is 0 otherwise.
-- reward = scale*(Reward - baseline) where baseline is 2nd input element
-- Note : for RNNs with R = 1 for last step in sequence, encapsulate it
-- in nn.ModuleCriterion(VRClassReward, nn.SelectTable(-1))
------------------------------------------------------------------------
local VRClassReward, parent = torch.class("nn.VRClassReward", "nn.Criterion")
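The reward rule described in the comment block can be sketched in numpy terms (the function and argument names below are illustrative, not the Torch API):

```python
import numpy as np

def vr_class_reward(pred, target, baseline, scale=1.0):
    # Reward is 1 for a correct class prediction, 0 otherwise.
    # Subtracting the learned baseline (the criterion's 2nd input element)
    # yields the variance-reduced REINFORCE signal:
    #   reward = scale * (Reward - baseline)
    raw = (np.asarray(pred) == np.asarray(target)).astype(float)
    return scale * (raw - np.asarray(baseline))
```

The baseline does not change the expected gradient, only its variance, which is why it can be a freely learned scalar or per-example prediction.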
done = 0;
repeat i : 4 {
    if (done == 0 and ((x & (1 << i)) == 0)) {
        x = x | (1 << i);
    }
    else {
        done = 1;
    }
}
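The loop's effect can be sketched in Python (the function name is illustrative): it sets bits of x from bit 0 upward until it reaches a bit that is already 1, examining at most 4 bits.

```python
def fill_trailing_zeros(x, width=4):
    # Mirror the loop above: walk bits from the lowest upward, setting each
    # clear bit, and stop at the first bit that is already set.
    for i in range(width):
        if x & (1 << i) == 0:
            x |= 1 << i
        else:
            break
    return x
```

For example, 0b1000 becomes 0b1111, while 0b0101 is unchanged because bit 0 is already set.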
set nu
" tabs
set ts=2
set shiftwidth=2
set expandtab
set autoindent
set smartindent
set smarttab
1: ~/.config/nvim/init.vim
2: ~/.vimrc
3: ~/.config/nvim/autoload/plug.vim
4: /usr/share/nvim/runtime/ftoff.vim
5: /usr/share/nvim/runtime/filetype.vim
6: /usr/share/nvim/runtime/ftplugin.vim
7: /usr/share/nvim/runtime/indent.vim
8: /usr/share/nvim/runtime/syntax/syntax.vim
9: /usr/share/nvim/runtime/syntax/synload.vim
sudo tee /etc/sudoers.d/$USER <<END
$USER $(hostname) = NOPASSWD: /usr/bin/apt-get, /usr/bin/chsh, /usr/sbin/adduser, /bin/rm
END
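A safer variant of the step above, as a sketch: write the drop-in to a temporary file and let `visudo -c` validate it before installing, since a syntax error under /etc/sudoers.d can break sudo entirely. The temp-file dance is an assumption, not part of the original script.

```shell
tmp=$(mktemp)
printf '%s\n' "$USER $(hostname) = NOPASSWD: /usr/bin/apt-get, /usr/bin/chsh, /usr/sbin/adduser, /bin/rm" > "$tmp"
# Only install the file if visudo accepts its syntax.
if sudo visudo -c -f "$tmp"; then
    sudo install -m 0440 -o root -g root "$tmp" /etc/sudoers.d/$USER
fi
rm -f "$tmp"
```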
sudo apt-get -y update
sudo apt-get -y upgrade
########## git ##########
sudo apt-get -y install git
Sequential/Incremental reconstruction
Perform incremental SfM (Initial Pair Essential + Resection).
- Features Loading -
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Track building