Marco Ciccone (marcociccone)

@marcociccone
marcociccone / edit_distance_marginal.py
Created November 2, 2017 21:45 — forked from norouzi/edit_distance_marginal.py
Reward Augmented Maximum Likelihood (RAML; https://arxiv.org/pdf/1609.00150.pdf) -- Python code snippet to compute marginal distribution of different #edits for a given sequence length, temperature, and vocab size.
import scipy.misc as misc
import numpy as np
len_target = 20
v = 60  # Vocabulary size
T = .9  # Temperature
max_edits = len_target
x = np.zeros(max_edits)
# The preview truncates the loop body; the body below is a reconstruction
# assuming the substitution-only (Hamming) approximation used for sampling
# in the RAML paper: C(len_target, e) * (v - 1)^e sequences at e edits,
# weighted by the payoff exp(-e / T), accumulated in log space.
for n_edits in range(max_edits):
    x[n_edits] = (np.log(misc.comb(len_target, n_edits))
                  + n_edits * np.log(v - 1) - n_edits / T)
# Normalize in log space to obtain the marginal distribution over #edits.
p = np.exp(x - misc.logsumexp(x))
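In RAML, augmented training targets are drawn in proportion to this marginal. A minimal usage sketch, assuming the variables computed above (the sampling call is not part of the original gist):
n_sampled = np.random.choice(max_edits, p=p)  # number of edits to apply to one target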
@marcociccone
marcociccone / gist:ab2364acdeaccdf3b99adde9f2bbc5fb
Created October 1, 2017 18:07
Install Vim 8 with Python, Python 3, Ruby and Lua support on Ubuntu 16.04
sudo apt-get remove --purge vim vim-runtime vim-gnome vim-tiny vim-gui-common
sudo apt-get install liblua5.1-dev luajit libluajit-5.1 python-dev ruby-dev libperl-dev libncurses5-dev libatk1.0-dev libx11-dev libxpm-dev libxt-dev
#Optional: so vim can be uninstalled again via `dpkg -r vim`
sudo apt-get install checkinstall
sudo rm -rf /usr/local/share/vim /usr/bin/vim
cd ~
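The preview cuts off after the cleanup step. A sketch of the usual continuation for this kind of build-from-source recipe (repository URL, configure flags, and install step are assumptions, not the gist's verbatim commands):
git clone https://github.com/vim/vim.git
cd vim
./configure --with-features=huge \
            --enable-pythoninterp=yes \
            --enable-python3interp=yes \
            --enable-rubyinterp=yes \
            --enable-luainterp=yes \
            --prefix=/usr/local
make
sudo checkinstall  # registers the build with dpkg, so `dpkg -r vim` can remove it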
@marcociccone
marcociccone / gist:064119f9b0c85022c3e94246e1a5fd76
Created September 22, 2016 07:55 — forked from karpathy/gist:f3ee599538ff78e1bbe9
Batched L2 Normalization Layer for Torch nn package
--[[
This layer expects an [n x d] Tensor and normalizes each
row to have unit L2 norm.
]]--
local L2Normalize, parent = torch.class('nn.L2Normalize', 'nn.Module')
function L2Normalize:__init()
  parent.__init(self)
end
function L2Normalize:updateOutput(input)
  -- The preview truncates here; this body is a minimal reconstruction,
  -- not the gist's verbatim code: divide each row by its L2 norm.
  assert(input:dim() == 2, 'expected an [n x d] Tensor')
  local norm = torch.norm(input, 2, 2):expandAs(input)
  self.output = torch.cdiv(input, norm)
  return self.output
end
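A quick sanity check, assuming the reconstructed forward pass above:
require 'nn'
local m = nn.L2Normalize()
local y = m:forward(torch.randn(4, 3))
print(torch.norm(y, 2, 2))  -- each row norm should print as 1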
@marcociccone
marcociccone / multiple_learning_rates.lua
Created September 22, 2016 07:46 — forked from farrajota/multiple_learning_rates.lua
Example code for setting a different learning rate per layer. Note that :parameters() returns the weights and biases of a given layer as separate, consecutive tensors, so a network with N layers yields a table of N*2 tensors, where the i-th and (i+1)-th tensors belong to the same layer.
-- multiple learning rates per network. Optimizes two copies of a model network and checks if the optimization steps (2) and (3) produce the same weights/parameters.
require 'torch'
require 'nn'
require 'optim'
torch.setdefaulttensortype('torch.FloatTensor')
-- (1) Define a model for this example.
local model = nn.Sequential()
model:add(nn.Linear(10,20))
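The preview stops after the first layer. One way to realize the per-layer rates the description refers to is optim.sgd's learningRates vector, which scales each gradient coordinate individually. A minimal sketch (the second layer and the rate values are illustrative, not the gist's):
model:add(nn.Linear(20, 1))
local params, gradParams = model:getParameters()
-- Build one learning-rate entry per parameter coordinate, layer by layer.
-- :parameters() lists weight and bias tensors separately, so consecutive
-- pairs of tensors belong to the same layer.
local lrs = torch.Tensor(params:size(1))
local layerRates = {1e-2, 1e-3}  -- illustrative per-layer rates
local offset, idx = 1, 0
for _, p in ipairs(model:parameters()) do
  idx = idx + 1
  lrs:narrow(1, offset, p:nElement()):fill(layerRates[math.ceil(idx / 2)])
  offset = offset + p:nElement()
end
local optimConfig = {learningRate = 1, learningRates = lrs}
Calling optim.sgd(feval, params, optimConfig) then takes steps whose effective rate for coordinate i is learningRate * lrs[i].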
@marcociccone
marcociccone / .gitignore
Created November 8, 2015 16:57 — forked from rbochet/.gitignore
.gitignore file for LaTeX projects on Mac
# LaTeX files
*.aux
*.glo
*.idx
*.log
*.toc
*.ist
*.acn
*.acr
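The preview truncates the list; typical further entries for a LaTeX project on a Mac (an assumption about what follows, not the gist's remaining lines):
*.bbl
*.blg
*.out
*.synctex.gz
# Mac filesystem cruft
.DS_Store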