Clement Farabet (clementfarabet)
----------------------------------------------------------------------
-- An implementation of SGD adapted with features of Nesterov's
-- Accelerated Gradient method, based on the paper
-- On the Importance of Initialization and Momentum in Deep Learning
-- Sutskever et al., ICML 2013
--
-- ARGS:
-- opfunc : a function that takes a single input (X), the point of
-- evaluation, and returns f(X) and df/dX
-- x : the initial point
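The docstring above is truncated, but the Nesterov-style update it describes (from Sutskever et al.) evaluates the gradient at a look-ahead point before applying the velocity. A minimal sketch in plain Lua for the scalar case, with hypothetical names (`nag_step`, `lr`, `mu`) not taken from the gist:

```lua
-- One Nesterov Accelerated Gradient step (scalar sketch).
-- opfunc returns f(x) and df/dx, lr is the learning rate, mu the momentum.
local function nag_step(opfunc, x, v, lr, mu)
   -- evaluate the gradient at the look-ahead point x + mu*v
   local _, dfdx = opfunc(x + mu * v)
   v = mu * v - lr * dfdx   -- update velocity with the look-ahead gradient
   return x + v, v          -- apply the new velocity to the parameters
end

-- usage: minimize f(x) = x^2, whose gradient is 2x
local f = function(x) return x * x, 2 * x end
local x, v = 5, 0
for i = 1, 100 do
   x, v = nag_step(f, x, v, 0.1, 0.9)
end
-- x converges toward the minimum at 0
```

The look-ahead evaluation at `x + mu*v` is what distinguishes NAG from classical momentum, which evaluates the gradient at `x` itself.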
@clementfarabet
clementfarabet / laplace_torch.lua
Created February 7, 2013 07:18
Cleaned/optimized version of this original Gist: https://gist.github.com/dfarmer/4727094
require 'torch'

function calc(N, Niter)
   -- init conditions: top row held at 1, everything else at 0
   local u = torch.zeros(N, N)
   u[1] = 1
   -- Assume u is square
   local nx, ny = u:size(1), u:size(2)
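The gist is cut off before the iteration loop, but a Laplace solver set up this way typically relaxes the grid with Jacobi iteration: each interior point is replaced by the average of its four neighbours, `Niter` times. A sketch in plain Lua tables (hypothetical `jacobi_step`; the original uses torch tensor operations instead):

```lua
-- One Jacobi relaxation step for the Laplace equation on an N x N grid.
-- Boundary rows and columns are left untouched (fixed boundary conditions).
local function jacobi_step(u, N)
   local new = {}
   for i = 1, N do
      new[i] = {}
      for j = 1, N do new[i][j] = u[i][j] end
   end
   for i = 2, N - 1 do
      for j = 2, N - 1 do
         -- average of the four nearest neighbours
         new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
      end
   end
   return new
end

-- init conditions matching the gist: first row = 1, rest = 0
local N = 8
local u = {}
for i = 1, N do
   u[i] = {}
   for j = 1, N do u[i][j] = (i == 1) and 1 or 0 end
end
for iter = 1, 200 do
   u = jacobi_step(u, N)
end
-- the solution decays smoothly from the heated top row toward the cold edges
```

Each sweep only propagates information one grid cell, so the number of iterations needed grows with the grid size; the torch version in the gist vectorizes the same neighbour average over whole sub-tensors rather than looping per element.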