@dfm
Created December 1, 2012 02:16
Stochastic gradient descent example
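
The code below recovers a vector of values I from the matrix of their pairwise differences, D_{ij} = I_i - I_j, by minimizing the least-squares objective

    f(I) = \sum_{i,j} (D_{ij} - I_i + I_j)^2 .

Each inner step nudges a single coordinate along its negative partial derivative,

    I_i \leftarrow I_i + \eta \sum_j (D_{ij} - I_i + I_j) ,

with the constant factor from the gradient absorbed into the learning rate \eta. Shuffling the order in which the coordinates are visited on every pass is what makes the scheme stochastic.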
import numpy as np

# Generate the "true" values and the pairwise differences between them.
N = 5000
I_true = np.random.randn(N)
D = I_true[:, None] - I_true[None, :]

I0 = np.zeros_like(I_true)  # Initial guess.
eta = 1.0 / N  # Learning rate.
tol = 1.25e-11  # Error tolerance.

inds = np.arange(N)
for n in range(500):
    # Visit the coordinates in a freshly shuffled order on each pass.
    np.random.shuffle(inds)
    for i in inds:
        # Gradient step on the squared residuals involving coordinate i.
        I0[i] += eta * np.sum(D[i] - I0[i] + I0)
    # Total squared residual of the difference matrix, scaled by 1/N.
    error = np.sum((D - I0[:, None] + I0[None, :]) ** 2) / N
    print(error)
    if error < tol:
        break

# Mean squared error of the recovered values relative to the truth.
print(np.sum((I0 - I_true) ** 2) / N)
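
One caveat: pairwise differences determine I only up to an additive constant, so the final comparison against I_true can include a common offset that the algorithm has no way to resolve. A minimal shift-invariant check is to center both vectors before measuring the residual:

# The differences D fix I only up to an additive constant, so remove the
# mean from both vectors before comparing.
print(np.sum(((I0 - I0.mean()) - (I_true - I_true.mean())) ** 2) / N)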