@amankharwal
Created February 12, 2021 09:41
import numpy as np

# Stochastic Gradient Descent for linear regression
n_epochs = 50
t0, t1 = 5, 50  # learning schedule hyperparameters
m = 100  # number of training instances

# Generate synthetic data: y = 4 + 3x + Gaussian noise
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
X_b = np.c_[np.ones((100, 1)), X]  # add x0 = 1 to each instance

def learning_schedule(t):
    return t0 / (t + t1)

theta = np.random.randn(2, 1)  # random initialization

for epoch in range(n_epochs):
    for i in range(m):
        # Pick one training instance at random
        random_index = np.random.randint(m)
        xi = X_b[random_index:random_index + 1]
        yi = y[random_index:random_index + 1]
        # Gradient of the MSE cost on that single instance
        gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
        # Decay the learning rate as training progresses
        eta = learning_schedule(epoch * m + i)
        theta = theta - eta * gradients

print(theta)
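
# A quick sanity check (an addition, not part of the original gist): compare
# the SGD estimate with the closed-form normal-equation solution on the same
# data. Both should land near the true parameters [4, 3] used to generate y.
theta_exact = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
print(theta_exact)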