@denzilc
Created November 1, 2011 21:57
Gradient Descent for the Machine Learning course at Stanford
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y);                   % number of training examples
J_history = zeros(num_iters, 1);
theta_len = length(theta);       % number of parameters (used by the loop version below)

for iter = 1:num_iters

    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.
    %

    % Vectorized update; written as "theta = theta - ..." rather than
    % "theta -= ..." so it runs in MATLAB as well as Octave
    theta = theta - (alpha/m) * (X' * (X*theta - y));

    % Equivalent element-wise version, kept for reference:
    % temp_theta = theta;
    % for j = 1:theta_len
    %     value = 0;
    %     for i = 1:m
    %         value = value + (X(i,:) * theta - y(i,:)) * X(i,j);
    %     end
    %     temp_theta(j,:) = temp_theta(j,:) - ((alpha/m) * value);
    % end
    % theta = temp_theta;
    % ============================================================

    % Save the cost J in every iteration
    J_history(iter) = computeCost(X, y, theta);

end

end
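
A minimal usage sketch (not part of the original gist): it fits a line to toy data, assuming a computeCost.m with the course's half-mean-squared-error definition, a stand-in for which is shown in the comments.

% Usage sketch -- an assumption for illustration, not part of the gist.
% It relies on a computeCost.m along the lines of:
%     function J = computeCost(X, y, theta)
%         m = length(y);
%         J = sum((X * theta - y) .^ 2) / (2 * m);   % half mean squared error
%     end
X = [ones(5, 1), (1:5)'];    % design matrix: intercept column plus one feature
y = 2 * (1:5)';              % targets lying on the line y = 2x
theta = zeros(2, 1);         % start from the zero vector
[theta, J_history] = gradientDescent(X, y, theta, 0.1, 1500);
% theta should approach [0; 2], and J_history should decrease toward 0.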
@AnkitaBh

When I try to implement my own version, my theta ends up as all zeros; has anyone else had that issue?
