@SimonAB
Created February 20, 2020 22:10
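# The gradient update in `lin_reg_grad_descent` below implies a mean squared
# error cost with the conventional 1/(2m) scaling. `mean_squared_cost` is
# called but not defined in this gist; a minimal sketch consistent with that
# update rule might look like:
function mean_squared_cost(X, y, θ)
    m = length(y)
    residuals = X * θ - y
    return (1 / (2m)) * sum(residuals .^ 2)
end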
"""
lin_reg_grad_descent(X, y, α, fit_intercept=true, n_iter=2000)
This function uses gradient descent algorithm to find the best weights (θ)
that minimises the mean squared loss between the predictions that the model
generates and the target vector (y).
A tuple of 1D vectors representing the weights (θ)
and a history of loss at each iteration (𝐉) is returned.
"""
function lin_reg_grad_descent(X, y, α, fit_intercept=true, n_iter=2000)
    # Initialise some useful values
    m = length(y)   # number of training examples
    if fit_intercept
        # Prepend a column of 1s if fit_intercept is specified;
        # otherwise assume the user has already added a constant column
        constant = ones(m, 1)
        X = hcat(constant, X)
    end
    # Use the number of features to initialise the weight vector θ
    n = size(X, 2)
    θ = zeros(n)
    # Initialise the cost history vector based on the number of iterations
    𝐉 = zeros(n_iter)
    for iter in 1:n_iter
        pred = X * θ
        # Calculate and record the cost at each iteration
        𝐉[iter] = mean_squared_cost(X, y, θ)
        # Batch gradient descent update of θ
        θ = θ - ((α / m) * X') * (pred - y)
    end
    return (θ, 𝐉)
end
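# A quick usage sketch on synthetic data (the data, the learning rate
# α = 0.01, and the variable names below are illustrative assumptions,
# not part of the original gist):
using Random
Random.seed!(42)
X_demo = randn(100, 2)                                    # 100 examples, 2 features
y_demo = 3 .+ X_demo * [1.5, -2.0] .+ 0.1 .* randn(100)   # intercept 3, weights [1.5, -2.0]
θ_fit, J_hist = lin_reg_grad_descent(X_demo, y_demo, 0.01)
# θ_fit should approach [3.0, 1.5, -2.0] once the cost history J_hist flattens out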