Gradient Descent implementation: finding the minimum of a loss function (loss function optimization)

The loss function L(W) of an ML model is given below:

L(W) = L(w₁, w₂) = 0.75(w₁ − 2)² + 0.35(w₂ − 4)², where W is the vector of model parameters.

Use the gradient descent algorithm to minimise the loss function and find the optimum value of W. Initial value: W = [10, 10].

Loss function:

    L(w₁, w₂) = 0.75(w₁ − 2)² + 0.35(w₂ − 4)²
    L(w₁, w₂) is a scalar quantity.
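
Note that both terms are non-negative quadratics, so the loss is minimised at w₁ = 2, w₂ = 4, where L = 0; gradient descent starting from W = [10, 10] should converge towards this point.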

Gradient of the loss function:

    ∇L = [ ∂L ⁄ ∂w₁ , ∂L ⁄ ∂w₂ ]ᵀ
    ∇L = [ 0.75 × 2 × (w₁ − 2), 0.35 × 2 × (w₂ − 4) ]ᵀ = [ 1.5(w₁ − 2), 0.7(w₂ − 4) ]ᵀ
    ∇L is a vector.
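
As a quick sanity check (a minimal sketch, not part of the original gist), the analytical gradient can be compared against a central finite-difference approximation at the starting point; both should give roughly [12.0, 4.2]:

    loss_f = lambda w1, w2: 0.75*(w1-2)**2 + 0.35*(w2-4)**2   # same loss as above
    h = 1e-6
    w1, w2 = 10.0, 10.0
    d_w1 = (loss_f(w1+h, w2) - loss_f(w1-h, w2)) / (2*h)   # numerical ∂L/∂w₁ ≈ 1.5*(10-2) = 12.0
    d_w2 = (loss_f(w1, w2+h) - loss_f(w1, w2-h)) / (2*h)   # numerical ∂L/∂w₂ ≈ 0.7*(10-4) = 4.2
    print(d_w1, d_w2)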

Hyperparameters:

    Learning rate η = 0.1
    Number of iterations (epochs) N = 100
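
With this learning rate the iteration converges: for a separable quadratic, each coordinate's error shrinks by a constant factor |1 − η × (second derivative)| per step, and both factors are below 1 here. A minimal illustrative check (not part of the original gist):

    eta = 0.1
    factors = [abs(1 - eta*1.5), abs(1 - eta*0.7)]   # second derivatives of L are 1.5 and 0.7
    print(factors)   # ≈ [0.85, 0.93]: both below 1, so both coordinates converge geometrically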

Gradient Descent Algorithm

  1. If the iteration counter exceeds N, go to step 7; otherwise go to step 2.
  2. Compute the loss gradient.
  3. Compute the new W: new W = old W − learning rate × loss gradient.
  4. Compute the loss for the new W.
  5. Log the loss and the iteration counter value.
  6. Go to step 1.
  7. Return W and the loss.
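
The NumPy implementation below follows these steps, with one extra early-stopping check that breaks out of the loop once the loss stops changing between iterations.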
import numpy as np
### Loss function : L(w₁,w₂) = 0.75(w₁ −2)² + 0.35(w₂−4)²
### Gradient of Loss function : ∇ L = [0.75×2×(w₁ −2), 0.35×2×(w₂−4)]ᵀ
### Hyper Parameters : Learning Rate η = 0.1 , iteration or epoch N = 100
loss_f = lambda w1, w2: 0.75*(w1-2)**2 + 0.35*(w2-4)**2     # loss function L(w₁, w₂)
grad_f = lambda w1, w2: np.array([1.5*(w1-2), 0.7*(w2-4)])  # gradient ∇L
eta, epoch = 0.1, 100                                       # learning rate η and iteration count N
w = np.array([10.0, 10.0])                                  # initial value of W
loss = 0                                                    # previous-loss placeholder for early stopping
for i in range(epoch):
    grad = grad_f(w[0], w[1])                      # step 2. Compute loss gradient
    w = w - eta*grad                               # step 3. Compute new W vector
    loss_prev = loss
    loss = loss_f(w[0], w[1])                      # step 4. Compute loss for new W
    print(f'Iteration {i+1:03}, Loss {loss:.4f}')  # step 5. Log loss & iteration counter
    if abs(loss - loss_prev) < 0.0001:             # Extra step: break the loop early when
        break                                      # the loss stops changing.
print()
print(f'Final Weights {w}, Loss {loss:.4f}')       # Log final weights and loss.
print(f'Final Loss {loss:.4f} & Max Iteration {i+1}')
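
As a cross-check (a sketch, not part of the original gist), the same minimum can be obtained with SciPy's general-purpose optimiser; it should land on roughly the same point the loop above converges towards, W ≈ [2, 4]:

    import numpy as np
    from scipy.optimize import minimize

    result = minimize(lambda w: 0.75*(w[0]-2)**2 + 0.35*(w[1]-4)**2, x0=np.array([10.0, 10.0]))
    print(result.x)   # ≈ [2. 4.], the analytical minimum of the loss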