@pat-alt
Created November 15, 2021 16:27
Loss function and its derivatives for Bayesian Logistic Regression with Laplace Approximation.
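Read off the code below, the three functions implement the negative log joint of a logistic likelihood under a Gaussian prior $w \sim \mathcal{N}(w_0, H_0^{-1})$, together with its gradient and Hessian (the gist itself gives no prose beyond the title, so this reading is inferred from the code):

\ell(w) = -\sum_{n=1}^{N} \bigl[ y_n \log \mu_n + (1 - y_n) \log(1 - \mu_n) \bigr] + \tfrac{1}{2} (w - w_0)^\top H_0 (w - w_0), \qquad \mu_n = \sigma(w^\top x_n),

\nabla \ell(w) = \sum_{n=1}^{N} (\mu_n - y_n)\, x_n + H_0 (w - w_0),

\nabla^2 \ell(w) = \sum_{n=1}^{N} \mu_n (1 - \mu_n)\, x_n x_n^\top + H_0.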
# Loss:
function 𝓁(w,w_0,H_0,X,y)
N = length(y)
D = size(X)[2]
μ = sigmoid(w,X)
Δw = w-w_0
l = - ∑( y[n] * log(μ[n]) + (1-y[n]) * log(1-μ[n]) for n=1:N) + 1/2 * Δw'H_0*Δw
return l
end
# Gradient:
function ∇𝓁(w,w_0,H_0,X,y)
N = length(y)
μ = sigmoid(w,X)
Δw = w-w_0
g = ∑((μ[n]-y[n]) * X[n,:] for n=1:N)
return g + H_0*Δw
end
# Hessian:
function ∇∇𝓁(w,w_0,H_0,X,y)
N = length(y)
μ = sigmoid(w,X)
H = ∑(μ[n] * (1-μ[n]) * X[n,:] * X[n,:]' for n=1:N)
return H + H_0
end