Patrick Altmeyer (pat-alt)

pat-alt / world_model_projection.jl
Last active January 26, 2024 07:34
Randomly projecting features that are highly predictive of geographical coordinates and then running probes on the projections. Inspired by this tweet: https://x.com/savvyRL/status/1709698089500680264?s=20
using CounterfactualExplanations.Data: load_mnist
using CSV
using DataFrames
using Flux
using GMT
using Images
using LinearAlgebra
using MLJBase
using MLJModels
using OneHotArrays
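
The preview shows only the imports. As a self-contained illustration of the idea (random projection, then a linear probe), here is a minimal sketch in plain Julia; the dimensions, stand-in data, and ridge penalty are assumptions for demonstration, not the gist's actual pipeline.

using LinearAlgebra, Random

# Minimal sketch (assumptions throughout): project stand-in features to a
# lower dimension at random, then fit a linear probe on the projections.
Random.seed!(42)
n, d, k = 1000, 512, 64
Z = randn(n, d)                        # stand-in for learned representations
coords = randn(n, 2)                   # stand-in for (lat, lon) targets

P = randn(d, k) ./ sqrt(k)             # random projection matrix
Zp = Z * P                             # projected features

λ = 1e-3                               # ridge penalty for the probe
W = (Zp'Zp + λ * I) \ (Zp'coords)      # closed-form ridge regression
probe_mse = sum(abs2, Zp * W .- coords) / n
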
pat-alt / cp_sr.jl
Last active July 31, 2023 09:50
Symbolic Regression with Conformal Prediction Intervals.
using ConformalPrediction
using Distributions
using MLJ
using Plots
# Inputs:
N = 600
xmax = 3.0
d = Uniform(-xmax, xmax)
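
The preview stops at the inputs above. To make the idea concrete, here is a library-free sketch of split-conformal intervals around a point predictor, continuing from `N` and `d`; the toy target and the stand-in predictor are assumptions, not the gist's fitted symbolic-regression model.

using Statistics

x = rand(d, N)
y = sin.(x) .+ 0.2 .* randn(N)               # assumed toy target
f̂(x) = sin.(x)                               # stand-in for a fitted model

# Split-conformal: calibrate on held-out residuals, widen by their quantile.
cal = 1:N÷2                                   # calibration split
scores = abs.(y[cal] .- f̂(x[cal]))            # nonconformity scores
α = 0.05
q̂ = quantile(scores, ceil((length(cal) + 1) * (1 - α)) / length(cal))
lower, upper = f̂(x) .- q̂, f̂(x) .+ q̂          # marginal (1 - α) intervals
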
pat-alt / simple_inductive_cp.jl
Last active October 24, 2022 11:43
Simple, inductive conformal classification in Julia. Code snippet from [ConformalPrediction.jl](https://github.com/pat-alt/ConformalPrediction.jl).
# Simple
"The `SimpleInductiveClassifier` is the simplest approach to Inductive Conformal Classification. Contrary to the [`NaiveClassifier`](@ref) it computes nonconformity scores using a designated calibration dataset."
mutable struct SimpleInductiveClassifier{Model <: Supervised} <: ConformalSet
    model::Model
    coverage::AbstractFloat
    scores::Union{Nothing,AbstractArray}
    heuristic::Function
    train_ratio::AbstractFloat
end
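
For readers unfamiliar with the method, here is a standalone sketch of the calibration-and-quantile logic behind an inductive conformal classifier. The simulated probabilities and labels are assumptions, and the heuristic (one minus the true-class probability) is the usual textbook choice, not necessarily the package default.

using Statistics, Random

Random.seed!(1)
n_cal, K = 200, 3
p̂ = rand(n_cal, K); p̂ ./= sum(p̂, dims=2)    # simulated class probabilities
ycal = rand(1:K, n_cal)                       # simulated calibration labels

# Nonconformity score: one minus the probability of the true class.
scores = [1 - p̂[i, ycal[i]] for i in 1:n_cal]

α = 0.1
q̂ = quantile(scores, ceil((n_cal + 1) * (1 - α)) / n_cal)

# Prediction set for a new probability vector: all labels scoring below q̂.
prediction_set(p) = findall(k -> 1 - p[k] <= q̂, 1:length(p))
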
pat-alt / adapt_gradient.jl
Created May 10, 2022 11:03
Adapts the gradient of the counterfactual loss function so that CounterfactualExplanations.jl can be used with a model trained in R.
import CounterfactualExplanations.Generators: ∂ℓ
using LinearAlgebra
# Counterfactual loss:
function ∂ℓ(
    generator::AbstractGradientBasedGenerator,
    counterfactual_state::CounterfactualState)
    M = counterfactual_state.M
    nn = M.nn
    x′ = counterfactual_state.x′
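
The preview ends before the gradient computation itself. As a hedged illustration of the underlying idea, the following self-contained snippet differentiates a cross-entropy counterfactual loss with respect to the candidate input using Flux; the model, target, and loss choice are assumptions, not the gist's code.

using Flux

nn = Chain(Dense(2, 8, relu), Dense(8, 2))    # stand-in classifier
x′ = randn(Float32, 2)                         # candidate counterfactual
target = Flux.onehot(2, 1:2)                   # desired class

# Gradient of the loss w.r.t. the counterfactual input:
∂ = Flux.gradient(x -> Flux.logitcrossentropy(nn(x), target), x′)[1]
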
pat-alt / adapt_torch_model.jl
Created May 10, 2022 11:01
Adapts a custom `torch` model trained in R for use with CounterfactualExplanations.jl.
using Flux
using CounterfactualExplanations, CounterfactualExplanations.Models
import CounterfactualExplanations.Models: logits, probs # import functions in order to extend
# Step 1)
struct TorchNetwork <: Models.AbstractFittedModel
    nn::Any
end
# Step 2)
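
The preview stops at the second step. Extending the two imported functions for the wrapper type plausibly looks like the sketch below; treat it as an assumption about the gist's continuation, not its verbatim code.

# Step 2 (sketch): extend `logits` and `probs` for the wrapper type,
# assuming the wrapped network can be called like a function on inputs.
logits(M::TorchNetwork, X::AbstractArray) = M.nn(X)
probs(M::TorchNetwork, X::AbstractArray) = Flux.softmax(logits(M, X))
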
pat-alt / laplace_mlp.jl
Created February 21, 2022 11:35
Laplace approximation for effortless Bayesian deep learning - MLP.
# Import libraries.
using Flux, Plots, Random, PlotThemes, Statistics, BayesLaplace
theme(:wong)
# Toy data:
xs, y = toy_data_linear(100)
X = hcat(xs...); # bring into tabular format
data = zip(xs,y)
# Build MLP:
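
The preview cuts off at the model definition. Continuing the snippet above (Flux is already loaded), a minimal MLP for this 2-D binary-classification toy setting might look as follows; the width, activation, and loss are assumptions.

# Sketch (assumed architecture): single-hidden-layer MLP for the 2-D toy data.
n_hidden = 32
nn = Chain(Dense(2, n_hidden, σ), Dense(n_hidden, 1))
loss(x, y) = Flux.logitbinarycrossentropy(nn(x), y)
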
pat-alt / laplace_logit.jl
Last active February 21, 2022 11:36
Laplace approximation for effortless Bayesian deep learning - logistic regression.
# Import libraries.
using Flux, Plots, Random, PlotThemes, Statistics, BayesLaplace
theme(:wong)
# Toy data:
xs, y = toy_data_linear(100)
X = hcat(xs...); # bring into tabular format
data = zip(xs,y)
# Neural network:
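
Here the "network" reduces to logistic regression, i.e. a single dense layer. Continuing the snippet above, the sketch below is an assumption consistent with the description, not the gist's verbatim code.

# Sketch (assumption): logistic regression expressed as a single-layer network.
nn = Chain(Dense(2, 1))
loss(x, y) = Flux.logitbinarycrossentropy(nn(x), y)
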
pat-alt / newtons_method.jl
Created November 16, 2021 11:25
Newton's method with Armijo backtracking in Julia.
# Newton's Method
function arminjo(𝓁, g_t, θ_t, d_t, args, ρ, c=1e-4)
    𝓁(θ_t .+ ρ .* d_t, args...) <= 𝓁(θ_t, args...) .+ c .* ρ .* d_t'g_t
end

function newton(𝓁, θ, ∇𝓁, ∇∇𝓁, args; max_iter=100, τ=1e-5)
    # Initialize:
    converged = false # termination state
    t = 1 # iteration count
    θ_t = θ # initial parameters
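
The preview truncates inside `newton`. Here is a self-contained sketch of how the remaining loop could combine the Newton direction with the Armijo check above; the halving schedule and stopping rule are assumptions, not the gist's code.

using LinearAlgebra

# Sketch of the remaining iteration loop (assumptions throughout):
function newton_sketch(𝓁, θ, ∇𝓁, ∇∇𝓁, args; max_iter=100, τ=1e-5)
    θ_t = θ
    for t in 1:max_iter
        g_t = ∇𝓁(θ_t, args...)
        norm(g_t) < τ && break                  # converged: gradient small
        d_t = -∇∇𝓁(θ_t, args...) \ g_t          # Newton direction
        ρ = 1.0
        while !arminjo(𝓁, g_t, θ_t, d_t, args, ρ) && ρ > 1e-8
            ρ /= 2                               # backtrack until Armijo holds
        end
        θ_t = θ_t .+ ρ .* d_t
    end
    return θ_t
end
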
pat-alt / bayes_logreg.jl
Created November 15, 2021 16:27
Loss function and its derivatives for Bayesian Logistic Regression with Laplace Approximation.
# Loss:
function 𝓁(w, w_0, H_0, X, y)
    N = length(y)
    D = size(X)[2]
    μ = sigmoid(w, X)
    Δw = w - w_0
    # `∑` and `sigmoid` are presumably user-defined helpers (e.g. ∑ = sum):
    l = -∑(y[n] * log(μ[n]) + (1 - y[n]) * log(1 - μ[n]) for n = 1:N) + 1/2 * Δw'H_0*Δw
    return l
end
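
The description promises the derivatives as well. Following the standard derivation for this loss (gradient of the negative log-likelihood plus the Gaussian-prior term), the gradient plausibly looks like the sketch below; it is an assumption consistent with `𝓁` above, not the gist's verbatim code.

# Gradient sketch: ∇𝓁 = X'(μ - y) + H_0 * (w - w_0)
function ∇𝓁(w, w_0, H_0, X, y)
    μ = sigmoid(w, X)
    return X' * (μ .- y) + H_0 * (w - w_0)
end
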
pat-alt / logit.R
Last active June 22, 2021 16:06
A simple implementation of logistic regression using iteratively reweighted least squares (IRLS). Not performance-optimized; meant solely for demonstration. Largely based on http://personal.psu.edu/jol2/course/stat597e/notes2/logit.pdf.
logit <- function(X, y, beta_0=NULL, tau=1e-9, max_iter=10000) {
  if (!all(X[, 1] == 1)) {
    X <- cbind(1, X)
  }
  p <- ncol(X)
  n <- nrow(X)
  # Initialization: ----
  if (is.null(beta_0)) {
    beta_latest <- matrix(rep(0, p)) # naive first guess
  }
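
The R preview ends at the initialization step. For completeness, here is a compact, self-contained IRLS sketch in Julia (the document's main language); it mirrors the same algorithm but is an illustration under assumed inputs (0/1 labels, numeric design matrix), not a translation of the full gist.

using LinearAlgebra

function logit_irls(X, y; τ=1e-9, max_iter=10_000)
    all(X[:, 1] .== 1) || (X = hcat(ones(size(X, 1)), X))  # ensure intercept
    β = zeros(size(X, 2))
    for _ in 1:max_iter
        μ = 1 ./ (1 .+ exp.(-X * β))         # predicted probabilities
        W = Diagonal(μ .* (1 .- μ))           # IRLS weights
        Δ = (X' * W * X) \ (X' * (y .- μ))    # Newton step
        β += Δ
        norm(Δ) < τ && break                  # converged
    end
    return β
end
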