Will Tebbutt (willtebbutt)

willtebbutt / hmc_gaussian_prior_degeneracy.jl
Created September 5, 2023 11:40
Acceptance rate drops off as dimension increases, despite a simple target density.
using Pkg
Pkg.activate(; temp=true)
pkg"add AdvancedHMC, LogDensityProblems, LinearAlgebra, Plots, Random, Zygote, Statistics"
using AdvancedHMC, LogDensityProblems, LinearAlgebra, Plots, Random, Zygote, Statistics
# Define the target distribution using the `LogDensityProblems` interface.
struct LogTargetDensity
    dim::Int
end
LogDensityProblems.logdensity(::LogTargetDensity, θ) = -sum(abs2, θ) / 2
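# The preview cuts off here. What follows is a minimal sketch of how the rest of
# the experiment might look, following the standard AdvancedHMC workflow; the
# dimension, adaptation settings, Zygote gradient backend, and acceptance-rate
# summary below are assumptions for illustration, not taken from the gist.
LogDensityProblems.dimension(p::LogTargetDensity) = p.dim
function LogDensityProblems.capabilities(::Type{LogTargetDensity})
    return LogDensityProblems.LogDensityOrder{0}()
end

D = 50                               # assumed dimension for this illustration
ℓπ = LogTargetDensity(D)
rng = Xoshiro(123)
initial_θ = randn(rng, D)

# Diagonal metric, Zygote gradients, NUTS-style kernel, Stan-style adaptation.
metric = DiagEuclideanMetric(D)
hamiltonian = Hamiltonian(metric, ℓπ, Zygote)
initial_ϵ = find_good_stepsize(hamiltonian, initial_θ)
integrator = Leapfrog(initial_ϵ)
kernel = HMCKernel(Trajectory{MultinomialTS}(integrator, GeneralisedNoUTurn()))
adaptor = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.8, integrator))

samples, stats = sample(hamiltonian, kernel, initial_θ, 2_000, adaptor, 1_000; progress=false)

# Summarise the quantity the gist is about: mean acceptance rate at this dimension.
mean(getproperty.(stats, :acceptance_rate))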
using AbstractGPs, KernelFunctions
# Generate toy data.
num_dims_in = 5
num_dims_out = 4
num_obs = 100
X = randn(num_obs, num_dims_in)
Y = randn(num_obs, num_dims_out)
# Convert to format required for AbstractGPs / KernelFunctions.
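The preview stops at the conversion comment. Below is a minimal sketch of one way the conversion might continue, wrapping the rows of `X` with `RowVecs` and treating each column of `Y` as an independent output; the kernel choice, noise variance, and per-output treatment are assumptions for illustration, not the gist's own code.
# (Illustrative sketch under the assumptions stated above.)
x = RowVecs(X)          # view the rows of X as a length-num_obs vector of inputs
f = GP(SEKernel())      # zero-mean GP with a squared-exponential kernel (assumed)
fx = f(x, 0.1)          # finite-dimensional marginal with observation noise variance 0.1
lmls = [logpdf(fx, Y[:, p]) for p in 1:num_dims_out]   # one independent GP per output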
willtebbutt / gp_gpu.jl
Last active February 6, 2022 14:39
GPs on the GPU
using Revise
using AbstractGPs
using BenchmarkTools
using CUDA
using KernelFunctions
using LinearAlgebra
using Random
using AbstractGPs: AbstractGP, FiniteGP, ZeroMean
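Only the imports survive in the preview. Below is a minimal sketch of the kind of CPU-vs-GPU comparison the title suggests, building a squared-exponential kernel matrix out of matrix multiplication and broadcasting so the same function runs on `Array` and `CuArray`; the kernel, input sizes, and benchmark harness are assumptions for illustration, and it does not touch the `FiniteGP` / `ZeroMean` internals the imports hint at. It relies on the `BenchmarkTools`, `CUDA`, and `LinearAlgebra` imports above.
# (Illustrative sketch under the assumptions stated above.)
# Squared-exponential kernel matrix from pairwise squared distances; only matmul
# and broadcast, so it runs unchanged on Array and CuArray.
function se_kernelmatrix(X::AbstractMatrix)   # X is d × n, columns are inputs
    sq = sum(abs2, X; dims=1)
    D2 = max.(sq' .+ sq .- 2 .* (X' * X), 0f0)   # clamp round-off negatives
    return exp.(-0.5f0 .* D2)
end

X_cpu = randn(Float32, 2, 2_000)
X_gpu = cu(X_cpu)

@btime se_kernelmatrix($X_cpu);
@btime CUDA.@sync se_kernelmatrix($X_gpu);    # @sync so the GPU timing is meaningful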
willtebbutt / toy_chainrules_rmad.jl
Last active April 16, 2019 20:46
Toy tape-based reverse-mode AD with minimal Cassette usage.
#
# This uses the Nabla.jl-style interception mechanism whereby
# we wrap things that are to be differentiated w.r.t. in a
# thin wrapper. There are lots of things that you can't
# propagate derivative information through with this kind of
# approach without quite a lot of extra machinery, but the
# examples at the bottom do work.
#
using ChainRules, Cassette
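Only the header comment and imports survive in the preview. Below is a minimal sketch of the wrapper-plus-tape idea the comment describes, using `ChainRulesCore.rrule` for the primitives; it skips the Cassette interception entirely, and the `Tracked` type, global tape, and intercepted primitives are assumptions for illustration, not the gist's own code.
# (Illustrative sketch under the assumptions stated above.)
using ChainRules, ChainRulesCore

# Values we differentiate w.r.t. are wrapped in `Tracked`; every intercepted
# primitive records its ChainRules pullback on a shared tape, and the reverse
# pass walks the tape backwards, accumulating adjoints.
mutable struct Tracked{T}
    value::T
    adjoint::Any   # accumulated cotangent, filled in during the reverse pass
end
Tracked(x) = Tracked(x, zero(x))

value_of(x::Tracked) = x.value
value_of(x) = x

const TAPE = Any[]   # entries: (pullback, output node, inputs), in execution order

# Intercept a primitive: compute the result and pullback via ChainRules, record both.
function track(f, args...)
    y, pb = ChainRulesCore.rrule(f, map(value_of, args)...)
    out = Tracked(y)
    push!(TAPE, (pb, out, args))
    return out
end

# A handful of intercepted primitives -- just enough for the example below.
Base.:+(a::Tracked, b::Tracked) = track(+, a, b)
Base.:*(a::Tracked, b::Tracked) = track(*, a, b)
Base.sin(a::Tracked) = track(sin, a)

# Reverse pass: seed the output adjoint with one and apply the recorded pullbacks
# in reverse execution order.
function backprop!(out::Tracked)
    out.adjoint = one(out.value)
    for (pb, node, inputs) in reverse(TAPE)
        dargs = pb(node.adjoint)   # (∂f, ∂arg1, ∂arg2, ...)
        for (inp, d) in zip(inputs, Base.tail(dargs))
            inp isa Tracked && (inp.adjoint += ChainRulesCore.unthunk(d))
        end
    end
    return nothing
end

# Example: d/dx [sin(x) * x + x] at x = 2.0 equals cos(2) * 2 + sin(2) + 1.
empty!(TAPE)
x = Tracked(2.0)
y = sin(x) * x + x
backprop!(y)
x.adjoint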