---
title: "Stan templates"
output: html_notebook
---
**Things to keep in mind:**
- Stan parameterizes the normal distribution by its *standard deviation*: write `normal(mu, sigma)`, not `normal(mu, sigma^2)` (see the quick check after this list).
- Leave a blank line at the end of the chunk defining Stan code.
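As a quick sanity check of the first point (plain base R, nothing Stan-specific): `rnorm` uses the same standard-deviation parameterization, so draws matching Stan's `normal(0, 2)` should have variance near 4, not 2.
```{r}
# rnorm(), like Stan's normal(), takes the standard deviation:
# these are draws from N(0, sigma^2 = 4), not N(0, 2).
x <- rnorm(1e5, mean = 0, sd = 2)
var(x) # approximately 4
```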
```{r}
library(rstan)
```
# Normal model with a g-prior
Model:
$$
Y = X\beta + \varepsilon,\quad \varepsilon \sim N(0, \sigma^2 I_n).
$$
Prior:
$$
\beta \mid \sigma^2 \sim N(0, g \,\sigma^2 (X^TX)^{-1}),\\
\phi = 1/\sigma^2 \sim G(a/2, a\,\sigma_0^2/2)
$$
```{stan output.var = "model1", cache=TRUE}
data {
int<lower=0> n; // Number of observations
real Y[n];
int<lower=0> p; // Number of covariates
matrix[n, p] X;
// Hyperparameters
real<lower=0> g;
real<lower=0> a;
real<lower=0> sigma_0_sq;
}
parameters {
vector[p] beta;
real<lower=0> sigma;
}
transformed parameters {
real<lower=0> phi;
phi = 1 / sigma^2;
}
model {
phi ~ gamma(a / 2, a * sigma_0_sq / 2);
beta ~ multi_normal(rep_vector(0, p), g * inverse(phi * X' * X));
for (i in 1:n) {
Y[i] ~ normal(X[i] * beta, sigma);
}
}
```
```{r}
data = list(n = nrow(mtcars),
            Y = mtcars$mpg,
            p = 4, # columns of the design matrix, including the intercept
            X = model.matrix(mpg ~ 1 + cyl + disp + wt, mtcars),
            g = nrow(mtcars), # unit-information g-prior: g = n
            a = 1,
            sigma_0_sq = 1)
fit1 <- rstan::sampling(model1, data)
```
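Once the sampler has run, the stanfit object can be summarized and its draws extracted; a minimal sketch using the fit stored above in `fit1`:
```{r}
# Posterior summaries of the coefficients and error scale.
print(fit1, pars = c("beta", "sigma"))

# Raw posterior draws, e.g. for 95% credible intervals.
draws <- rstan::extract(fit1, pars = "beta")
apply(draws$beta, 2, quantile, probs = c(0.025, 0.975))
```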
# Hierarchical model
Model:
$$
Y_{i,j} = \theta_j + \varepsilon_{i,j}, \quad \varepsilon_{i,j} \sim N(0, \sigma^2)
$$
Prior:
$$
\theta_j \mid \mu \sim N(\mu, \sigma_\theta^2 )\\
\mu \sim N(0, \sigma_\mu^2)\\
\phi = 1/\sigma^2 \sim G(a/2, a \,\sigma_0^2/2)
$$
```{stan output.var = "model2", cache=TRUE}
data {
int<lower=0> n; // Number of observations
real Y[n]; // All observations Y
int<lower=0> J; // Number of groups
int<lower=0, upper=J> jj[n]; // Assignments of observations to groups
// Hyperparameters
real<lower=0> a;
real<lower=0> sigma_0_sq;
real<lower=0> sigma_theta;
real<lower=0> sigma_mu;
}
parameters {
real theta[J];
real<lower=0> sigma;
real mu;
}
transformed parameters {
real<lower=0> phi;
phi = 1 / sigma^2;
}
model {
phi ~ gamma(a / 2, a * sigma_0_sq / 2);
mu ~ normal(0, sigma_mu);
theta ~ normal(mu, sigma_theta);
for (k in 1:n) {
Y[k] ~ normal(theta[jj[k]], sigma);
}
}
```
```{r}
data = list(n = 50,
            Y = rnorm(50),
            J = 2,
            jj = c(rep(1, 25), rep(2, 25)),
            a = 1,
            sigma_0_sq = 1,
            sigma_theta = 1,
            sigma_mu = 1)
fit2 <- rstan::sampling(model2, data)
```
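The posterior group means $\theta_j$ are shrunk toward the common mean $\mu$; a quick comparison against the raw group averages, using the fit stored above in `fit2`:
```{r}
post <- rstan::extract(fit2, pars = c("theta", "mu"))
colMeans(post$theta)          # partially pooled group means
tapply(data$Y, data$jj, mean) # raw (unpooled) group means
mean(post$mu)                 # posterior mean of the common mean
```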
# Logistic regression
Model:
$$
Y_i \sim \text{Ber}(p(x_i)), \quad p(x_i) = \text{logit}^{-1}(\alpha + \beta x_i) = \frac{\exp(\alpha + \beta x_i)}{1 + \exp(\alpha + \beta x_i)}
$$
```{stan output.var = "model", cache=TRUE}
data {
int<lower=0> n; // Number of observations
int<lower=0, upper=1> Y[n]; // All observations Y
vector[n] x;
}
parameters {
real alpha;
real beta;
}
model {
// Flat priors for alpha and beta.
for (i in 1:n) {
Y[i] ~ bernoulli_logit(alpha + beta * x[i]);
}
}
```
```{r}
y <- c(1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0)
x <- c(53, 57, 58, 63, 66, 67, 67, 67, 68, 69, 70, 70, 70, 70, 72, 73, 75,
       75, 76, 76, 78, 79, 81)
data = list(n = length(y),
            Y = y,
            x = x)
fit3 <- rstan::sampling(model3, data, iter = 10000, warmup = 1000)
```
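The posterior draws can be pushed through the inverse logit to get success probabilities; a minimal sketch evaluating $p(x)$ at the arbitrary example value $x = 65$, using the fit stored above in `fit3`:
```{r}
post <- rstan::extract(fit3, pars = c("alpha", "beta"))

# plogis() is base R's inverse-logit; x = 65 is an arbitrary example value.
mean(plogis(post$alpha + post$beta * 65))
```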