@andyljones
Last active July 26, 2016 17:40
import scipy as sp
from scipy.special import gammaln


def log_marginal(p, n, alpha=2):
    """Log-marginal probability of `p` positive trials and `n` negative trials under a
    beta-binomial model with a symmetric Beta(`alpha`, `alpha`) prior. See
    http://www.cs.ubc.ca/~murphyk/Teaching/CS340-Fall06/reading/bernoulli.pdf
    for details.
    """
    first = gammaln(alpha + p) + gammaln(alpha + n) - gammaln(2*alpha + p + n)
    second = gammaln(2*alpha) - 2*gammaln(alpha)
    return first + second

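As a sanity check (my addition, not part of the original gist), the closed form above should agree with brute-force numerical integration of the beta-binomial marginal, m(p, n) = integral over theta of theta^p (1 - theta)^n Beta(theta; alpha, alpha). A minimal sketch, repeating the closed form so it runs on its own:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def closed_form(p, n, alpha=2):
    # Same expression as `log_marginal`, repeated so this check is self-contained.
    first = gammaln(alpha + p) + gammaln(alpha + n) - gammaln(2*alpha + p + n)
    second = gammaln(2*alpha) - 2*gammaln(alpha)
    return first + second

def by_quadrature(p, n, alpha=2):
    # 1/B(alpha, alpha), the Beta prior's normalizing constant.
    norm = np.exp(gammaln(2*alpha) - 2*gammaln(alpha))
    # theta^p (1-theta)^n weighted by the Beta(alpha, alpha) prior density.
    integrand = lambda t: t**p * (1 - t)**n * norm * (t * (1 - t))**(alpha - 1)
    value, _ = quad(integrand, 0, 1)
    return np.log(value)

# The two routes should agree to well within quadrature error.
assert abs(closed_form(14, 7) - by_quadrature(14, 7)) < 1e-6
assert abs(closed_form(2, 0) - by_quadrature(2, 0)) < 1e-6
```

The integrand is a polynomial in theta, so Gauss-Kronrod quadrature handles it essentially exactly.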
# In 2016, there are 14 images showing busy checkpoints and 7 images showing quiet checkpoints
# (inc. the two from Feb 13th/14th)
# In 2015, there are 2 images showing busy checkpoints and 0 images showing quiet checkpoints.
# Now, first calculate the likelihood of the data if the probability of seeing a busy checkpoint was
# the same in 2015 as it is in 2016.
log_marginal_same = log_marginal(14 + 2, 7 + 0)
# Next, calculate the likelihood of the data if the probability of seeing a busy checkpoint in
# 2015 is allowed to differ from the probability of seeing one in 2016.
log_marginal_diff = log_marginal(14, 7) + log_marginal(2, 0)
# Now calculate the probability of each model (assuming that we've no prior preference for one
# or the other).
prob_same = sp.exp(log_marginal_same - sp.special.logsumexp([log_marginal_diff, log_marginal_same]))
prob_diff = 1 - prob_same
# This comes out to a ~58% probability that the 'same' model holds and ~42% that the 'diff'
# model holds. Which is to say, it's utterly inconclusive!
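For reference, here is a self-contained re-run of the whole calculation (my addition): in modern SciPy, `logsumexp` lives in `scipy.special` rather than `scipy.misc`, and NumPy's `exp` replaces the old `scipy.exp` alias. With the counts above it confirms the conclusion:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def log_marginal(p, n, alpha=2):
    # Beta-binomial log-marginal with a symmetric Beta(alpha, alpha) prior,
    # as defined above.
    first = gammaln(alpha + p) + gammaln(alpha + n) - gammaln(2*alpha + p + n)
    second = gammaln(2*alpha) - 2*gammaln(alpha)
    return first + second

# 'Same' model: pool the 2015 and 2016 counts under one busy-checkpoint probability.
log_same = log_marginal(14 + 2, 7 + 0)
# 'Diff' model: separate probabilities for each year.
log_diff = log_marginal(14, 7) + log_marginal(2, 0)

# Posterior probability of the 'same' model under a 50/50 prior over models.
prob_same = np.exp(log_same - logsumexp([log_diff, log_same]))
print(round(prob_same, 3))  # prints 0.582
```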