Cameron Davidson-Pilon CamDavidsonPilon

Learning bio brb
fivethirtyeight.csv

votes                scaled_prob
64.02877697841728    0
79.85611510791367    0
93.52517985611513    0
107.19006190396522   0
120.14388489208633   0
135.97122302158274   0
151.07913669064752   0
159.70386481512466   0.011627906976744207
173.36038146227202   0.02906976744186074
CamDavidsonPilon / nodered.json
Last active Jun 30, 2020
Part of blog post
nodered.json (Node-RED flow export, truncated in the gist):

[
    {
        "id": "328cb8d0.ad53d8",
        "type": "tab",
        "label": "Flow 1",
        "disabled": false,
        "info": ""
    },
    {
        "id": "e064c4c9.ed0b18"
import time
import board
import adafruit_dht
from paho.mqtt import publish

# Initialize the DHT device, with data pin connected to GPIO 21:
dhtDevice = adafruit_dht.DHT22(board.D21)

while True:
    # the loop body is cut off in the gist; a plausible read-and-publish
    # step would look like this (topic and hostname are placeholders):
    try:
        publish.single("sensors/temperature", dhtDevice.temperature, hostname="localhost")
    except RuntimeError:
        pass  # DHT22 reads fail intermittently; retry on the next loop
    time.sleep(2.0)
b_splines.ipynb (notebook failed to render)
Our model is

log(cells/mL) = alpha * (secchi stick depth) + intercept + noise

However, our cells/mL measurement comes from a noisy hemocytometer, and the
depth measured with the secchi stick is also noisy. Our goal is still to
infer alpha and intercept as well as possible, so we can dump the
hemocytometer.
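To see why the noisy predictor matters, here is a minimal simulation sketch (my addition; alpha, intercept, and the noise scales are made-up values, not from the post). Naive least squares on the noisy depth attenuates the slope estimate toward zero, which is the classic errors-in-variables problem.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical ground truth (invented for illustration)
alpha_true, intercept_true = -0.8, 14.0

n = 500
true_depth = rng.uniform(0.2, 2.0, size=n)          # true secchi depth
obs_depth = true_depth + rng.normal(0, 0.05, n)     # noisy stick reading
log_cells = alpha_true * true_depth + intercept_true + rng.normal(0, 0.1, n)

# naive OLS on the noisy predictor; measurement error in depth
# biases the slope estimate toward zero (attenuation)
alpha_hat, intercept_hat = np.polyfit(obs_depth, log_cells, 1)
print(alpha_hat, intercept_hat)
```

With this noise level the attenuation is mild, but it grows with the measurement-error variance relative to the spread of true depths.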
import numpy as np
import pandas as pd

# test previous algorithm
actuals = pd.read_csv("")  # path elided in the gist

DELTA = 0.1

def survival_function(t, lambda_=50., rho=1.5):
    # Assume simple Weibull model
    return np.exp(-(t / lambda_) ** rho)
I consider this a greedy algorithm, since at each time step, I ask which is a better "twist". I don't think it's optimal.
The idea is to estimate the probability of discovering tau in the next time step, given your current position and knowledge (position being left or right, denoted 1 and 2 here). We calculate the probability of discovering tau in the next time step as follows:
t1 is the max time observed in position 1, and t2 in position 2. Denote P the random variable of which position tau is in (1 or 2). Small p is our current position. Suppose we start in position 1, i.e. p=1
Pr(discover tau in next delta time | t1, t2, p=1) =
    Pr(discover tau in next delta time | t1, t2, P=1, p=1) * Pr(P=1 | t1, t2) +
    Pr(discover tau in next delta time | t1, t2, P=2, p=1) * Pr(P=2 | t1, t2)

and the second summand vanishes, since we cannot discover tau in position 2 while searching position 1.
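A sketch of this greedy rule, using the Weibull survival function from the snippet above. The posterior `prob_P1` standing in for Pr(P=1 | t1, t2) is my placeholder; the gist does not show how it is computed.

```python
import numpy as np

def survival_function(t, lambda_=50., rho=1.5):
    # same simple Weibull model as in the gist above
    return np.exp(-(t / lambda_) ** rho)

def prob_discover_next(t, delta=0.1):
    # Pr(tau in (t, t + delta] | tau > t) under the Weibull model
    s_t = survival_function(t)
    return (s_t - survival_function(t + delta)) / s_t

def greedy_choice(t1, t2, prob_P1, delta=0.1):
    # compare the expected discovery probability of continuing in
    # position 1 vs twisting to position 2; prob_P1 is a placeholder
    # for Pr(P=1 | t1, t2)
    stay = prob_discover_next(t1, delta) * prob_P1
    twist = prob_discover_next(t2, delta) * (1.0 - prob_P1)
    return "stay" if stay >= twist else "twist"
```

For example, with equal elapsed times the choice is driven entirely by the posterior: `greedy_choice(10., 10., 0.9)` returns `"stay"`.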
CamDavidsonPilon /
Last active Aug 28, 2019
initial thoughts on jax
  • vmap is a very general function, but like einsum, I end up trying a bunch of permutations before it works the way I want. More documentation and examples, or higher order functions, would be helpful.

  • debugging is much more difficult than in autograd. Ex: tracking down NaNs is harder, and inspecting variables in jax is not possible?

  • It's as fast as advertised. jit is pretty impressive.

  • stax is a neat little sublibrary; I'd like to see more development there, but I understand the possible scope-creep.

  • I love the idea of riding the upgrade train of Jax, XLA and GPUs.

  • I see a lot of examples using internal Jax APIs and my code doesn't, so that gives me pause. Am I missing something, or are more higher order functions needed?

  • 🆕 Is vectorize the right API? I'm not sure. Perhaps some common patterns could be extracted into functions. I had a lot of trouble with trying to duplicate elementwise_grad in grad + vmap primitives. It was much easier in autograd. After reading h
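The `elementwise_grad` pattern mentioned in the last bullet can be recovered by composing `grad` with `vmap`; a minimal sketch (my example, not from the gist):

```python
import jax
import jax.numpy as jnp

def elementwise_grad(f):
    # autograd-style elementwise gradient: differentiate the scalar
    # function f, then vectorize the derivative over the input array
    return jax.vmap(jax.grad(f))

x = jnp.linspace(0.0, 1.0, 5)
dfdx = elementwise_grad(jnp.tanh)(x)  # equals 1 - tanh(x)**2 elementwise
```

The subtlety is that `jax.grad` requires a scalar-to-scalar function, so the vectorization has to come from `vmap` rather than from the function itself — which is exactly the permutation-hunting the first bullet complains about.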

tanks!.py

MAX = 114
MIN = 22

def falling_factorial(n, k):
    """computes (n) * (n-1) * ... * (n-(k-1))"""
    running_product = n
    for p in range(n - 1, n - k, -1):
        running_product *= p
    return running_product
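A quick sanity check of the helper (restated here so it runs standalone; the `return` statement is my completion of the truncated gist):

```python
def falling_factorial(n, k):
    # computes n * (n-1) * ... * (n-(k-1)), i.e. k descending factors
    running_product = n
    for p in range(n - 1, n - k, -1):
        running_product *= p
    return running_product

print(falling_factorial(5, 2))   # 5 * 4 = 20
print(falling_factorial(10, 3))  # 10 * 9 * 8 = 720
```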
import pandas as pd
from zepid.base import create_spline_transform
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

rossi = load_rossi()
rossi_with_splines = rossi.copy()

spline_transform, bp = create_spline_transform(rossi_with_splines['age'], term=3, restricted=False)
rossi_with_splines[['age0', 'age1', 'age2']] = pd.DataFrame(spline_transform(rossi_with_splines['age']))
rossi_with_splines = rossi_with_splines.drop('age', axis=1)

# CoxPHFitter is imported but unused in the gist as shown; the natural
# next step is fitting on the spline-expanded data:
cph = CoxPHFitter()
cph.fit(rossi_with_splines, duration_col='week', event_col='arrest')