Cameron Davidson-Pilon (CamDavidsonPilon)

delta_method.md

The Delta Method and Autograd

One of the reasons I'm really excited about autograd is that it lets me transform my abstract parameters into business logic. Let me explain with an example. Suppose I am modeling customer churn, and I have fitted a Weibull survival model using maximum likelihood estimation. I have two parameter estimates: lambda-hat and rho-hat. I also have their covariance matrix, which tells me how much uncertainty is present in the estimates (in lifelines, this is under the variance_matrix_ property). From this, I can plot the survival curve, which is fine, but what I really want is a measure of lifetime value. Customers give us $10 each time period for the first 3 time periods, and then $30 each period afterwards. For a single user, the average LTV (up to some fixed timeline) calculation might look like:

from autograd import numpy as np
from lifelines import WeibullFitter

# create a Weibull model with fake data
wf = WeibullFitter().fit(np.arange(1, 100))
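
Continuing from the snippet above, here is a minimal sketch of the idea (my own illustration, not the gist's full code). It assumes lifelines' newer Weibull parameterization S(t) = exp(-(t/lambda)^rho), an arbitrary 24-period horizon, and that wf.variance_matrix_ is ordered (lambda_, rho_). The delta method then says Var(LTV) is approximately grad_LTV' * Cov * grad_LTV.

from autograd import grad

def ltv(params, timeline=np.arange(1.0, 25.0)):
    lambda_, rho_ = params[0], params[1]
    survival = np.exp(-(timeline / lambda_) ** rho_)   # P(customer still active at period t)
    revenue = np.where(timeline <= 3, 10.0, 30.0)      # $10 for the first 3 periods, $30 afterwards
    return np.sum(revenue * survival)

params = np.array([wf.lambda_, wf.rho_])
estimate = ltv(params)

# delta method: Var(ltv(params)) ~= grad_ltv' * Cov(params) * grad_ltv
gradient = grad(ltv)(params)
cov = np.asarray(wf.variance_matrix_)                  # a DataFrame in newer lifelines versions
std_error = np.sqrt(gradient.dot(cov).dot(gradient))
print(estimate, std_error)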
piecewise_exp.py
import numpy as np

def piecewise_exponential_survival_data(n, breakpoints, lambdas):
    """
    Examples
    --------
    >>> T = piecewise_exponential_survival_data(100000, [1, 3], [0.2, 3, 1.])
    >>> NelsonAalenFitter().fit(T).plot()
    """
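
The preview cuts off before the function body. One way to generate this kind of data is to invert the piecewise-linear cumulative hazard at standard-exponential draws; the sketch below takes that approach (an assumption about the method, not the gist's actual implementation).

import numpy as np

def piecewise_exponential_survival_data_sketch(n, breakpoints, lambdas):
    # the cumulative hazard H(t) is piecewise linear, with slope lambdas[i] on segment i;
    # drawing H ~ Exponential(1) and inverting H(t) at that value gives a duration
    lambdas = np.asarray(lambdas, dtype=float)
    starts = np.append(0.0, breakpoints)                           # left edge of each segment
    cum_haz = np.append(0.0, np.cumsum(np.diff(starts) * lambdas[:-1]))
    H = np.random.exponential(size=n)
    idx = np.searchsorted(cum_haz, H, side="right") - 1            # segment each draw falls into
    return starts[idx] + (H - cum_haz[idx]) / lambdas[idx]

T = piecewise_exponential_survival_data_sketch(100000, [1, 3], [0.2, 3, 1.0])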
fitting.ipynb
gist:1f59682876c47266df8467cf7a0423d4
for _ in range(10):
    v = np.random.uniform(-10, 10, size=2)
    print(check_grad(_negative_log_likelihood, gradient_function, v, log(T), E), v)
"""
0.4712740782685882 [-6.56058891 5.14902935]
3237.6524450846837 [ 9.08538372 -1.41142257]
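
The helpers referenced above (_negative_log_likelihood, gradient_function, T, E) are not shown in the preview. Below is a self-contained sketch of the same kind of gradient check, using a Weibull negative log-likelihood and an autograd-derived gradient as stand-ins (these definitions are assumptions, not the gist's).

import numpy as np
from scipy.optimize import check_grad
from autograd import grad
from autograd import numpy as anp

T = np.random.exponential(2.0, size=1000)   # fake durations
E = np.ones_like(T)                         # all events observed

def _negative_log_likelihood(v, log_T, E):
    # Weibull NLL with v = (log(lambda), log(rho)), so any real v is a valid parameter
    log_lambda, log_rho = v[0], v[1]
    rho = anp.exp(log_rho)
    log_hazard = log_rho + (rho - 1) * log_T - rho * log_lambda
    cumulative_hazard = anp.exp(rho * (log_T - log_lambda))
    return -(anp.sum(E * log_hazard) - anp.sum(cumulative_hazard))

gradient_function = grad(_negative_log_likelihood)

for _ in range(10):
    v = np.random.uniform(-2, 2, size=2)    # narrower range than above, to avoid overflow
    print(check_grad(_negative_log_likelihood, gradient_function, v, np.log(T), E), v)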
perf_results.csv
N  frac (unique(T)/N)  batch (s)  single (s)  ratio (batch/single)
432 0.01 0.17586684226989746 0.24918293952941895 0.7057740092560965
432 0.09909090909090909 0.139909029006958 0.18905401229858398 0.7400479223154045
432 0.1881818181818182 0.18714380264282227 0.19823789596557617 0.9440364655369406
432 0.2772727272727273 0.15966224670410156 0.1492321491241455 1.0698917601949116
432 0.3663636363636364 0.1783008575439453 0.2960469722747803 0.6022721873286135
432 0.4554545454545455 0.2235269546508789 0.2482771873474121 0.9003120948768426
432 0.5445454545454546 0.2755610942840576 0.27878808975219727 0.9884249163190293
432 0.6336363636363637 0.2598390579223633 0.19139719009399414 1.3575907660648399
432 0.7227272727272728 0.28580784797668457 0.19529199600219727 1.4634898194878856
cox_performance_batch_single.py
from time import time

import pandas as pd
import numpy as np
from lifelines.datasets import load_rossi
from lifelines import CoxPHFitter

# This compares the batch algorithm (in CTV, i.e. CoxTimeVaryingFitter) against the
# single-iteration algorithm (the original in CPH, i.e. CoxPHFitter),
# varying N and the fraction of unique event times (unique(T) / N), which controls the amount of ties.
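
The preview stops after the imports. Continuing from them, here is a rough sketch of how timings like those in perf_results.csv could be produced; the resampling and tie-coarsening scheme are my assumptions, and the batch_mode flag on CoxPHFitter.fit only exists in newer lifelines releases (per its comment, the gist itself timed CoxTimeVaryingFitter's batch algorithm against CoxPHFitter's original one).

results = []
for n in [432, 1000, 5000]:                      # 432 = len(load_rossi())
    for frac in np.linspace(0.01, 0.99, 12):     # target fraction of unique event times
        df = load_rossi().sample(n=n, replace=True)
        # coarsen durations so roughly frac * n distinct event times remain (more ties as frac shrinks)
        df["week"] = np.ceil(df["week"].rank(method="first", pct=True) * frac * n)

        start = time()
        CoxPHFitter().fit(df, "week", "arrest", batch_mode=True)    # batch algorithm
        batch = time() - start

        start = time()
        CoxPHFitter().fit(df, "week", "arrest", batch_mode=False)   # single-iteration algorithm
        single = time() - start

        results.append((n, frac, batch, single, batch / single))

pd.DataFrame(results, columns=["N", "frac", "batch", "single", "ratio"]).to_csv("perf_results.csv", index=False)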
lex_loader.py
import csv
from collections import defaultdict

d = defaultdict(dict)

with open('lexicon_3.csv') as dict_:
    csv_reader = csv.reader(dict_, delimiter=',')
    for row in csv_reader:
        d[row[0]][row[1]] = int(row[2])
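
The resulting structure maps each token to a dict of labeled counts. A quick hypothetical check (the actual columns of lexicon_3.csv are not shown in the gist):

# hypothetical: if lexicon_3.csv contained the row "happy,positive,3",
# then after the loop above:
print(d["happy"]["positive"])   # -> 3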
five_thirty_eight.py
def falling_factorial(k, n):
    # product k * (k - 1) * ... * (n + 1), i.e. k! / n!
    product = 1
    counter = k
    while counter > n:
        product *= counter
        counter -= 1
    return product

f = lambda k: 2 * falling_factorial(k, k - 100) - k**100
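
Since falling_factorial(k, k - 100) = k!/(k - 100)!, f(k) is positive exactly when the probability that 100 independent uniform draws from {1, ..., k} are all distinct exceeds 1/2. Reading the gist as a search for that threshold (my interpretation, not stated in the file), the crossover can be found with a simple loop:

# smallest k >= 100 with f(k) >= 0, i.e. where
# P(100 draws from k values are all distinct) first reaches 1/2
k = 100
while f(k) < 0:
    k += 1
print(k)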
fixed_concordance.py
def concordance_index(event_times, predicted_scores, event_observed=None):
    """
    Calculates the concordance index (C-index) between two series
    of event times. The first is the real survival times from
    the experimental data, and the other is the predicted survival
    times from a model of some kind.

    The concordance index is a value between 0 and 1 where:
    0.5 is the expected result from random predictions,
    1.0 is perfect concordance, and
    0.0 is perfect anti-concordance (multiply predictions by -1 to get 1.0).
    """
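
The preview ends inside the docstring. For reference, here is a minimal O(n^2) pairwise sketch of the statistic the docstring describes, without the tie- and censoring-handling refinements a production implementation needs (an illustration, not the gist's fixed implementation):

import numpy as np

def naive_concordance_index(event_times, predicted_scores, event_observed=None):
    # count admissible pairs and how often the predictions order them like the true times
    event_times = np.asarray(event_times)
    predicted_scores = np.asarray(predicted_scores)
    if event_observed is None:
        event_observed = np.ones_like(event_times, dtype=bool)
    concordant, admissible = 0.0, 0
    for i in range(len(event_times)):
        for j in range(len(event_times)):
            # a pair is admissible if the earlier of the two times is an observed event
            if event_observed[i] and event_times[i] < event_times[j]:
                admissible += 1
                if predicted_scores[i] < predicted_scores[j]:
                    concordant += 1.0
                elif predicted_scores[i] == predicted_scores[j]:
                    concordant += 0.5
    return concordant / admissible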
s_l2.py
import numpy as np
from numpy.linalg import matrix_power
from matplotlib import pyplot as plt
import seaborn as sns
SIZE = 100
M = np.zeros((SIZE, SIZE))
# encoding rolls of die
for y in xrange(SIZE):
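
The preview stops at the loop header. Judging from the imports (matrix_power, pyplot) and the comment, the script builds a Markov transition matrix over board squares and propagates a die-rolling process through it; the continuation below is a guess under the simplest assumption (a plain race to the last square with a fair six-sided die, no snakes-and-ladders jumps):

# assumed continuation: fair-die transition probabilities between squares
for y in range(SIZE):
    for roll in range(1, 7):
        # one simple convention: moves past the last square land on it
        M[y, min(y + roll, SIZE - 1)] += 1 / 6.0

# distribution over squares after 20 rolls, starting from square 0
start = np.zeros(SIZE)
start[0] = 1.0
after_20 = start.dot(matrix_power(M, 20))

plt.plot(after_20)
plt.xlabel("square")
plt.ylabel("probability after 20 rolls")
plt.show()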