
@jdmaturen
Created October 30, 2014 22:26
Two methods of estimating the confidence interval and margin of error of NPS results. One uses the beta distribution as the conjugate prior of the Bernoulli likelihood. The other uses the central limit theorem and a standard-error calculation; the latter can also correct for finite population size.
import math

import numpy as np
from scipy.stats import beta


def nps_beta_dist(sample_size, promoters, detractors, confidence=95):
    """
    Confidence range of an NPS score. The NPS score is defined as the percent of
    promoters minus the percent of detractors. See also
    http://en.wikipedia.org/wiki/Net_Promoter

    For instance, if 250 out of 1000 sampled respondents are promoters and 75 are
    detractors, then the (point estimate of the) NPS score is 25% - 7.5% = 17.5%.
    Probabilistically we'd expect the NPS score to fall between 14.3% and 20.5%
    with 95% confidence.

    >>> np.allclose(nps_beta_dist(1000, 250, 75), (0.143, 0.205), atol=0.001)
    True
    """
    # Beta posteriors for the promoter and detractor proportions, each starting
    # from a uniform Beta(1, 1) prior.
    p_dist = beta(1 + promoters, 1 + sample_size - promoters)
    d_dist = beta(1 + detractors, 1 + sample_size - detractors)
    # Monte Carlo sample of the difference of the two posteriors.
    samples = p_dist.rvs(10000) - d_dist.rvs(10000)
    return np.percentile(samples, [50 - confidence / 2., 50 + confidence / 2.])
def finite_population_correction(population_size, sample_size):
    """
    http://en.wikipedia.org/wiki/Standard_error#Correction_for_finite_population

    >>> finite_population_correction(10000, 100)
    0.9950371902099892
    >>> finite_population_correction(10000, 1000)
    0.9487307357732752
    >>> finite_population_correction(10000, 10000)
    0.0
    """
    return math.sqrt((population_size - sample_size) / (population_size - 1.))
def nps_normal_margin_of_error(sample_size, promoters, detractors, population_size=None):
    """
    Using the central limit theorem, calculate the NPS point estimate and its
    margin of error (one standard error) from the sample counts. One margin of
    error corresponds to roughly 68% confidence, two to roughly 95%, etc. This
    method gives a very similar answer to nps_beta_dist.

    http://stats.stackexchange.com/questions/18603/how-can-i-calculate-margin-of-error-in-a-nps-net-promoter-score-result

    >>> np.allclose(nps_normal_margin_of_error(324, 324 // 2, 324 // 6), (0.333, 0.041), atol=0.001)
    True
    >>> nps_normal_margin_of_error(1000, 250, 75)
    (0.175, 0.01715735993677349)
    >>> mean, moe = nps_normal_margin_of_error(1000, 250, 75)
    >>> mean - moe * 2, mean + moe * 2
    (0.140685280126453, 0.20931471987354697)
    """
    correction = 1.
    if population_size:
        correction = finite_population_correction(population_size, sample_size)
    nps = (promoters - detractors) / float(sample_size)
    p = promoters / float(sample_size)
    d = detractors / float(sample_size)
    passives = 1 - p - d
    # Each respondent scores +1 (promoter), 0 (passive), or -1 (detractor);
    # this is the variance of that score around the NPS point estimate.
    variance = (1 - nps) ** 2 * p + (0 - nps) ** 2 * passives + (-1 - nps) ** 2 * d
    std_deviation = math.sqrt(variance)
    margin_of_error = correction * std_deviation / math.sqrt(sample_size)
    return nps, margin_of_error
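A minimal sketch of how the finite population correction changes the answer for the worked example from the docstrings (1000 respondents, 250 promoters, 75 detractors). It inlines the same arithmetic as `nps_normal_margin_of_error`; the population size of 2000 is an assumed figure chosen only to make the correction visible.

```python
import math

# Worked example from the docstrings above: 1000 respondents,
# 250 promoters, 75 detractors.
sample_size, promoters, detractors = 1000, 250, 75
nps = (promoters - detractors) / sample_size            # 0.175 point estimate
p, d = promoters / sample_size, detractors / sample_size
passives = 1 - p - d
# Variance of the per-respondent score (+1 / 0 / -1) around the point estimate.
variance = (1 - nps) ** 2 * p + (0 - nps) ** 2 * passives + (-1 - nps) ** 2 * d
moe = math.sqrt(variance / sample_size)                 # infinite-population MoE

# If those 1000 respondents were drawn from a customer base of only 2000
# (an assumed figure), the correction sqrt((N - n) / (N - 1)) shrinks the
# margin of error because we have sampled half the population.
correction = math.sqrt((2000 - sample_size) / (2000 - 1.))
print(moe, correction * moe)  # ~0.0172 uncorrected vs ~0.0121 corrected
```

With the whole population surveyed the correction factor goes to zero, which matches the `finite_population_correction(10000, 10000)` doctest: a census has no sampling error.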