Last active July 19, 2021 11:55
Implementation of Dunn's multiple comparison test, following a Kruskal-Wallis 1-way ANOVA
import warnings
from itertools import combinations

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests


def kw_dunn(groups, to_compare=None, alpha=0.05, method='bonf'):
    """
    Kruskal-Wallis 1-way ANOVA with Dunn's multiple comparison test

    Parameters
    ----------
    groups: sequence
        arrays corresponding to k mutually independent samples from
        continuous populations
    to_compare: sequence
        tuples specifying the indices of pairs of groups to compare, e.g.
        [(0, 1), (0, 2)] would compare group 0 with groups 1 & 2. by
        default, all possible pairwise comparisons between groups are
        performed.
    alpha: float
        family-wise error rate used for correcting for multiple comparisons
        (see statsmodels.stats.multitest.multipletests for details)
    method: string
        method used to adjust p-values to account for multiple comparisons
        (see statsmodels.stats.multitest.multipletests for options)

    Returns
    -------
    H: float
        Kruskal-Wallis H-statistic
    p_omnibus: float
        p-value corresponding to the global null hypothesis that the medians
        of the groups are all equal
    Z_pairs: float array
        Z-scores computed for the absolute difference in mean ranks for each
        pairwise comparison
    p_corrected: float array
        corrected p-values for each pairwise comparison, corresponding to the
        null hypothesis that the pair of groups has equal medians. note that
        these are only meaningful if the global null hypothesis is rejected.
    reject: bool array
        True for pairs where the null hypothesis can be rejected at the
        given alpha

    Reference
    ---------
    Gibbons, J. D., & Chakraborti, S. (2011). Nonparametric Statistical
    Inference (5th ed., pp. 353-357). Boca Raton, FL: Chapman & Hall.
    """

    # omnibus test (K-W ANOVA)
    # -------------------------------------------------------------------------

    groups = [np.array(gg) for gg in groups]

    k = len(groups)

    n = np.array([len(gg) for gg in groups])
    if np.any(n < 5):
        warnings.warn("Sample sizes < 5 are not recommended (K-W test assumes "
                      "a chi square distribution)")

    allgroups = np.concatenate(groups)
    N = len(allgroups)
    ranked = stats.rankdata(allgroups)

    # correction factor for ties
    T = stats.tiecorrect(ranked)
    if T == 0:
        raise ValueError('All numbers are identical in kruskal')

    # sum of ranks for each group
    j = np.insert(np.cumsum(n), 0, 0)
    R = np.empty(k, dtype=float)
    for ii in range(k):
        R[ii] = ranked[j[ii]:j[ii + 1]].sum()

    # the Kruskal-Wallis H-statistic
    H = (12. / (N * (N + 1.))) * ((R ** 2.) / n).sum() - 3 * (N + 1)

    # apply correction factor for ties
    H /= T

    df_omnibus = k - 1
    # stats.chisqprob was removed from scipy; use the chi2 survival function
    p_omnibus = stats.distributions.chi2.sf(H, df_omnibus)

    # multiple comparisons
    # -------------------------------------------------------------------------

    # by default we compare every possible pair of groups
    if to_compare is None:
        to_compare = tuple(combinations(range(k), 2))

    ncomp = len(to_compare)

    Z_pairs = np.empty(ncomp, dtype=float)
    Rmean = R / n

    for pp, (ii, jj) in enumerate(to_compare):

        # standardized score
        Zij = (np.abs(Rmean[ii] - Rmean[jj]) /
               np.sqrt((1. / 12.) * N * (N + 1) * (1. / n[ii] + 1. / n[jj])))
        Z_pairs[pp] = Zij

    # corresponding p-values obtained from the upper quantiles of the standard
    # normal distribution
    p_uncorrected = stats.norm.sf(Z_pairs) * 2.

    # correction for multiple comparisons
    reject, p_corrected, alphac_sidak, alphac_bonf = multipletests(
        p_uncorrected, alpha=alpha, method=method)

    return H, p_omnibus, Z_pairs, p_corrected, reject
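For those asking below about the expected format of `groups`: it is a sequence of 1-D samples, one per group. A minimal sketch with made-up data, using `scipy.stats.kruskal` to illustrate the same layout (the omnibus H it computes matches the one returned by `kw_dunn`; the commented-out call is the analogous invocation):

```python
import numpy as np
from scipy import stats

# `groups` is a sequence of 1-D samples, one array/list per group;
# the groups do not need to have equal lengths
groups = [
    [2.9, 3.0, 2.5, 2.6, 3.2],   # group 0
    [3.8, 2.7, 4.0, 2.4, 3.9],   # group 1
    [2.8, 3.4, 3.7, 2.2, 2.0],   # group 2
]

# scipy's omnibus K-W test takes the same samples, unpacked as arguments
H, p = stats.kruskal(*groups)
print(H, p)

# kw_dunn would be called with the sequence itself (hypothetical call):
# H, p_omnibus, Z_pairs, p_corrected, reject = kw_dunn(groups, method='bonf')
```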

Hi, can you please give an example for groups?


It seems as though groups is a two-dimensional list of samples e.g. [[1,2,3,4,5], [4,5,6,7,8,9]].


I think there might be an addition needed, although I could be completely wrong:

        # standardized score
        ts3_ts = list(np.unique(allgroups, return_counts=True)[1])
        E_ts3_ts = sum([x**3 - x for x in ts3_ts if x > 1])

        if sum([x > 1 for x in ts3_ts]) > 0:
            warnings.warn("We see ties.")

            yi = np.abs(Rmean[ii] - Rmean[jj])
            theta10 = (N * (N + 1)) / 12
            theta11 = E_ts3_ts / (12 * (N - 1))
            theta2 = (1. / n[ii] + 1. / n[jj])
            theta = np.sqrt((theta10 - theta11) * theta2)
            Zij = yi / theta
        else:
            Zij = (np.abs(Rmean[ii] - Rmean[jj]) /
                   np.sqrt((1. / 12.) * N * (N + 1) * (1. / n[ii] + 1. / n[jj])))
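For context, the `E_ts3_ts` term in the snippet above is the same tie sum that `scipy.stats.tiecorrect` uses internally; a quick sketch of the identity, on made-up pooled data with ties:

```python
import numpy as np
from scipy import stats

# pooled data with two tied groups (two 2s, three 3s)
allgroups = np.array([1, 2, 2, 3, 3, 3, 4, 5])
N = len(allgroups)

# sum of (t**3 - t) over tied groups, as in the snippet above
counts = np.unique(allgroups, return_counts=True)[1]
E_ts3_ts = sum(t**3 - t for t in counts if t > 1)

# scipy's correction factor is 1 - sum(t**3 - t) / (N**3 - N)
T = stats.tiecorrect(stats.rankdata(allgroups))
assert np.isclose(T, 1 - E_ts3_ts / (N**3 - N))
```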


@jazon33y You are correct. After adding your snippet, the output is now equal to that of R when there are ties. Thanks!


whokan commented Sep 6, 2017

What order are the post hoc arrays output in? From my own testing I think for three groups it's (1 vs 2), (1 vs 3), (2 vs 3)?


ayesop commented Nov 20, 2017

stats.chisqprob is deprecated as of scipy 0.17.0; use stats.distributions.chi2.sf instead:
p_omnibus = stats.distributions.chi2.sf(H, df_omnibus)

The format of 'groups' is still unclear; I get a "too many values to unpack" error when I use an array of samples as indicated.
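The "too many values to unpack" error usually means not all five return values are being captured, rather than a problem with `groups` itself; a sketch with a stand-in function (hypothetical values):

```python
def kw_dunn_stub():
    # stand-in returning the same five values as kw_dunn (hypothetical values)
    return 7.1, 0.03, [2.1], [0.04], [True]

# unpacking into fewer names than are returned raises ValueError
try:
    H, p = kw_dunn_stub()
except ValueError as e:
    print(e)

# capture all five return values instead
H, p_omnibus, Z_pairs, p_corrected, reject = kw_dunn_stub()
```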


stuboo commented Jan 10, 2018

@whokan I think your assumption is correct. I just tested 7 groups and this was the return order:

[(0,1), (0,2), (0,3), (0,4), (0,5), (0,6), (1,2), (1,3), (1,4), (1,5), (1,6), (2,3), (2,4), (2,5), (2,6), (3,4), (3,5), (3,6), (4,5), (4,6), (5,6)]
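That order follows from `itertools.combinations`, which the function uses to build `to_compare` when none is given; a quick check for three groups:

```python
from itertools import combinations

# default comparison order for k groups: lexicographic index pairs
pairs = list(combinations(range(3), 2))
print(pairs)  # [(0, 1), (0, 2), (1, 2)]
```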


eturkes commented Jan 16, 2018

Hello there @alimuldal I would like to include your code in a GPLv3 project. Your function does not seem to be licensed, so I'd like to know if I have permission to use and modify it. And if so, how would you like me to credit you? Would this be OK?

# The following section is adapted from code written by Alistair Muldal.
# Permission for use and modification has been granted by the author.
# Source:
# Retrieved: 2018.10.12 UTC
# START adapted code
your python code....
# END adapted code


@jazon33y thanks for the correction (now if you have k=2 samples the p-value is consistent with K-W test as it should). FYI for Python 2.x users that don't future import division, you will get Inf for Z-scores (and corresponding p-values of 0) because of the integer division for theta2.


ghost commented Mar 19, 2018

Hi all,
has anybody tried the script with 3 or more groups? My p-values are wrong (compared with R, and I checked that the same adjustment method was applied).
If it works for you, can you please share your script?


glickmac commented Nov 30, 2018

If you are getting an error with 'chisqprob' (see the Stack Overflow question on the deprecation), the bug fix via @VincentLa14 is:

from scipy import stats
stats.chisqprob = lambda chisq, df: stats.chi2.sf(chisq, df)
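A quick sanity check that this shim agrees with the chi-square survival function (the test values are just illustrative):

```python
from scipy import stats

# restore the removed name by aliasing the chi2 survival function
stats.chisqprob = lambda chisq, df: stats.chi2.sf(chisq, df)

# the patched name now agrees with chi2.sf
p = stats.chisqprob(3.84, 1)
assert abs(p - stats.chi2.sf(3.84, 1)) < 1e-12
```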
