Estimating entropy and mutual information with scikit-learn: visit https://github.com/mutualinfo/mutual_info

'''
Non-parametric computation of entropy and mutual information.

Adapted by G Varoquaux from code created by R Brette, itself
from several papers (see in the code).

This code is maintained at https://github.com/mutualinfo/mutual_info
Please download the latest code there, to have improvements and
bug fixes.

These computations rely on nearest-neighbor statistics.
'''
import numpy as np

from scipy.special import gamma, psi
from scipy import ndimage
from scipy.linalg import det
from numpy import pi

from sklearn.neighbors import NearestNeighbors

__all__ = ['entropy', 'mutual_information', 'entropy_gaussian']

EPS = np.finfo(float).eps

def nearest_distances(X, k=1):
    '''
    X = array(N, M)
    N = number of points
    M = number of dimensions

    Returns the distance to the kth nearest neighbor for every point in X.
    '''
    knn = NearestNeighbors(n_neighbors=k + 1)
    knn.fit(X)
    d, _ = knn.kneighbors(X)  # the first nearest neighbor is itself
    return d[:, -1]  # returns the distance to the kth nearest neighbor

def entropy_gaussian(C):
    '''
    Entropy of a gaussian variable with covariance matrix C.
    '''
    if np.isscalar(C):  # C is the variance
        return .5*(1 + np.log(2*pi)) + .5*np.log(C)
    else:
        n = C.shape[0]  # dimension
        return .5*n*(1 + np.log(2*pi)) + .5*np.log(abs(det(C)))

def entropy(X, k=1):
    '''Returns the entropy of X.

    Parameters
    ===========
    X : array-like, shape (n_samples, n_features)
        The data the entropy of which is computed
    k : int, optional
        number of nearest neighbors for density estimation

    Notes
    ======
    Kozachenko, L. F. & Leonenko, N. N. 1987 Sample estimate of entropy
    of a random vector. Probl. Inf. Transm. 23, 95-101.
    See also: Evans, D. 2008 A computationally efficient estimator for
    mutual information, Proc. R. Soc. A 464 (2093), 1203-1215.
    and:
    Kraskov A, Stogbauer H, Grassberger P. (2004). Estimating mutual
    information. Phys Rev E 69(6 Pt 2):066138.
    '''
    # Distance to kth nearest neighbor
    r = nearest_distances(X, k)  # Euclidean distances (not squared)
    n, d = X.shape
    volume_unit_ball = (pi**(.5*d)) / gamma(.5*d + 1)
    '''
    F. Perez-Cruz, (2008). Estimation of Information Theoretic Measures
    for Continuous Random Variables. Advances in Neural Information
    Processing Systems 21 (NIPS). Vancouver (Canada), December.

    return d*mean(log(r))+log(volume_unit_ball)+log(n-1)-log(k)
    '''
    return (d*np.mean(np.log(r + np.finfo(X.dtype).eps))
            + np.log(volume_unit_ball) + psi(n) - psi(k))

def mutual_information(variables, k=1):
    '''
    Returns the mutual information between any number of variables.
    Each variable is a matrix X = array(n_samples, n_features)
    where
      n = number of samples
      dx, dy = number of dimensions

    Optionally, the following keyword argument can be specified:
      k = number of nearest neighbors for density estimation

    Example: mutual_information((X, Y)), mutual_information((X, Y, Z), k=5)
    '''
    if len(variables) < 2:
        raise AttributeError(
            "Mutual information must involve at least 2 variables")
    all_vars = np.hstack(variables)
    return (sum([entropy(X, k=k) for X in variables])
            - entropy(all_vars, k=k))

def mutual_information_2d(x, y, sigma=1, normalized=False):
    """
    Computes (normalized) mutual information between two 1D variates from a
    joint histogram.

    Parameters
    ----------
    x : 1D array
        first variable
    y : 1D array
        second variable
    sigma : float
        sigma for Gaussian smoothing of the joint histogram

    Returns
    -------
    nmi : float
        the computed similarity measure
    """
    bins = (256, 256)

    jh = np.histogram2d(x, y, bins=bins)[0]

    # smooth the joint histogram with a gaussian filter of given sigma
    ndimage.gaussian_filter(jh, sigma=sigma, mode='constant',
                            output=jh)

    # compute marginal histograms
    jh = jh + EPS
    sh = np.sum(jh)
    jh = jh / sh
    s1 = np.sum(jh, axis=0).reshape((-1, jh.shape[0]))
    s2 = np.sum(jh, axis=1).reshape((jh.shape[1], -1))

    # Normalised Mutual Information of:
    # Studholme, Hill & Hawkes (1998).
    # "A normalized entropy measure of 3-D medical image alignment".
    # in Proc. Medical Imaging 1998, vol. 3338, San Diego, CA, pp. 132-143.
    if normalized:
        mi = ((np.sum(s1 * np.log(s1)) + np.sum(s2 * np.log(s2)))
              / np.sum(jh * np.log(jh))) - 1
    else:
        mi = (np.sum(jh * np.log(jh)) - np.sum(s1 * np.log(s1))
              - np.sum(s2 * np.log(s2)))

    return mi

###############################################################################
# Tests

def test_entropy():
    # Testing against correlated Gaussian variables
    # (analytical results are known)
    # Entropy of a 3-dimensional gaussian variable
    rng = np.random.RandomState(0)
    n = 50000
    d = 3
    P = np.array([[1, 0, 0], [0, 1, .5], [0, 0, 1]])
    C = np.dot(P, P.T)
    Y = rng.randn(d, n)
    X = np.dot(P, Y)
    H_th = entropy_gaussian(C)
    H_est = entropy(X.T, k=5)
    # Our estimated entropy should always be less than the actual one
    # (entropy estimation undershoots) but not too much
    np.testing.assert_array_less(H_est, H_th)
    np.testing.assert_array_less(.9*H_th, H_est)

def test_mutual_information():
    # Mutual information between two correlated gaussian variables
    # Entropy of a 2-dimensional gaussian variable
    n = 50000
    rng = np.random.RandomState(0)
    #P = np.random.randn(2, 2)
    P = np.array([[1, 0], [0.5, 1]])
    C = np.dot(P, P.T)
    U = rng.randn(2, n)
    Z = np.dot(P, U).T
    X = Z[:, 0]
    X = X.reshape(len(X), 1)
    Y = Z[:, 1]
    Y = Y.reshape(len(Y), 1)
    # in nats
    MI_est = mutual_information((X, Y), k=5)
    MI_th = (entropy_gaussian(C[0, 0])
             + entropy_gaussian(C[1, 1])
             - entropy_gaussian(C)
             )
    # Our estimator should undershoot once again: it will undershoot more
    # for the 2D estimation than for the 1D estimation
    print((MI_est, MI_th))
    np.testing.assert_array_less(MI_est, MI_th)
    np.testing.assert_array_less(MI_th, MI_est + .3)

def test_degenerate():
    # Test that our estimators are well-behaved with regards to
    # degenerate solutions
    rng = np.random.RandomState(0)
    x = rng.randn(50000)
    X = np.c_[x, x]
    assert np.isfinite(entropy(X))
    assert np.isfinite(mutual_information((x[:, np.newaxis],
                                           x[:, np.newaxis])))
    assert 2.9 < mutual_information_2d(x, x) < 3.1

def test_mutual_information_2d():
    # Mutual information between two correlated gaussian variables
    # Entropy of a 2-dimensional gaussian variable
    n = 50000
    rng = np.random.RandomState(0)
    #P = np.random.randn(2, 2)
    P = np.array([[1, 0], [.9, .1]])
    C = np.dot(P, P.T)
    U = rng.randn(2, n)
    Z = np.dot(P, U).T
    X = Z[:, 0]
    X = X.reshape(len(X), 1)
    Y = Z[:, 1]
    Y = Y.reshape(len(Y), 1)
    # in nats
    MI_est = mutual_information_2d(X.ravel(), Y.ravel())
    MI_th = (entropy_gaussian(C[0, 0])
             + entropy_gaussian(C[1, 1])
             - entropy_gaussian(C)
             )
    print((MI_est, MI_th))
    # Our estimator should undershoot once again: it will undershoot more
    # for the 2D estimation than for the 1D estimation
    np.testing.assert_array_less(MI_est, MI_th)
    np.testing.assert_array_less(MI_th, MI_est + .2)

if __name__ == '__main__':
    # Run our tests
    test_entropy()
    test_mutual_information()
    test_degenerate()
    test_mutual_information_2d()
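
A minimal usage sketch (assuming the file above is saved as mutual_info.py; all estimates are in nats since the code uses natural logarithms):

    import numpy as np
    from mutual_info import entropy, mutual_information, mutual_information_2d

    rng = np.random.RandomState(0)
    x = rng.randn(10000, 1)
    y = x + 0.5 * rng.randn(10000, 1)                    # y is correlated with x
    print(entropy(x, k=5))                               # differential entropy of x
    print(mutual_information((x, y), k=5))               # k-NN estimate of I(x; y)
    print(mutual_information_2d(x.ravel(), y.ravel()))   # histogram-based estimate
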
@danielhomola commented Jan 15, 2016

Hi Gael,

Thanks a lot for these functions! When I run the test_mutual_information() I get:
-0.136308887598 and 0.111571775657
I get it that the estimator should undershoot, but can MI be negative? I'm trying to use this function to implement the Joint Mutual Information feature selection method:
Data Visualization and Feature Selection: New Algorithms for Nongaussian Data
H. Yang and J. Moody, NIPS (1999)

This method performed best out of many information theoretic filter methods:
Conditional Likelihood Maximisation: A Unifying Framework for Information
Theoretic Feature Selection G. Brown, A. Pocock, M.-J.Zhao, M. Lujan
Journal of Machine Learning Research, 13:27-66 (2012)

C Implementation: https://github.com/Craigacp/FEAST/blob/master/JMI.c

When I try to estimate the joint mutual information of two features with y, i.e. I(X1, X2; Y), where X1 and X2 are two columns of X and Y is the response variable, I get a positive value only for k=1; as soon as I increase the size of the neighborhood, the MI goes negative. I'm only using the MI measure to rank features relative to each other, so if this method can be trusted I don't really care whether it's negative. What do you think?

Thanks a lot,
Daniel
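
A minimal sketch of one way to estimate I(X1, X2; Y) with the mutual_information function above (assuming the gist is saved as mutual_info.py; x1, x2, y are hypothetical 1-D arrays with one entry per sample):

    import numpy as np
    from mutual_info import mutual_information

    # Treat (X1, X2) as a single 2-D variable:
    # I(X1, X2; Y) = H(X1, X2) + H(Y) - H(X1, X2, Y)
    X12 = np.c_[x1, x2]
    jmi = mutual_information((X12, y.reshape(-1, 1)), k=3)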

@gbellesia commented Jun 16, 2016

Thank you for your work. Just one quick comment about the nearest_distances function.
Shouldn't line 31 be "knn = NearestNeighbors(n_neighbors=k+1)" ?

--g

@tagomatech commented Feb 13, 2018

I used GaelVaroquaux's code above with the Boston data, computing the mutual information between each feature and the target (see here).
I get negative mutual information, which is surely wrong.
What's wrong with my implementation? Can you help, please?

@saverymax commented Oct 25, 2018

I noticed two people mentioned computing negative mutual information. I haven't taken a close look at this implementation or the papers referenced, but A Study on Mutual Information-based Feature Selection for Text Categorization @ https://core.ac.uk/download/pdf/11310215.pdf addresses a potential issue.

@JaeDukSeo commented Nov 3, 2018

Reading the comments, the negative MI values make me a bit worried about using this function.

@eKoulier commented Jan 24, 2019

If I use k = [1,2,3,4,5,6] and then I test mutual_info_2d(k,k), I get as a result: 1.7456376329342476.
Shouldn't it be 1 when I test the mutual information between two identical vectors?

@wollmanlab commented Feb 15, 2019

Estimates of mutual information can come out negative due to sampling errors and to violations of the assumption underlying neighbor-based density estimation, namely that the sampling rate is high enough for the point density to be locally uniform around each point.

One approach to overcome this is to bootstrap-estimate the bias: estimate the mutual information at multiple sample sizes and extrapolate to an infinite sample size.
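
A rough sketch of that idea (the 1/N extrapolation form and the function name are just one possible choice; assumes the gist is saved as mutual_info.py and x, y are 1-D arrays):

    import numpy as np
    from mutual_info import mutual_information

    def mi_extrapolated(x, y, k=5, fractions=(0.5, 0.7, 0.9, 1.0), seed=0):
        # Estimate MI on subsamples of increasing size, fit MI(N) ~ MI_inf + a/N,
        # and return the extrapolated value at N -> infinity (the intercept).
        rng = np.random.RandomState(seed)
        sizes, estimates = [], []
        for f in fractions:
            m = int(f * len(x))
            idx = rng.choice(len(x), size=m, replace=False)
            sizes.append(m)
            estimates.append(mutual_information(
                (x[idx].reshape(-1, 1), y[idx].reshape(-1, 1)), k=k))
        slope, intercept = np.polyfit(1.0 / np.asarray(sizes), estimates, 1)
        return intercept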

@JaeDukSeo commented Feb 20, 2019

@eKoulier

If I use k = [1,2,3,4,5,6] and then I test mutual_info_2d(k,k), I get as a result: 1.7456376329342476.
Shouldn't it be 1 when I test the mutual information between two identical vectors?

MI does not have to be 1 when the two variables are identical: I(X; X) equals the entropy of X. For example, the MI of an image with itself is not 1 either.
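
For a discrete variable, the identity is I(X; X) = H(X), which need not be 1. A tiny check in plain numpy, in nats:

    import numpy as np

    p = np.full(6, 1 / 6)          # six equally likely values
    H = -np.sum(p * np.log(p))     # H(X) = log(6) ≈ 1.79 nats
    print(H)                       # I(X; X) = H(X), not 1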

@LuCeHe commented Feb 21, 2019

What if I want the MI of two sets that don't have the same number of samples, like X.shape = (30, 10) and Y.shape = (1, 10)? My impression is that the np.hstack in line 103 is not enough.

Cool code btw.

@Striderl commented Jul 26, 2019

Thank you for your work. Just one quick comment about the nearest_distances function.
Shouldn't line 31 be "knn = NearestNeighbors(n_neighbors=k+1)" ?

--g

I agree. According to the original paper (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.422.5607&rep=rep1&type=pdf), this should calculate the Euclidean distance to the k-th nearest neighbor of x_i in X \ {x_i}.
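
A quick way to see why the +1 is needed: when the training points themselves are queried, the first neighbor returned for each point is the point itself, at distance 0.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    X = np.array([[0.0], [1.0], [3.0]])
    d, _ = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(X)
    print(d)   # first column is all zeros (self-distances); the second column
               # holds the distance to the true 1st nearest neighbor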

@GaelVaroquaux (owner) commented Jul 30, 2019

What if I want the MI of two sets that don't have the same amount of samples? like X.shape = (30, 10) and Y.shape = (1,10). My impression is that the np.hstack in line 103 is not enough.

I don't really see how it would be possible to compute non-parametric mutual information between two sets where there is no correspondence between the samples. By construction, estimating MI requires this correspondence.

@GaelVaroquaux (owner) commented Jul 30, 2019

Shouldn't line 31 be "knn = NearestNeighbors(n_neighbors=k+1)" ?

I agree: using n_neighbors=k gives the (k-1)th nearest neighbor, as the first neighbor returned is the point itself. The code was wrong; maybe that error was introduced by an evolution of scikit-learn. I've fixed it.

@Sandy4321 commented Nov 4, 2019

Hello friends,

Can similarity measures for categorical variables be used as mutual information, as in:
https://towardsdatascience.com/the-search-for-categorical-correlation-a1cf7f1888c9
https://github.com/shakedzy/dython
https://en.wikipedia.org/wiki/Uncertainty_coefficient
https://stackoverflow.com/questions/20892799/using-pandas-calculate-cram%C3%A9rs-coefficient-matrix
https://stackoverflow.com/questions/46498455/categorical-features-correlation/46498792#46498792

Thanks.

PS: It would be very kind of you to share some links explaining why
https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.mutual_info_classif.html#sklearn.feature_selection.mutual_info_classif
(Kozachenko & Leonenko) is good for feature selection.
Why not just calculate the similarity/mutual information between each feature and the target?
Then, if the similarity/mutual information between a given feature and the target is high, that feature is good to use?
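
The per-feature ranking you describe is easy to sketch with the functions above (a rough sketch, assuming the gist is saved as mutual_info.py, X is an (n_samples, n_features) float array and y a 1-D array; note that scoring features one by one ignores redundancy and interactions between features, which is what methods like JMI try to address):

    import numpy as np
    from mutual_info import mutual_information_2d

    def rank_features_by_mi(X, y):
        # Univariate score: MI between each feature and the target
        scores = np.array([mutual_information_2d(X[:, j], y)
                           for j in range(X.shape[1])])
        order = np.argsort(scores)[::-1]    # highest-MI features first
        return order, scores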

@cottrell commented Jan 22, 2020

I've noticed that scaling samples changes the output. I suspect this is a small bug in the scaling somewhere. Will post back if I trace it down.

Compare the following:

In [126]: np.random.seed(12)
     ...: for i in range(20):
     ...:     print(mi.mutual_information([np.random.randn(100, 1), 1 * np.random.randn(100, 1)], k=10))
     ...:
-0.020132950207217615
0.0007764531823655219
-0.10097362008313171
0.008234538277922088
-0.019989602665604345
-0.07712501107587677
0.02317718497197019
-0.01624389693603323
-0.037344436692854366
0.04950941718866808
-0.05049871422680052
-0.028624985257037494
-0.004130417096971595
-0.09652205195754915
0.07427403221566875
-0.0040393263534936885
-0.058616537991666995
0.09200872016468775
0.04512428420750236
-0.07361872725115903

In [127]: np.random.seed(12)
     ...: for i in range(20):
     ...:     print(mi.mutual_information([np.random.randn(100, 1), 100 * np.random.randn(100, 1)], k=10))
     ...:
-2.0021553153700378
-1.7289150378782132
-1.938608386083576
-1.999900260449417
-1.7849647677622498
-1.8527331847233786
-2.0258220171874264
-2.00130782835749
-2.017433687721022
-1.8716648526808015
-2.159052646410464
-2.090519859781173
-2.1565500325401965
-1.9789886099442322
-1.9400487388101713
-2.004367515829771
-1.9796955922060349
-2.016857734129518
-2.1187142194709843
-1.9679714350429087

UPDATE: I think this is just the result of using differential entropy throughout. Probably (not sure yet), but using some copula-style normalization (see pd.rank, for example) is a decent way of getting something invariant to some transformations. It might destroy some structure depending on the use case, though.
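
A sketch of that rank ("copula") normalization, assuming 1-D inputs and the gist saved as mutual_info.py; mapping each variable to its normalized ranks makes the estimate invariant to any monotonic rescaling of the marginals:

    import numpy as np
    from scipy.stats import rankdata
    from mutual_info import mutual_information

    def mi_rank_normalized(x, y, k=10):
        # Replace each variable by its ranks scaled into (0, 1)
        u = rankdata(x) / (len(x) + 1.0)
        v = rankdata(y) / (len(y) + 1.0)
        return mutual_information((u.reshape(-1, 1), v.reshape(-1, 1)), k=k)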

Also, there are a few forks kicking around now. I think I've got one that fixed some small issue with the case of mutual information with itself.

https://gist.github.com/cottrell/770b811c03c846386b19e9c4bd18772f/revisions

I'm wondering if we should try to corral everyone together and make a repo. Viewing diffs is a little less convenient from the gist, though I like the single discussion thread.

@GaelVaroquaux (owner) commented Jan 23, 2020

@AbdolvakilFazli commented Apr 17, 2020

Thank you for the work.

Assume I have a node with features (f1, f2, f3) and another node with features (f1', f2', f3'). How can I find the mutual information between the two?

@Sandy4321 commented Apr 19, 2020

Good question. Really, how would that work?

@maajdl commented Apr 22, 2020

If I use k = [1,2,3,4,5,6] and then I test mutual_info_2d(k,k), I get as a result: 1.7456376329342476.
Shouldn't it be 1 when I test the mutual information between two identical vectors?

I would assume the natural result would be -6*(1/6)*log(1/6) = log(6) = 1.79175946923, assuming each value has the same probability of 1/6.
That looks good, given that it is an estimate.
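
A quick check of that number against the estimator (assuming the gist is saved as mutual_info.py):

    import numpy as np
    from mutual_info import mutual_information_2d

    k = np.array([1., 2., 3., 4., 5., 6.])
    print(mutual_information_2d(k, k))   # ≈ 1.75, as reported above
    print(np.log(6))                     # 1.7918, the exact H(X) for six equally likely values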

@thismartian commented Feb 2, 2021

I noted two errors in this code (based on the reference paper from Kraskov), and fixing them also removed the negative MI values I was getting.

  1. volume_unit_ball is missing a small expression. It should be: (pi**(d/2) / gamma(d/2 + 1) / 2**d)
  2. The volume_unit_ball values for every dimension need to be multiplied together before calculating the log. So line 75, volume_unit_ball = (pi**(.5*d)) / gamma(.5*d + 1), should actually be replaced by something like this:

    volume_unit_ball = 1
    for dim in range(1, d+1):
        volume_unit_ball = volume_unit_ball * (pi**(dim/2) / gamma(dim/2 + 1) / 2**dim)

@cottrell commented Feb 5, 2021

@thismartian I haven't yet found it in another library where it should probably sit. I have created this and added you and @GaelVaroquaux. You should have admin privileges if I did it right.

@GaelVaroquaux (owner) commented Feb 5, 2021

Creating a library for collective maintenance was really the right move. I'll update the gist to point there.

@cottrell commented Feb 5, 2021

volume_unit_ball = 1
for dim in range(1, d+1):
    volume_unit_ball = volume_unit_ball * (pi**(dim/2) / gamma(dim/2 + 1) / 2**dim)

I've copy pasta'd your change into this branch. https://github.com/mutualinfo/mutual_info/tree/thismartian

There are some test failures, but I haven't looked at them yet. I suspect that if you still have it in your head it is an easy test change. I'm never sure what is correct, since it's been a few months since I've used this stuff.

@thismartian commented Feb 6, 2021

Hi cottrell, thanks a lot for this. I figured out what the problem is: the distance used to calculate the entropy should be 2x the distance to the nearest neighbor. I'm not sure I'm doing it right, but I don't seem to have permission to make changes to the file, so perhaps you could try this in the entropy function: return d * np.mean(np.log(2*r + np.finfo(X.dtype).eps)) + np.log(volume_unit_ball) + psi(n) - psi(k)

@cottrell commented Feb 6, 2021

Hi cottrell, thanks a lot for this. I figured out what the problem is: the distance used to calculate the entropy should be 2x the distance to the nearest neighbor. I'm not sure I'm doing it right, but I don't seem to have permission to make changes to the file, so perhaps you could try this in the entropy function: return d * np.mean(np.log(2*r + np.finfo(X.dtype).eps)) + np.log(volume_unit_ball) + psi(n) - psi(k)

Hmmm, I can paste it in and try, but let's try to fix the permissions. Having a look now...

@cottrell commented Feb 6, 2021

Ok, I think I've now added @thismartian and @GaelVaroquaux as admins on the org and the repo. For some reason it only set you up as members initially when I punched them in. Let me know if you can't access it.

@cottrell commented Feb 6, 2021

@thismartian The tests now pass after I pasted the change in. It's still just on that branch if you want to review it. If there is an obvious test that would have failed before your change and passed after, maybe we can add that?

@thismartian commented Feb 9, 2021

@cottrell the changes look fine. I think the entropy test was failing (as expected). Is that what you mean? Sorry it's not clear to me...

@cottrell commented Feb 10, 2021

@cottrell the changes look fine. I think the entropy test was failing (as expected). Is that what you mean? Sorry it's not clear to me...

No, I mean that before the change no tests were failing, but you noticed something was wrong. What is a missing test that would have shown something was wrong?

@thismartian commented Feb 11, 2021

Ah, for me the main indication was just that for some random combinations the mutual info was negative (as you had pointed out earlier too). I think that in the test itself, if you change the covariance matrix to unit variance, it shows that the test fails. Honestly, I am not sure I understand how the covariance matrix is generated in the tests (I need to read about it). What I would do is define a correlation coefficient (r) and a standard deviation (s); based on these two it's possible to have a unit-variance, zero-mean covariance matrix:

    s = 0.6
    r = 0.9
    C = np.array([[0.5*s**2, r*0.5*s**2], [r*0.5*s**2, 0.5*s**2]])
    mean = [0, 0]
    x, y = np.random.multivariate_normal(mean, C, size=n).T
    X = x.reshape(len(x), 1)
    Y = y.reshape(len(y), 1)

@thismartian commented Feb 11, 2021

I was probably not clear, let me try to rephrase:

  • In the mutual info test written by @GaelVaroquaux, the covariance matrix does not have unit variance. If you change it to a unit-variance matrix, the test fails. The Gaussian reference used in the paper is based on a zero-mean, unit-variance covariance matrix.
  • I don't have expertise in statistics, so I don't really understand the approach used in the code to generate the correlated XY pair. I have instead generated the XY pair with a numpy function (previous comment), which I understand better.
@cottrell commented Feb 12, 2021

Thanks @thismartian, I will take a look and try to jam a test in shortly.

@thismartian commented Mar 3, 2021

@cottrell commented Mar 3, 2021

@thismartian Yeah I can't remember where I left it. I think I put some new tests in and have a branch there. I think I was looking at differential entropy in general and how it sort of has a dimensionality issue. The branch should run and we could just merge. Likely I'll dig into it more next time I am looking for entropy of continuous random variables.
