import numpy as np
import pdb

from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture as GMM


def fisher_vector(xx, gmm):
    """Computes the Fisher vector on a set of descriptors.

    Parameters
    ----------
    xx: array_like, shape (N, D) or (D, )
        The set of descriptors.

    gmm: instance of sklearn.mixture.GaussianMixture
        Gaussian mixture model of the descriptors.

    Returns
    -------
    fv: array_like, shape (K + 2 * D * K, )
        Fisher vector (derivatives with respect to the mixing weights, means
        and variances) of the given descriptors.

    Reference
    ---------
    J. Krapac, J. Verbeek, F. Jurie. Modeling Spatial Layout with Fisher
    Vectors for Image Categorization. In ICCV, 2011.
    http://hal.inria.fr/docs/00/61/94/03/PDF/final.r1.pdf

    """
    xx = np.atleast_2d(xx)
    N = xx.shape[0]

    # Compute posterior probabilities (responsibilities), shape N x K.
    Q = gmm.predict_proba(xx)

    # Compute the sufficient statistics of the descriptors.
    Q_sum = np.sum(Q, 0)[:, np.newaxis] / N
    Q_xx = np.dot(Q.T, xx) / N
    Q_xx_2 = np.dot(Q.T, xx ** 2) / N

    # Compute derivatives with respect to mixing weights, means and variances.
    d_pi = Q_sum.squeeze() - gmm.weights_
    d_mu = Q_xx - Q_sum * gmm.means_
    d_sigma = (
        - Q_xx_2
        - Q_sum * gmm.means_ ** 2
        + Q_sum * gmm.covariances_
        + 2 * Q_xx * gmm.means_)

    # Merge the derivatives into a single vector.
    return np.hstack((d_pi, d_mu.flatten(), d_sigma.flatten()))


def main():
    # Short demo on synthetic data.
    K = 64
    N = 1000

    xx, _ = make_classification(n_samples=N)
    xx_tr, xx_te = xx[:-100], xx[-100:]

    gmm = GMM(n_components=K, covariance_type='diag')
    gmm.fit(xx_tr)

    fv = fisher_vector(xx_te, gmm)
    # Drop into the debugger to inspect the resulting Fisher vector.
    pdb.set_trace()


if __name__ == '__main__':
    main()
@iakash-1326 - It is not wrong (at least not regarding what you mentioned). In your link, they only use the derivatives with respect to the means and sigmas. In the original paper, they also use the derivatives with respect to the weights. Another paper mentions that the derivatives with respect to the weights don't contribute much; hence the difference. What currently bothers me is the sign of the derivative with respect to sigma. I am not sure it's correct (it should be multiplied by -1), but I am still checking.
Sorry for the late reply – unfortunately, Gist doesn't trigger a notification when a comment is posted.

Regarding the dimensionality (2KD vs. K + 2DK) that @iakash2604 asked about: as @sitzikbs already explained, my implementation also includes the derivatives with respect to the mixing weights, hence the additional K values. My code is based on equations 15–17 of (Krapac et al., 2011).

Regarding the sign of the derivatives with respect to the variances: I believe the current implementation uses the correct sign (if you are not convinced, I can go into more details). But I want to point out that, compared to the paper, the derivatives with respect to the variances miss a scaling by 0.5. From a practical point of view, this scaling doesn't matter, since the Fisher vectors are standardized (zero mean, unit variance across each dimension) in order to approximate the transformation with the inverse of the Fisher information matrix; see section 3.5 of (Krapac et al., 2011). This standardization step removes the scaling of each dimension.

Also, do not forget to L2 and power normalize the Fisher vectors before feeding them into the classifier; it has been shown that this visibly improves performance, see table 1 in (Sanchez et al., 2013). Here is a code example of how the Fisher vectors are prepared for classification by normalizing and building the corresponding kernel matrices.
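For reference, here is a minimal sketch of that preparation step (not the exact code from the linked example): it power and L2 normalizes the Fisher vectors and builds linear kernel matrices for a classifier with a precomputed kernel. The arrays fvs_tr and fvs_te are illustrative placeholders for the train and test Fisher vectors.

import numpy as np

def power_normalize(fvs, alpha=0.5):
    # Signed power normalization: sign(x) * |x| ** alpha.
    return np.sign(fvs) * np.abs(fvs) ** alpha

def l2_normalize(fvs, eps=1e-12):
    # Scale each Fisher vector to unit L2 norm.
    norms = np.linalg.norm(fvs, axis=1, keepdims=True)
    return fvs / (norms + eps)

# Illustrative placeholders: one Fisher vector per image (K = 64, D = 20).
fvs_tr = np.random.randn(50, 64 + 2 * 64 * 20)
fvs_te = np.random.randn(10, 64 + 2 * 64 * 20)

fvs_tr = l2_normalize(power_normalize(fvs_tr))
fvs_te = l2_normalize(power_normalize(fvs_te))

# Linear kernel matrices, e.g. for sklearn.svm.SVC(kernel='precomputed').
kernel_tr = fvs_tr @ fvs_tr.T   # train x train
kernel_te = fvs_te @ fvs_tr.T   # test x train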
What is the Q_sum?
Hi @vanetoj!

The matrix Q holds the posterior probabilities (responsibilities) returned by gmm.predict_proba: Q[n, k] is the probability that the n-th descriptor was generated by the k-th Gaussian component.

To obtain Q_sum, we sum these posteriors over the N descriptors and divide by N, so Q_sum[k] is the soft proportion of descriptors assigned to component k. Does this help?
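For concreteness, here is a small self-contained sketch (with made-up sizes, not from the original gist) showing that Q_sum is just the average posterior per component, i.e. the soft fraction of descriptors assigned to each Gaussian:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
xx = rng.randn(500, 8)  # N = 500 descriptors of dimension D = 8

gmm = GaussianMixture(n_components=4, covariance_type='diag').fit(xx)

Q = gmm.predict_proba(xx)          # N x K posteriors; each row sums to 1
Q_sum = np.sum(Q, 0) / Q.shape[0]  # K soft proportions

print(Q_sum)        # proportion of descriptors softly assigned to each component
print(Q_sum.sum())  # 1.0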
Yes, thank you!
Hello, what are the equation numbers of d_mu, d_pi and d_sigma in the paper you mentioned? Because I could not understand how the d_sigma equation is derived from the paper.
Hello @khizerali! The code implements equations 15–17 from (Krapac et al., 2011). Specifically, d_sigma corresponds to equation 17, the gradient with respect to the variances, which (up to the 0.5 factor and the per-dimension scalings that the standardization absorbs, as discussed above) is proportional to

    sum_n q_nk * (sigma_k^2 - (x_n - mu_k)^2).

Since

    (x_n - mu_k)^2 = x_n^2 - 2 * x_n * mu_k + mu_k^2,

and by further expanding the terms we get

    sum_n q_nk * sigma_k^2 - sum_n q_nk * x_n^2 + 2 * mu_k * sum_n q_nk * x_n - mu_k^2 * sum_n q_nk.

The code is a vectorized version of the above formula that also averages over the set of N descriptors, which gives exactly

    d_sigma = - Q_xx_2 - Q_sum * gmm.means_ ** 2 + Q_sum * gmm.covariances_ + 2 * Q_xx * gmm.means_.

I hope this is somewhat clear.
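As a sanity check (my own illustration, not part of the original thread), the vectorized d_sigma can be compared numerically against an explicit loop over the expanded per-descriptor sum:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(1)
xx = rng.randn(200, 5)
gmm = GaussianMixture(n_components=3, covariance_type='diag').fit(xx)

N = xx.shape[0]
Q = gmm.predict_proba(xx)
Q_sum = np.sum(Q, 0)[:, np.newaxis] / N
Q_xx = np.dot(Q.T, xx) / N
Q_xx_2 = np.dot(Q.T, xx ** 2) / N

# Vectorized version, as in fisher_vector above.
d_sigma = (- Q_xx_2 - Q_sum * gmm.means_ ** 2
           + Q_sum * gmm.covariances_ + 2 * Q_xx * gmm.means_)

# Explicit sum over descriptors of q_nk * (sigma_k^2 - (x_n - mu_k)^2), averaged over N.
d_sigma_loop = np.zeros_like(d_sigma)
for n in range(N):
    d_sigma_loop += Q[n][:, np.newaxis] * (gmm.covariances_ - (xx[n] - gmm.means_) ** 2)
d_sigma_loop /= N

print(np.allclose(d_sigma, d_sigma_loop))  # True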
@danoneata, thanks for your explanation. Now I understand how this equation is formed. But why isn't the inverse of gmm.covariances_ used in the d_mu equation, as equation 16 suggests?
Yes, that's a valid observation, @khizerali! The reason is similar to what I've previously explained when motivating the missing 0.5 factor in d_sigma: the Fisher vectors are standardized before classification, and that step removes any per-dimension scaling, including the inverse variances.
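To make this concrete, here is a tiny illustration (my own, with made-up data) that any positive per-dimension rescaling, such as multiplying by the inverse variances or by 0.5, is cancelled by the standardization step:

import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(3)
fvs = rng.randn(100, 6)                  # illustrative Fisher vectors
scales = rng.uniform(0.1, 10.0, size=6)  # arbitrary positive per-dimension scaling

z1 = StandardScaler().fit_transform(fvs)
z2 = StandardScaler().fit_transform(fvs * scales)

print(np.allclose(z1, z2))  # True: the scaling is absorbed by the standardization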
Thanks for the implementation. I just realized there is no Fisher information matrix in your implementation. However, in the paper "Fisher Kernels on Visual Vocabularies for Image Categorization" the authors normalize the gradients with the Fisher information matrix F. So isn't the performance of the FV degraded if you are not using the Fisher information matrix F?
Hello @sobhanhemati! You are right, my code doesn't include the normalization with the Fisher information matrix. In practice, we usually approximate this normalization by standardizing the Fisher vectors (scaling to zero mean and unit variance); the implementation will look something along these lines:

from sklearn.preprocessing import StandardScaler

fvs = np.vstack([fisher_vector(get_descs(img), gmm) for img in imgs])
scaler = StandardScaler()
fvs = scaler.fit(fvs).transform(fvs)

Standardizing the Fisher vectors corresponds to using a diagonal approximation of the sample covariance matrix of the Fisher vectors. For more information, please check section 3.5 in (Krapac et al., 2011); for an empirical evaluation of the performance, see page 9 (approximate FIM vs. empirical FIM) in (Sanchez et al., 2013), which reports image classification accuracies for both variants (the higher the values, the better).

Hope this helps!

P.S.: If the dimensionality of your data allows, you can also estimate the full sample covariance matrix (which is equivalent to whitening the Fisher vectors).
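If you go down that route, one rough way to whiten the Fisher vectors is PCA whitening, which decorrelates the dimensions and scales them to unit variance; a sketch under these assumptions (with random placeholder data):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(2)
fvs = rng.randn(300, 50)  # illustrative stack of Fisher vectors

# whiten=True rescales the projected data so each component has unit variance,
# i.e. it applies the inverse square root of the (diagonalized) sample covariance.
whitener = PCA(whiten=True).fit(fvs)
fvs_white = whitener.transform(fvs)

# The sample covariance of the whitened vectors is (close to) the identity.
print(np.allclose(np.cov(fvs_white, rowvar=False),
                  np.eye(fvs_white.shape[1]), atol=1e-6))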
Thank you for the clarification.
@sobhanhemati Equations 16–18 from (Sanchez et al., 2013) give the Fisher vectors with the analytical approximation of the Fisher information matrix built in; hence, you can modify the computation of d_pi, d_mu and d_sigma as follows:

# at line 43, replacing the original derivative computations
s = np.sqrt(gmm.weights_)[:, np.newaxis]
d_pi = (Q_sum.squeeze() - gmm.weights_) / s.squeeze()
d_mu = (Q_xx - Q_sum * gmm.means_) * np.sqrt(gmm.covariances_) ** -1 / s
d_sigma = - (
    - Q_xx_2
    - Q_sum * gmm.means_ ** 2
    + Q_sum * gmm.covariances_
    + 2 * Q_xx * gmm.means_) / (s * np.sqrt(2))

Note that I haven't tested this implementation, so you might want to double check it. I would also suggest trying both methods for estimating the diagonal Fisher information matrix and seeing which one works better for you; Sanchez et al. compare the two approximations in their paper.

Finally, do not forget to L2 and power normalize the Fisher vectors: these transformations yield much more substantial improvements (about 6-7 percentage points each) than the choice of approximation for the Fisher information matrix (see Table 1 in Sanchez et al., 2013).
Thank you so much for the comprehensive answer.
Thanks for the code. I just have one doubt: based on the derivation of a Fisher vector for an image (as given on the page linked here), a Fisher vector should have a dimension of only 2 * K * D and not K + 2 * D * K. Can you please look into it? I may be wrong. Thanks again.