@aronwc
Last active April 30, 2024 06:54
Example using GenSim's LDA and sklearn
""" Example using GenSim's LDA and sklearn. """
import numpy as np
from gensim import matutils
from gensim.models.ldamodel import LdaModel
from sklearn import linear_model
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
def print_features(clf, vocab, n=10):
    """ Print sorted list of non-zero features/weights. """
    coef = clf.coef_[0]
    print 'positive features: %s' % (' '.join(['%s/%.2f' % (vocab[j], coef[j]) for j in np.argsort(coef)[::-1][:n] if coef[j] > 0]))
    print 'negative features: %s' % (' '.join(['%s/%.2f' % (vocab[j], coef[j]) for j in np.argsort(coef)[:n] if coef[j] < 0]))
def fit_classifier(X, y, C=0.1):
    """ Fit L1 Logistic Regression classifier. """
    # Smaller C means fewer features selected.
    clf = linear_model.LogisticRegression(penalty='l1', C=C)
    clf.fit(X, y)
    return clf
def fit_lda(X, vocab, num_topics=5, passes=20):
    """ Fit LDA from a scipy CSR matrix (X). """
    print 'fitting lda...'
    return LdaModel(matutils.Sparse2Corpus(X), num_topics=num_topics,
                    passes=passes,
                    id2word=dict([(i, s) for i, s in enumerate(vocab)]))
def print_topics(lda, vocab, n=10):
    """ Print the top words for each topic. """
    topics = lda.show_topics(topics=-1, topn=n, formatted=False)
    for ti, topic in enumerate(topics):
        print 'topic %d: %s' % (ti, ' '.join('%s/%.2f' % (t[1], t[0]) for t in topic))
if __name__ == '__main__':
    # Load data.
    rand = np.random.mtrand.RandomState(8675309)
    cats = ['rec.sport.baseball', 'sci.crypt']
    data = fetch_20newsgroups(subset='train',
                              categories=cats,
                              shuffle=True,
                              random_state=rand)
    vec = CountVectorizer(min_df=10, stop_words='english')
    X = vec.fit_transform(data.data)
    vocab = vec.get_feature_names()
    # Fit classifier.
    clf = fit_classifier(X, data.target)
    print_features(clf, vocab)
    # Fit LDA.
    lda = fit_lda(X, vocab)
    print_topics(lda, vocab)
aronwc commented Jan 3, 2014

Output below. Topics may look better with more iterations.

positive features: clipper/1.49 code/1.23 key/1.04 encryption/0.95 government/0.37 nsa/0.37 chip/0.37 uk/0.36 org/0.23 cryptography/0.23
negative features: baseball/-1.33 game/-0.71 year/-0.61 team/-0.38 edu/-0.27 games/-0.27 players/-0.23 ball/-0.17 season/-0.14 phillies/-0.11
fitting lda...
topic 0: contains/0.06 correct/0.05 cf/0.03 er/0.02 awful/0.02 fenway/0.02 famous/0.02 digitized/0.02 general/0.01 162/0.01
topic 1: fri/0.02 d3/0.02 fresh/0.02 expansion/0.01 electronically/0.01 entire/0.01 bytes/0.01 allegheny/0.01 charge/0.01 creation/0.01
topic 2: cf/0.02 distributed/0.01 disagree/0.01 2ef221/0.01 able/0.00 571/0.00 brothers/0.00 bush/0.00 generate/0.00 equipment/0.00
topic 3: fri/0.02 d3/0.02 gold/0.02 close/0.01 17/0.01 funds/0.01 att/0.01 black/0.01 combination/0.01 exception/0.01
topic 4: biochem/0.03 authentication/0.01 11/0.01 generate/0.01 favorite/0.01 ebrandt/0.01 354/0.01 eric/0.01 convert/0.01 flame/0.01

@juanshishido

This is so useful right now. Thanks.

Also, it's great to see someone else using the 8675309 seed!

@satomacoto

Very helpful! Perhaps you should set documents_columns=False now.

 def fit_lda(X, vocab, num_topics=5, passes=20):
     """ Fit LDA from a scipy CSR matrix (X). """
     print 'fitting lda...'
-    return LdaModel(matutils.Sparse2Corpus(X), num_topics=num_topics,
+    return LdaModel(matutils.Sparse2Corpus(X, documents_columns=False), num_topics=num_topics,
                     passes=passes,
                     id2word=dict([(i, s) for i, s in enumerate(vocab)])) 

X-Wei commented May 26, 2015

In Sparse2Corpus you should set documents_columns=False:
Sparse2Corpus(X, documents_columns=False)
Otherwise the columns of X will be treated as documents, when in fact they are features.

Or just transpose: Sparse2Corpus(X.T)

@orsonadams

Thanks for that, @X-Wei. Transposing works.

vinnitu commented Jan 5, 2018

How do you predict the class for a new document?

@EricSchles

@vinnitu - This is how you do it generally; I'm still trying to figure out how to do it for the feature engineering shown above: https://radimrehurek.com/gensim/models/ldamodel.html

@EricSchles

@vinnitu - I figured it out! It turns out gensim assumes a feature-engineering step, usually bag of words. As you can see in the LdaModel docs - https://radimrehurek.com/gensim/models/ldamodel.html - the bag-of-words representation is used to map a document onto the topic model. So the model isn't really aware of the full corpus, but is only aware of the compressed version, aka the topics. So, you first have to do the feature engineering, converting the document to bag of words via the dictionary representation, which gensim explains how to do: https://radimrehurek.com/gensim/tut1.html

With this feature transformation in mind, you can categorize new documents within the existing model. If you want to add new topics, you need to rerun the LDA on the new corpus, but then you can categorize new documents. Hopefully this is helpful.
