@vijayanandrp (created December 19, 2017)
Tutorial Exercise: Yelp reviews (Solution)

Introduction

This exercise uses a small subset of the data from Kaggle's Yelp Business Rating Prediction competition.

Description of the data:

  • yelp.csv contains the dataset. It is stored in the repository (in the data directory), so there is no need to download anything from the Kaggle website.
  • Each observation (row) in this dataset is a review of a particular business by a particular user.
  • The stars column is the number of stars (1 through 5) assigned by the reviewer to the business. (Higher stars is better.) In other words, it is the rating of the business by the person who wrote the review.
  • The text column is the text of the review.

Goal: Predict the star rating of a review using only the review text.

Tip: After each task, I recommend that you check the shape and the contents of your objects, to confirm that they match your expectations.

# for Python 2: use print only as a function
from __future__ import print_function

Task 1

Read yelp.csv into a pandas DataFrame and examine it.

# read yelp.csv using a relative path
import pandas as pd
path = 'data/yelp.csv'
yelp = pd.read_csv(path)
# examine the shape
yelp.shape
(10000, 10)
# examine the first row
yelp.head(1)
business_id date review_id stars text type user_id cool useful funny
0 9yKzy9PApeiPPOUJEtnvkg 2011-01-26 fWKvX83p0-ka4JS3dc6E5A 5 My wife took me here on my birthday for breakf... review rLtl8ZkDX5vH5nAx9C3q5Q 2 5 0
# examine the class distribution
yelp.stars.value_counts().sort_index()
1     749
2     927
3    1461
4    3526
5    3337
Name: stars, dtype: int64

Task 2

Create a new DataFrame that only contains the 5-star and 1-star reviews.

# filter the DataFrame using an OR condition
yelp_best_worst = yelp[(yelp.stars==5) | (yelp.stars==1)]

# equivalently, use the 'loc' method
yelp_best_worst = yelp.loc[(yelp.stars==5) | (yelp.stars==1), :]
# examine the shape
yelp_best_worst.shape
(4086, 10)

Task 3

Define X and y from the new DataFrame, and then split X and y into training and testing sets, using the review text as the only feature and the star rating as the response.

  • Hint: Keep in mind that X should be a pandas Series (not a DataFrame), since we will pass it to CountVectorizer in the task that follows.
# define X and y
X = yelp_best_worst.text
y = yelp_best_worst.stars
# split X and y into training and testing sets
from sklearn.model_selection import train_test_split  # 'sklearn.cross_validation' in scikit-learn versions before 0.18
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# examine the object shapes
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
(3064,)
(1022,)
(3064,)
(1022,)
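
Note: the 3064/1022 split reflects train_test_split's default test_size of 0.25. If you prefer, you can state it explicitly (an equivalent sketch):

# optional: the same split with the default test size stated explicitly
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)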

Task 4

Use CountVectorizer to create document-term matrices from X_train and X_test.

# import and instantiate CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
# fit and transform X_train into X_train_dtm
X_train_dtm = vect.fit_transform(X_train)
X_train_dtm.shape
(3064, 16825)
# transform X_test into X_test_dtm
X_test_dtm = vect.transform(X_test)
X_test_dtm.shape
(1022, 16825)
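
As an optional sanity check, note that the document-term matrix is stored as a SciPy sparse matrix, which keeps only the non-zero counts; a quick sketch (the density variable is just an illustrative name):

# optional: inspect the sparsity of the document-term matrix
print(type(X_train_dtm))
density = X_train_dtm.nnz / float(X_train_dtm.shape[0] * X_train_dtm.shape[1])
print(density)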

Task 5

Use multinomial Naive Bayes to predict the star rating for the reviews in the testing set, and then calculate the accuracy and print the confusion matrix.

# import and instantiate MultinomialNB
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
# train the model using X_train_dtm
nb.fit(X_train_dtm, y_train)
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
# make class predictions for X_test_dtm
y_pred_class = nb.predict(X_test_dtm)
# calculate accuracy of class predictions
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred_class)
0.91878669275929548
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred_class)
array([[126,  58],
       [ 25, 813]])

Task 6 (Challenge)

Calculate the null accuracy, which is the classification accuracy that could be achieved by always predicting the most frequent class.

  • Hint: Evaluating a classification model explains null accuracy and demonstrates two ways to calculate it, though only one of those ways will work in this case. Alternatively, you can come up with your own method to calculate null accuracy!
# examine the class distribution of the testing set
y_test.value_counts()
5    838
1    184
Name: stars, dtype: int64
# calculate null accuracy
y_test.value_counts().head(1) / y_test.shape[0]
5    0.819961
Name: stars, dtype: float64
# calculate null accuracy manually
838 / float(838 + 184)
0.8199608610567515
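
As the hint suggests, you can also come up with your own method; one option is value_counts with normalize=True:

# optional alternative: normalized value counts give the class frequencies directly
y_test.value_counts(normalize=True).max()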

Task 7 (Challenge)

Browse through the review text of some of the false positives and false negatives. Based on your knowledge of how Naive Bayes works, do you have any ideas about why the model is incorrectly classifying these reviews?

  • Hint: Evaluating a classification model explains the definitions of "false positives" and "false negatives".
  • Hint: Think about what a false positive means in this context, and what a false negative means in this context. What has scikit-learn defined as the "positive class"?
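One optional way to probe these errors before reading the reviews is to inspect the model's predicted probabilities, since Naive Bayes multiplies many per-token likelihoods and tends to be overconfident; a sketch (fp_mask is an illustrative name):

# optional: predicted probability of the 5-star class for each false positive
fp_mask = y_test < y_pred_class
nb.predict_proba(X_test_dtm)[fp_mask.values, 1][:10]  # column 1 corresponds to class 5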
# first 10 false positives (1-star reviews incorrectly classified as 5-star reviews)
X_test[y_test < y_pred_class].head(10)
2175    This has to be the worst restaurant in terms o...
1781    If you like the stuck up Scottsdale vibe this ...
2674    I'm sorry to be what seems to be the lone one ...
9984    Went last night to Whore Foods to get basics t...
3392    I found Lisa G's while driving through phoenix...
8283    Don't know where I should start. Grand opening...
2765    Went last week, and ordered a dozen variety. I...
2839    Never Again,\r\nI brought my Mountain Bike in ...
321     My wife and I live around the corner, hadn't e...
1919                                         D-scust-ing.
Name: text, dtype: object
# false positive: model is reacting to the words "good", "impressive", "nice"
X_test[1781]
"If you like the stuck up Scottsdale vibe this is a good place for you. The food isn't impressive. Nice outdoor seating."
# false positive: model does not have enough data to work with
X_test[1919]
'D-scust-ing.'
# first 10 false negatives (5-star reviews incorrectly classified as 1-star reviews)
X_test[y_test > y_pred_class].head(10)
7148    I now consider myself an Arizonian. If you dri...
4963    This is by far my favourite department store, ...
6318    Since I have ranted recently on poor customer ...
380     This is a must try for any Mani Pedi fan. I us...
5565    I`ve had work done by this shop a few times th...
3448    I was there last week with my sisters and whil...
6050    I went to sears today to check on a layaway th...
2504    I've passed by prestige nails in walmart 100s ...
2475    This place is so great! I am a nanny and had t...
241     I was sad to come back to lai lai's and they n...
Name: text, dtype: object
# false negative: model is reacting to the words "complain", "crowds", "rushing", "pricey", "scum"
X_test[4963]
'This is by far my favourite department store, hands down. I have had nothing but perfect experiences in this store, without exception, no matter what department I\'m in. The shoe SA\'s will bend over backwards to help you find a specific shoe, and the staff will even go so far as to send out hand-written thank you cards to your home address after you make a purchase - big or small. Tim & Anthony in the shoe salon are fabulous beyond words! \r\n\r\nI am not completely sure that I understand why people complain about the amount of merchandise on the floor or the lack of crowds in this store. Frankly, I would rather not be bombarded with merchandise and other people. One of the things I love the most about Barney\'s is not only the prompt attention of SA\'s, but the fact that they aren\'t rushing around trying to help 35 people at once. The SA\'s at Barney\'s are incredibly friendly and will stop to have an actual conversation, regardless or whether you are purchasing something or not. I have also never experienced a "high pressure" sale situation here.\r\n\r\nAll in all, Barneys is pricey, and there is no getting around it. But, um, so is Neiman\'s and that place is a crock. Anywhere that ONLY accepts American Express or their charge card and then treats you like scum if you aren\'t carrying neither is no place that I want to spend my hard earned dollars. Yay Barneys!'

Task 8 (Challenge)

Calculate which 10 tokens are the most predictive of 5-star reviews, and which 10 tokens are the most predictive of 1-star reviews.

  • Hint: Naive Bayes automatically counts the number of times each token appears in each class, as well as the number of observations in each class. You can access these counts via the feature_count_ and class_count_ attributes of the Naive Bayes model object.
# store the vocabulary of X_train
X_train_tokens = vect.get_feature_names()  # in scikit-learn 1.0+, use vect.get_feature_names_out()
len(X_train_tokens)
16825
# first row is one-star reviews, second row is five-star reviews
nb.feature_count_.shape
(2, 16825)
# store the number of times each token appears across each class
one_star_token_count = nb.feature_count_[0, :]
five_star_token_count = nb.feature_count_[1, :]
# create a DataFrame of tokens with their separate one-star and five-star counts
tokens = pd.DataFrame({'token':X_train_tokens, 'one_star':one_star_token_count, 'five_star':five_star_token_count}).set_index('token')
# add 1 to one-star and five-star counts to avoid dividing by 0
tokens['one_star'] = tokens.one_star + 1
tokens['five_star'] = tokens.five_star + 1
# first number is one-star reviews, second number is five-star reviews
nb.class_count_
array([  565.,  2499.])
# convert the one-star and five-star counts into frequencies
tokens['one_star'] = tokens.one_star / nb.class_count_[0]
tokens['five_star'] = tokens.five_star / nb.class_count_[1]
# calculate the ratio of five-star to one-star for each token
tokens['five_star_ratio'] = tokens.five_star / tokens.one_star
# sort the DataFrame by five_star_ratio (descending order), and examine the first 10 rows
# note: use sort() instead of sort_values() for pandas 0.16.2 and earlier
tokens.sort_values('five_star_ratio', ascending=False).head(10)
             five_star  one_star  five_star_ratio
token
fantastic     0.077231  0.003540        21.817727
perfect       0.098039  0.005310        18.464052
yum           0.024810  0.001770        14.017607
favorite      0.138055  0.012389        11.143029
outstanding   0.019608  0.001770        11.078431
brunch        0.016807  0.001770         9.495798
gem           0.016006  0.001770         9.043617
mozzarella    0.015606  0.001770         8.817527
pasty         0.015606  0.001770         8.817527
amazing       0.185274  0.021239         8.723323
# sort the DataFrame by five_star_ratio (ascending order), and examine the first 10 rows
tokens.sort_values('five_star_ratio', ascending=True).head(10)
                five_star  one_star  five_star_ratio
token
staffperson        0.0004  0.030088         0.013299
refused            0.0004  0.024779         0.016149
disgusting         0.0008  0.042478         0.018841
filthy             0.0004  0.019469         0.020554
unprofessional     0.0004  0.015929         0.025121
unacceptable       0.0004  0.015929         0.025121
acknowledge        0.0004  0.015929         0.025121
ugh                0.0008  0.030088         0.026599
fuse               0.0004  0.014159         0.028261
boca               0.0004  0.014159         0.028261
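
As an optional cross-check, MultinomialNB exposes its smoothed per-class token log probabilities via the feature_log_prob_ attribute; the difference between its two rows gives a related ranking (not identical to the one above, since it normalizes by total token count rather than document count):

# optional: rank tokens by the model's own smoothed log-probability ratio
import numpy as np
log_ratio = nb.feature_log_prob_[1, :] - nb.feature_log_prob_[0, :]
[X_train_tokens[i] for i in np.argsort(log_ratio)[-10:]]  # most 5-star-predictive tokens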

Task 9 (Challenge)

Up to this point, we have framed this as a binary classification problem by only considering the 5-star and 1-star reviews. Now, let's repeat the model building process using all reviews, which makes this a 5-class classification problem.

Here are the steps:

  • Define X and y using the original DataFrame. (y should contain 5 different classes.)
  • Split X and y into training and testing sets.
  • Create document-term matrices using CountVectorizer.
  • Calculate the testing accuracy of a Multinomial Naive Bayes model.
  • Compare the testing accuracy with the null accuracy, and comment on the results.
  • Print the confusion matrix, and comment on the results. (This Stack Overflow answer explains how to read a multi-class confusion matrix.)
  • Print the classification report, and comment on the results. If you are unfamiliar with the terminology it uses, research the terms, and then try to figure out how to calculate these metrics manually from the confusion matrix!
# define X and y using the original DataFrame
X = yelp.text
y = yelp.stars
# check that y contains 5 different classes
y.value_counts().sort_index()
1     749
2     927
3    1461
4    3526
5    3337
Name: stars, dtype: int64
# split X and y into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# create document-term matrices using CountVectorizer
X_train_dtm = vect.fit_transform(X_train)
X_test_dtm = vect.transform(X_test)
# fit a Multinomial Naive Bayes model
nb.fit(X_train_dtm, y_train)
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
# make class predictions
y_pred_class = nb.predict(X_test_dtm)
# calculate the accuracy
metrics.accuracy_score(y_test, y_pred_class)
0.47120000000000001
# calculate the null accuracy
y_test.value_counts().head(1) / y_test.shape[0]
4    0.3536
Name: stars, dtype: float64

Accuracy comments: At first glance, 47% accuracy does not seem very good, since it is not much higher than the null accuracy. However, I would consider it quite impressive, given that humans would also have a hard time precisely identifying the star rating of many of these reviews.

# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred_class)
array([[ 55,  14,  24,  65,  27],
       [ 28,  16,  41, 122,  27],
       [  5,   7,  35, 281,  37],
       [  7,   0,  16, 629, 232],
       [  6,   4,   6, 373, 443]])

Confusion matrix comments:

  • Nearly all 4-star and 5-star reviews are classified as 4 or 5 stars, but the model has difficulty distinguishing between those two classes.
  • 1-star, 2-star, and 3-star reviews are most commonly classified as 4 stars, probably because 4-star is the predominant class in the training data.
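These patterns are easier to see if you normalize each row of the confusion matrix so that it sums to 1 (an optional sketch; cm is an illustrative name):

# optional: for each true class, show the fraction assigned to each predicted class
import numpy as np
cm = metrics.confusion_matrix(y_test, y_pred_class)
np.round(cm.astype(float) / cm.sum(axis=1, keepdims=True), 2)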
# print the classification report
print(metrics.classification_report(y_test, y_pred_class))
             precision    recall  f1-score   support

          1       0.54      0.30      0.38       185
          2       0.39      0.07      0.12       234
          3       0.29      0.10      0.14       365
          4       0.43      0.71      0.53       884
          5       0.58      0.53      0.55       832

avg / total       0.46      0.47      0.43      2500

Precision answers the question: "When a given class is predicted, how often are those predictions correct?" To calculate the precision for class 1, for example, you divide 55 by the sum of the first column of the confusion matrix.

# manually calculate the precision for class 1
precision = 55 / float(55 + 28 + 5 + 7 + 6)
print(precision)
0.544554455446

Recall answers the question: "When a given class is the true class, how often is that class predicted?" To calculate the recall for class 1, for example, you divide 55 by the sum of the first row of the confusion matrix.

# manually calculate the recall for class 1
recall = 55 / float(55 + 14 + 24 + 65 + 27)
print(recall)
0.297297297297

F1 score is the harmonic mean of precision and recall.

# manually calculate the F1 score for class 1
f1 = 2 * (precision * recall) / (precision + recall)
print(f1)
0.384615384615

Support answers the question: "How many observations exist for which a given class is the true class?" To calculate the support for class 1, for example, you sum the first row of the confusion matrix.

# manually calculate the support for class 1
support = 55 + 14 + 24 + 65 + 27
print(support)
185

Classification report comments:

  • Class 1 has low recall, meaning that the model has a hard time detecting the 1-star reviews, but high precision, meaning that when the model predicts a review is 1-star, it's usually correct.
  • Class 5 has high recall and precision, probably because 5-star reviews have polarized language, and because the model has a lot of observations to learn from.
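
As an optional cross-check on the manual calculations above, scikit-learn can return these per-class metrics as arrays:

# optional: per-class precision, recall, F1, and support as arrays
p, r, f, s = metrics.precision_recall_fscore_support(y_test, y_pred_class)
print(p[0], r[0], f[0], s[0])  # class 1 values, matching the manual calculations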