@abhishek-shrm
Last active September 22, 2020 13:12
from keras.preprocessing.text import Tokenizer
# Instantiate the Tokenizer
tokenizer = Tokenizer()
# Build the word index (vocabulary) from the cleaned training texts
tokenizer.fit_on_texts(df_train['cleaned'])
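To illustrate what `fit_on_texts` produces without the `df_train` DataFrame (which is not shown in this gist), here is a minimal sketch using a small hand-made list of texts. After fitting, `tokenizer.word_index` maps each word to an integer ranked by frequency, and `texts_to_sequences` converts texts into lists of those integers:

```python
from keras.preprocessing.text import Tokenizer

# Stand-in for df_train['cleaned']: a list of already-cleaned strings
texts = ["the cat sat", "the dog sat on the mat"]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)

# 'the' is the most frequent word, so it receives index 1;
# less frequent words get higher indices
word_index = tokenizer.word_index

# Convert the texts into sequences of word indices
sequences = tokenizer.texts_to_sequences(texts)
```

These integer sequences are what you would typically pad to a fixed length (e.g. with `pad_sequences`) before feeding them to an embedding layer.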