
@rishisidhu
Created August 25, 2020 05:09
Getting the word index from a Keras Tokenizer
from tensorflow.keras.preprocessing.text import Tokenizer

# Define some sample sentences
sentences = [
    "One plus one is two!",
    "Two plus two is four!"
]

# Fit the tokenizer on the sentences. Note: num_words caps how many of
# the most frequent words texts_to_sequences will keep, but word_index
# still contains every word seen during fitting.
myTokenizer = Tokenizer(num_words=10)
myTokenizer.fit_on_texts(sentences)
print(myTokenizer.word_index)
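To see what `fit_on_texts` is doing under the hood, here is a minimal pure-Python sketch of how the word index is built: text is lowercased and stripped of punctuation, then words are ranked by descending frequency (ties keep first-seen order), with indices starting at 1 because 0 is reserved for padding. The function name `build_word_index` and the simple regex tokenizer are illustrative choices, not part of the Keras API.

```python
from collections import Counter
import re

def build_word_index(sentences):
    # Approximate Keras Tokenizer defaults: lowercase the text and
    # keep only word characters (Keras filters punctuation).
    words = []
    for s in sentences:
        words.extend(re.findall(r"[a-z0-9']+", s.lower()))
    counts = Counter(words)
    # Counter preserves insertion order and sorted() is stable, so
    # equal-frequency words keep their order of first appearance.
    ranked = sorted(counts, key=lambda w: -counts[w])
    # Indices are 1-based; 0 is reserved for padding sequences.
    return {w: i + 1 for i, w in enumerate(ranked)}

sentences = [
    "One plus one is two!",
    "Two plus two is four!",
]
print(build_word_index(sentences))
# → {'two': 1, 'one': 2, 'plus': 3, 'is': 4, 'four': 5}
```

"two" gets index 1 because it occurs three times, more than any other word; the remaining words are ranked by count and then by first appearance.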