from tensorflow.keras.preprocessing.text import Tokenizer

# Toy corpus to tokenize.
sentences = [
    'I eat chicken',
    'I do not eat fish',
    'Did you eat fish?'
]

# Keep the 100 most frequent words; unseen words map to the <OOV> token.
tokenizer = Tokenizer(num_words=100, oov_token="<OOV>")

# Build the vocabulary (text is lowercased and punctuation is stripped by default).
tokenizer.fit_on_texts(sentences)

# word_index maps each word to an integer, ordered by frequency;
# the <OOV> token always gets index 1.
word_index = tokenizer.word_index
print(word_index)
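
# Expected output for this corpus (ties broken by order of appearance):
# {'<OOV>': 1, 'eat': 2, 'i': 3, 'fish': 4, 'chicken': 5, 'do': 6, 'not': 7, 'did': 8, 'you': 9}

# Follow-up sketch (not part of the original gist): texts_to_sequences()
# maps new sentences to integer sequences using the fitted vocabulary;
# any word not seen during fitting falls back to the <OOV> index (1).
test_sentences = ['I eat noodles']  # 'noodles' is out-of-vocabulary
sequences = tokenizer.texts_to_sequences(test_sentences)
print(sequences)  # [[3, 2, 1]] -> 'i', 'eat', '<OOV>'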