from tensorflow.keras.preprocessing.text import Tokenizer

# Keep the vocab_size most frequent words; words outside the vocabulary map to oov_tok.
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(train_articles)  # build the vocabulary from the training articles
word_index = tokenizer.word_index       # dict mapping each word to its integer index
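
Once the tokenizer is fitted, the usual next step is to turn the raw texts into integer sequences and pad them to a common length. A minimal sketch of that step follows; max_length and the padding settings are assumptions for illustration, not values taken from this snippet.

# Minimal follow-up sketch, assuming max_length and padding settings are placeholders.
from tensorflow.keras.preprocessing.sequence import pad_sequences

max_length = 200  # assumed maximum sequence length

train_sequences = tokenizer.texts_to_sequences(train_articles)   # words -> integer ids
train_padded = pad_sequences(train_sequences, maxlen=max_length,
                             padding='post', truncating='post')  # pad/truncate at the end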