@grohith327
Created May 20, 2018 20:00
from keras.preprocessing.text import Tokenizer

# Keep only the 2,500 most frequent words; split texts on single spaces.
tokenizer = Tokenizer(num_words=2500, split=' ')
# x is expected to be a list (or other iterable) of raw text strings.
tokenizer.fit_on_texts(x)
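Under the hood, `fit_on_texts` scans the corpus, counts word frequencies, and assigns each word an integer index in descending frequency order (index 1 is the most common word; 0 is reserved for padding). A minimal pure-Python sketch of that idea, not the actual Keras implementation, with hypothetical helper names and sample documents chosen only for illustration:

```python
from collections import Counter

def fit_on_texts(texts, num_words=2500):
    """Build a word -> index map; the most frequent word gets index 1."""
    counts = Counter(w for t in texts for w in t.lower().split(' ') if w)
    ranked = [w for w, _ in counts.most_common()]
    # Indices start at 1; 0 is reserved (as Keras reserves it for padding).
    # Like Keras's num_words, only the top num_words - 1 words are kept.
    return {w: i + 1 for i, w in enumerate(ranked[: num_words - 1])}

def texts_to_sequences(texts, word_index):
    """Replace each known word with its index; unknown words are dropped."""
    return [[word_index[w] for w in t.lower().split(' ') if w in word_index]
            for t in texts]

docs = ["the cat sat", "the dog sat down"]
idx = fit_on_texts(docs, num_words=10)
seqs = texts_to_sequences(docs, idx)
# idx maps "the" -> 1 (most frequent); seqs is a list of integer sequences.
```

A subsequent call to the real `tokenizer.texts_to_sequences(x)` behaves like the second helper: it converts each text into a list of integer indices, silently dropping words outside the top `num_words`.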