@amankharwal
Created October 5, 2020 03:52
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def generate_text(seed_text, next_words, model, max_sequence_len):
    # Generate `next_words` words by repeatedly feeding the growing seed text
    # back into the model; `tokenizer` is the Tokenizer fitted on the corpus.
    for _ in range(next_words):
        # Convert the current seed text to a sequence of token ids.
        token_list = tokenizer.texts_to_sequences([seed_text])[0]
        # Left-pad so the input matches the length the model expects.
        token_list = pad_sequences([token_list], maxlen=max_sequence_len - 1, padding='pre')
        # Index of the most likely next word.
        predicted = np.argmax(model.predict(token_list, verbose=0), axis=-1)[0]
        output_word = ""
        # Map the predicted index back to its word.
        for word, index in tokenizer.word_index.items():
            if index == predicted:
                output_word = word
                break
        seed_text += " " + output_word
    return seed_text.title()
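
A minimal usage sketch, assuming tokenizer, model, and max_sequence_len come from an earlier training step; the seed phrase and word count below are purely illustrative:

# Example call (hypothetical values): generate five words following the seed phrase.
print(generate_text("deep learning", next_words=5, model=model,
                    max_sequence_len=max_sequence_len))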