import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def generate_text_seq(model, tokenizer, text_seq_length, seed_text, n_words):
    """Greedily generate n_words of text from seed_text, one word at a time."""
    text = []
    for _ in range(n_words):
        # Encode the current seed and trim it to the model's input length.
        encoded = tokenizer.texts_to_sequences([seed_text])[0]
        encoded = pad_sequences([encoded], maxlen=text_seq_length, truncating='pre')
        # predict_classes() was removed from Keras; take the argmax of the
        # softmax output instead.
        y_predict = int(np.argmax(model.predict(encoded, verbose=0), axis=-1)[0])
        # Map the predicted index back to its word via the reverse lookup.
        predicted_word = tokenizer.index_word.get(y_predict, "")
        seed_text = seed_text + " " + predicted_word
        text.append(predicted_word)
    return " ".join(text)

generate_text_seq(model, tokenizer, 50, seed_text, 100)
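The decoding loop can be exercised without a trained model or TensorFlow by substituting mock objects. The `MockTokenizer` and `MockModel` below are hypothetical stand-ins (not part of Keras) that expose just enough of the real interfaces, a `texts_to_sequences`/`index_word` pair and a `predict` method returning a probability row, to show the argmax-then-reverse-lookup step in isolation:

```python
import numpy as np

class MockTokenizer:
    """Hypothetical stand-in for a fitted keras Tokenizer."""
    def __init__(self):
        self.word_index = {"the": 1, "cat": 2, "sat": 3}
        # Fitted Keras tokenizers also expose this reverse mapping.
        self.index_word = {i: w for w, i in self.word_index.items()}

    def texts_to_sequences(self, texts):
        return [[self.word_index[w] for w in t.split() if w in self.word_index]
                for t in texts]

class MockModel:
    """Deterministic toy model: always predicts the index after the last input."""
    def predict(self, encoded, verbose=0):
        last = int(encoded[0][-1])
        probs = np.zeros((1, 4))
        probs[0, (last % 3) + 1] = 1.0  # one-hot "softmax" row
        return probs

tokenizer = MockTokenizer()
model = MockModel()

seq = [tokenizer.word_index["the"]]
out = []
for _ in range(3):
    # Same decode step as generate_text_seq: argmax, then reverse lookup.
    idx = int(np.argmax(model.predict(np.array([seq])), axis=-1)[0])
    out.append(tokenizer.index_word[idx])
    seq.append(idx)

print(" ".join(out))  # -> cat sat the
```

Looking up `tokenizer.index_word` directly is O(1) per step, versus the O(vocabulary) scan over `tokenizer.word_index.items()` in the original snippet.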