@aravindpai
Last active November 27, 2020 18:56
import numpy as np

# Assumes encoder_model, decoder_model, target_word_index,
# reverse_target_word_index and max_len_summary are defined earlier in the notebook.

def decode_sequence(input_seq):
    # Encode the input as state vectors.
    e_out, e_h, e_c = encoder_model.predict(input_seq)

    # Generate an empty target sequence of length 1.
    target_seq = np.zeros((1, 1))

    # Choose the 'start' word as the first word of the target sequence.
    target_seq[0, 0] = target_word_index['start']

    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict([target_seq] + [e_out, e_h, e_c])

        # Sample a token (greedy: pick the most likely word).
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_token = reverse_target_word_index[sampled_token_index]

        if sampled_token != 'end':
            decoded_sentence += ' ' + sampled_token

            # Exit condition: either hit max length or find the stop word.
            if sampled_token == 'end' or len(decoded_sentence.split()) >= (max_len_summary - 1):
                stop_condition = True

        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1))
        target_seq[0, 0] = sampled_token_index

        # Update internal states.
        e_h, e_c = h, c

    return decoded_sentence
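
For context, a minimal sketch of how this function might be called at inference time. The names x_tokenizer and max_len_text are placeholders for whatever tokenizer and input length the surrounding notebook actually uses; they are not defined in this gist.

    # Hypothetical usage: x_tokenizer and max_len_text stand in for the
    # tokenizer and padded input length used when training the encoder.
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    sample_text = ["your input text here"]
    seq = x_tokenizer.texts_to_sequences(sample_text)
    seq = pad_sequences(seq, maxlen=max_len_text, padding='post')

    print(decode_sequence(seq.reshape(1, max_len_text)))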
@feperessim

The first condition of the exit check (sampled_token == 'end') does not make sense: that branch is only reached when sampled_token != 'end', yet the if tests for sampled_token == 'end', so it can never be true.
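
For illustration, one way to restructure the loop body so the exit check is actually reachable. This is a minimal sketch of the fix the comment above points toward, not the gist author's code; it only reuses names already defined in the gist.

        if sampled_token != 'end':
            decoded_sentence += ' ' + sampled_token

        # Exit condition: hit max length or sample the stop word.
        # Running this check on every iteration means sampling 'end'
        # terminates the loop instead of being silently skipped.
        if sampled_token == 'end' or len(decoded_sentence.split()) >= (max_len_summary - 1):
            stop_condition = True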
