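These snippets come from a Keras sequence-to-sequence English-to-French translation model, shown newest revision first. They assume roughly the following imports and constants; the values below are illustrative assumptions, not taken from the source.

import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
from tensorflow.keras.callbacks import EarlyStopping

BATCH_SIZE = 64       # assumed value
LSTM_NODES = 256      # assumed value
EMBEDDING_SIZE = 100  # assumed value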

# Sanity-check the trained model: translate a random sentence from the
# corpus and compare it against the reference translation.
i = np.random.choice(len(input_sentences))
input_seq = encoder_input_sequences[i:i+1]
translation = translate_sentence(input_seq)
print('Input sentence     : ', input_sentences[i])
print('Actual translation : ', output_sentences[i])
print('Model translation  : ', translation)

def translate_sentence(input_seq):
    # Encode the input sentence into the decoder's initial states.
    states_value = encoder_model.predict(input_seq)
    # Start decoding from the <sos> token.
    target_seq = np.zeros((1, 1))
    target_seq[0, 0] = word2idx_outputs['<sos>']
    eos = word2idx_outputs['<eos>']
    output_sentence = []
    for _ in range(max_out_len):
        output_tokens, h, c = decoder_model.predict([target_seq] + states_value)
        idx = np.argmax(output_tokens[0, 0, :])
        # The original snippet cuts off above; the rest is the standard greedy decode.
        if idx == eos:
            break
        if idx > 0:
            output_sentence.append(idx2word_target[idx])
        target_seq[0, 0] = idx
        states_value = [h, c]
    return ' '.join(output_sentence)

# Reverse lookups: index -> word, for turning predictions back into text.
idx2word_input = {v: k for k, v in word2idx_inputs.items()}
idx2word_target = {v: k for k, v in word2idx_outputs.items()}

# Inference-time decoder: one token per step, with explicit state inputs.
decoder_state_input_h = Input(shape=(LSTM_NODES,))
decoder_state_input_c = Input(shape=(LSTM_NODES,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_inputs_single = Input(shape=(1,))
decoder_inputs_single_x = decoder_embedding(decoder_inputs_single)
decoder_outputs, h, c = decoder_lstm(decoder_inputs_single_x, initial_state=decoder_states_inputs)
decoder_states = [h, c]
# The snippet ends here; decoder_model (used by translate_sentence) is presumably:
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs_single] + decoder_states_inputs,
                      [decoder_outputs] + decoder_states)

# Inference-time encoder: maps an input sequence to its final LSTM states.
encoder_model = Model(encoder_inputs, encoder_states)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.load_weights('seq2seq_eng-fra.h5')

es = EarlyStopping(monitor='val_loss', mode='min', verbose=1)
history = model.fit(
    [encoder_input_sequences, decoder_input_sequences],
    decoder_targets_one_hot,
    batch_size=BATCH_SIZE,
    epochs=20,
    callbacks=[es],
    validation_split=0.1,
)
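
decoder_targets_one_hot is used in model.fit above but never defined in these snippets. A minimal sketch, assuming the padded target-index array is called decoder_output_sequences (a hypothetical name):

# Hypothetical reconstruction: one-hot encode the padded target sequences.
decoder_targets_one_hot = np.zeros(
    (len(input_sentences), max_out_len, num_words_output), dtype='float32')
for i, seq in enumerate(decoder_output_sequences):  # assumed name
    for t, word_idx in enumerate(seq):
        if word_idx > 0:  # index 0 is padding
            decoder_targets_one_hot[i, t, word_idx] = 1.0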

# Compile the full training model (teacher forcing: the decoder sees the
# ground-truth previous token at each step).
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(
    optimizer='rmsprop',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
model.summary()

decoder_inputs = Input(shape=(max_out_len,))
decoder_embedding = Embedding(num_words_output, LSTM_NODES)
decoder_inputs_x = decoder_embedding(decoder_inputs)
decoder_lstm = LSTM(LSTM_NODES, return_sequences=True, return_state=True)
# Seed the decoder with the encoder's final states.
decoder_outputs, _, _ = decoder_lstm(decoder_inputs_x, initial_state=encoder_states)
# Finally, the decoder LSTM output is passed through a dense softmax layer to
# predict a word distribution at each time step.
decoder_dense = Dense(num_words_output, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Encoder: embed the padded input sequence and keep only the final LSTM states.
encoder_inputs = Input(shape=(max_input_len,))
x = embedding_layer(encoder_inputs)
encoder = LSTM(LSTM_NODES, return_state=True)
encoder_outputs, h, c = encoder(x)
encoder_states = [h, c]

# Input-side embedding layer, initialized with pretrained vectors.
embedding_layer = Embedding(num_words, EMBEDDING_SIZE, weights=[embedding_matrix], input_length=max_input_len)
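
embedding_matrix is referenced above but not constructed in these snippets. A plausible construction from pretrained GloVe vectors; the use of GloVe and the file name are assumptions:

# Assumed: build embedding_matrix from a pretrained GloVe file.
embeddings_dict = {}
with open('glove.6B.100d.txt', encoding='utf8') as f:  # assumed file name
    for line in f:
        parts = line.split()
        embeddings_dict[parts[0]] = np.asarray(parts[1:], dtype='float32')

embedding_matrix = np.zeros((num_words, EMBEDDING_SIZE))
for word, idx in word2idx_inputs.items():
    if idx < num_words:
        vector = embeddings_dict.get(word)
        if vector is not None:
            embedding_matrix[idx] = vector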