@ChunML
Last active May 6, 2019 05:26
import tensorflow as tf

# Encoder, Decoder, MODEL_SIZE, en_tokenizer, fr_tokenizer and data_fr_in
# are assumed to be defined in the earlier steps of the tutorial.
H = 2           # number of attention heads
NUM_LAYERS = 2  # number of encoder/decoder layers

en_vocab_size = len(en_tokenizer.word_index) + 1
encoder = Encoder(en_vocab_size, MODEL_SIZE, NUM_LAYERS, H)

# Two identical padded source sequences (0 is the padding token)
en_sequence_in = tf.constant([[1, 2, 3, 4, 6, 7, 8, 0, 0, 0],
                              [1, 2, 3, 4, 6, 7, 8, 0, 0, 0]])
encoder_output = encoder(en_sequence_in)

print('Input vocabulary size', en_vocab_size)
print('Encoder input shape', en_sequence_in.shape)
print('Encoder output shape', encoder_output.shape)

fr_vocab_size = len(fr_tokenizer.word_index) + 1
max_len_fr = data_fr_in.shape[1]
decoder = Decoder(fr_vocab_size, MODEL_SIZE, NUM_LAYERS, H)

# Two identical padded target sequences; note the target length (14)
# need not match the source length (10)
fr_sequence_in = tf.constant([[1, 2, 3, 4, 5, 6, 7, 0, 0, 0, 0, 0, 0, 0],
                              [1, 2, 3, 4, 5, 6, 7, 0, 0, 0, 0, 0, 0, 0]])
decoder_output = decoder(fr_sequence_in, encoder_output)

print('Target vocabulary size', fr_vocab_size)
print('Decoder input shape', fr_sequence_in.shape)
print('Decoder output shape', decoder_output.shape)
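A note on the shapes being checked: the decoder's cross-attention attends from the target positions (queries) over the encoder outputs (keys/values), so its output keeps the target sequence length (14) even though the source is length 10. The sketch below illustrates this shape behavior with a plain NumPy scaled dot-product attention; it is a minimal stand-in, not the tutorial's actual `Decoder` implementation, and the dimension sizes (`model_size = 128`) are assumptions for illustration.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """q: (batch, len_q, d); k, v: (batch, len_k, d) -> (batch, len_q, d)."""
    d = q.shape[-1]
    # Similarity of every query position to every key position
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)      # (batch, len_q, len_k)
    # Softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                  # (batch, len_q, d)

# Cross-attention: target-length queries over source-length encoder outputs
batch, len_fr, len_en, model_size = 2, 14, 10, 128
q = np.random.randn(batch, len_fr, model_size)          # decoder states
enc = np.random.randn(batch, len_en, model_size)        # encoder outputs
out = scaled_dot_product_attention(q, enc, enc)
print(out.shape)  # (2, 14, 128) -- target length preserved
```

This is why the decoder output above has shape `(2, 14, fr_vocab_size)` while the encoder output has shape `(2, 10, MODEL_SIZE)`.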