
@ChunML
Last active April 29, 2019 08:14
def call(self, sequence, padding_mask):
    # padding_mask has the same shape as the input sequence.
    # It is also used in the Decoder, so it is created outside the Encoder.
    embed_out = self.embedding(sequence)
    embed_out += pes[:sequence.shape[1], :]

    sub_in = embed_out
    for i in range(self.num_layers):
        # Self-attention sub-layer: attention, residual connection, layer norm
        sub_out = self.attention[i](sub_in, sub_in, padding_mask)
        sub_out = sub_in + sub_out
        sub_out = self.attention_norm[i](sub_out)

        # Feed-forward sub-layer: two dense layers, residual connection, layer norm
        ffn_in = sub_out
        ffn_out = self.dense_2[i](self.dense_1[i](ffn_in))
        ffn_out = ffn_in + ffn_out
        ffn_out = self.ffn_norm[i](ffn_out)

        # Output of this layer feeds the next layer's attention sub-layer
        sub_in = ffn_out

    return ffn_out
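Both sub-layers above follow the same residual-then-normalize pattern. Below is a minimal NumPy sketch of that pattern, separate from the gist's Keras layers; the `attention`, `dense_1`, and `dense_2` callables are toy placeholders standing in for the real layers.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the last (feature) axis.
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def encoder_layer(sub_in, attention, dense_1, dense_2):
    # Self-attention sub-layer: residual add, then layer norm.
    sub_out = layer_norm(sub_in + attention(sub_in))
    # Feed-forward sub-layer: residual add, then layer norm.
    return layer_norm(sub_out + dense_2(dense_1(sub_out)))

# Toy stand-ins: identity "attention" and simple element-wise "dense" layers.
x = np.random.randn(2, 5, 8)           # (batch, seq_len, model_dim)
out = encoder_layer(
    x,
    attention=lambda t: t,             # placeholder for multi-head attention
    dense_1=lambda t: np.maximum(t, 0),
    dense_2=lambda t: t,
)
print(out.shape)                       # shape is preserved: (2, 5, 8)
```

The residual connections keep the input and output shapes identical, which is what lets the loop chain `num_layers` identical layers end to end.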