Heads down on deep learning ...


iridiumblue /
Last active Oct 5, 2020 — forked from cbaziotis/
Keras layer that implements an attention mechanism, compatible with Eager Execution and TF 1.13. Supports masking. Follows the work of Yang et al. [], "Hierarchical Attention Networks for Document Classification".
# AttentionWithContext adapted for TensorFlow 1.13 with Eager Execution.
# IMPORTANT - you can't use regular Keras optimizers. You need to grab one that is subclassed from
# tf.train.Optimizer. Not to worry, your favorite is probably there, for example -
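A minimal sketch of grabbing such an optimizer under TF 1.13. `tf.train.AdamOptimizer` is a standard TF 1.x subclass of `tf.train.Optimizer` (unlike `tf.keras.optimizers.Adam`); the learning rate and the commented-out `model` are illustrative assumptions, not part of the gist.

```python
import tensorflow as tf

# tf.train.AdamOptimizer subclasses tf.train.Optimizer, unlike
# tf.keras.optimizers.Adam, so it works with this layer under TF 1.13.
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)  # learning rate is illustrative

# 'model' would be built elsewhere (see the usage example further down):
# model.compile(optimizer=optimizer, loss='binary_crossentropy')
```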
# That's it, now you can use this layer -
# Adapted from
# Tested using the functional API. Just plop it on top of an RNN, like so -
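A hedged sketch of that wiring under TF 1.13, assuming the gist's `AttentionWithContext` layer is already defined in scope; all shapes, vocabulary size, and layer widths below are made-up illustrations.

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense

# Illustrative sizes: sequences of 100 token ids, 20k-word vocabulary.
inputs = Input(shape=(100,))
x = Embedding(input_dim=20000, output_dim=128)(inputs)  # (batch, 100, 128)

# return_sequences=True is essential: the attention layer pools
# over per-timestep outputs, so it needs the full (batch, steps, units) tensor.
x = LSTM(64, return_sequences=True)(x)

x = AttentionWithContext()(x)  # the gist's layer, assumed in scope: (batch, 64)
outputs = Dense(1, activation='sigmoid')(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.train.AdamOptimizer(), loss='binary_crossentropy')
```

Note the optimizer: a `tf.train.Optimizer` subclass as the comment above warns, not a regular Keras one.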