
Video captioning (Seq2Seq in Keras)
b03902043, last active Mar 15, 2021
from keras import backend as K
from keras.layers import TimeDistributed, Dense, LSTM
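Before the class itself, here is a minimal NumPy sketch of the soft (Bahdanau-style additive) attention step the docstring below describes: score each encoder annotation against the current hidden state, normalize the scores with a softmax, and take the expectation over annotations as the context vector. The function and parameter names (`soft_attention`, `W_h`, `W_a`, `v`) are illustrative choices, not part of this gist's API.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention(h, annotations, W_h, W_a, v):
    """Sketch of one additive-attention step (illustrative names, not this gist's API).

    h           : (d_h,)   current decoder hidden state
    annotations : (T, d_a) encoder annotation vectors a_1..a_T
    W_h, W_a, v : learned projections; e_i = v . tanh(W_h h + W_a a_i)
    """
    scores = np.array([v @ np.tanh(W_h @ h + W_a @ a) for a in annotations])
    alpha = softmax(scores)                        # attention weights, sum to 1
    context = (alpha[:, None] * annotations).sum(axis=0)  # expected annotation
    return context, alpha
```

In the class below, a context vector computed this way is what gets fed back into the LSTM's internal states at each step.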
class AttentionLSTM(LSTM):
"""LSTM with attention mechanism
This is an LSTM incorporating an attention mechanism into its hidden states.
Currently, the context vector calculated from the attended vector is fed
into the model's internal states, closely following the model by Xu et al.
(2015, Sec. 3.1.2), using a soft attention model following
Bahdanau et al. (2014).