Akshay Uppal akshayuppal3
from tensorflow.keras.layers import (
    Attention,
    Bidirectional,
    Concatenate,
    Dropout,
    Embedding,
    Dense,
    GRU,
    TimeDistributed,
    Input,
)

def get_attention_model(vocab_size, embedding_matrix, embed_size=100):
    # MAX_WORD_LEN (max words per sentence) is assumed to be defined elsewhere in the gist
    word_input = Input(shape=(MAX_WORD_LEN,), dtype='int32', name='word_input')
    word_sequence = Embedding(
        vocab_size,
        embed_size,
        input_length=MAX_WORD_LEN,  # length of sentences in doc
        weights=[embedding_matrix],
        trainable=False)(word_input)  # frozen pretrained embeddings
    ## attention at words
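
The snippet cuts off at the word-level attention step. To make the mechanism concrete, here is a minimal NumPy sketch of what word-level attention computes over the Bidirectional GRU outputs: score each word's hidden state against a context vector, softmax the scores into weights, and take the weighted sum as the sentence vector. The function and variable names (`word_attention`, `context_vector`) are illustrative, not from the gist, and the learned context vector is replaced by a fixed random one.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def word_attention(hidden_states, context_vector):
    # hidden_states: (num_words, hidden_dim), e.g. Bidirectional GRU outputs
    # context_vector: (hidden_dim,), a learned "which words matter" query
    scores = hidden_states @ context_vector      # (num_words,) alignment scores
    weights = softmax(scores)                    # attention distribution over words
    sentence_vector = weights @ hidden_states    # weighted sum -> (hidden_dim,)
    return sentence_vector, weights

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))   # 6 words, hidden dim 8 (stand-ins for GRU states)
u = rng.normal(size=8)        # stand-in for the learned context vector
sent_vec, w = word_attention(H, u)
```

In the Keras model above, the same idea would be realized with trainable layers (e.g. the imported `Attention` layer or a `Dense`-scored softmax) applied `TimeDistributed` across sentences.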