@abaybektursun
Last active July 9, 2021 02:38
Word sequence embeddings
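This snippet embeds a word sequence with a pretrained recurrent language model. The tensor names (lstm/lstm_1/control_dependency, char_inputs_in, inputs_in, targets_in, target_weights_in) and the t/vocab/sess objects match TensorFlow's lm_1b sentence-embedding demo, so the function below assumes that graph, session, vocabulary, and input buffers have already been set up as in that demo.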
def forward(sentence):
    # Prepend the beginning-of-sentence token *before* tokenizing;
    # otherwise <S> never makes it into word_ids/char_ids.
    if sentence.find('<S>') != 0:
        sentence = '<S> ' + sentence

    # Tokenize into word ids and per-word character ids.
    word_ids = [vocab.word_to_id(w) for w in sentence.split()]
    char_ids = [vocab.word_to_char_ids(w) for w in sentence.split()]

    # Feed the sentence one token at a time; the LSTM state persists
    # across sess.run calls, so the last output embeds the whole sequence.
    for i in range(len(word_ids)):
        inputs[0, 0] = word_ids[i]
        char_ids_inputs[0, 0, :] = char_ids[i]
        # Fetch 'lstm/lstm_0/control_dependency' instead if you want to
        # dump the previous LSTM layer.
        lstm_emb = sess.run(t['lstm/lstm_1/control_dependency'],
                            feed_dict={t['char_inputs_in']: char_ids_inputs,
                                       t['inputs_in']: inputs,
                                       t['targets_in']: targets,
                                       t['target_weights_in']: weights})
    return lstm_emb
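For context, a hedged usage sketch: the buffer shapes below follow the lm_1b demo's convention of BATCH_SIZE = NUM_TIMESTEPS = 1; sess, t, and vocab are assumed to come from that demo's model-loading and CharsVocabulary setup (they are not defined in this gist), and the 1024-dim output shape is illustrative.

# Usage sketch, assuming sess, t, and vocab were loaded as in the
# lm_1b demo (these names are not defined in this gist).
import numpy as np

BATCH_SIZE, NUM_TIMESTEPS = 1, 1
targets = np.zeros([BATCH_SIZE, NUM_TIMESTEPS], np.int32)
weights = np.ones([BATCH_SIZE, NUM_TIMESTEPS], np.float32)
inputs = np.zeros([BATCH_SIZE, NUM_TIMESTEPS], np.int32)
char_ids_inputs = np.zeros(
    [BATCH_SIZE, NUM_TIMESTEPS, vocab.max_word_length], np.int32)

emb = forward('The quick brown fox jumps over the lazy dog')
print(emb.shape)  # e.g. (1, 1024) for the lm_1b LSTM output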