Created Mar 12, 2020
# load the whole GloVe embedding into memory as a word -> vector dict
import numpy as np

embeddings_index = dict()
with open('../input/glove6b/glove.6B.300d.txt', encoding='utf-8') as f:
    for line in f:
        values = line.split()
        word = values[0]
        coefs = np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = coefs
print('Loaded %s word vectors.' % len(embeddings_index))
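Once the index is built, each entry maps a word to a 300-dimensional float32 vector, and nearby words in meaning tend to have a high cosine similarity. A minimal sketch of that lookup, using a tiny hand-made stand-in for `embeddings_index` (the vectors below are invented toy values, not real GloVe coefficients):

```python
import numpy as np

# Toy stand-in for the loaded embeddings_index (hypothetical 3-d vectors).
embeddings_index = {
    'king':  np.array([0.5, 0.7, 0.1], dtype='float32'),
    'queen': np.array([0.45, 0.72, 0.15], dtype='float32'),
    'apple': np.array([-0.3, 0.1, 0.9], dtype='float32'),
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_royal = cosine_similarity(embeddings_index['king'], embeddings_index['queen'])
sim_fruit = cosine_similarity(embeddings_index['king'], embeddings_index['apple'])
print(sim_royal > sim_fruit)  # related words score higher on these toy vectors
```

With real GloVe vectors the same pattern holds at 300 dimensions; the index is typically then used to fill an embedding matrix for a model's embedding layer.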