aravindpai/10_4.py (Gist, created Mar 12, 2020)

# Load the pre-trained GloVe 300-dimensional word vectors into memory
# as a dictionary mapping each word to its embedding vector.
import numpy as np

embeddings_index = dict()
with open('../input/glove6b/glove.6B.300d.txt', encoding='utf-8') as f:
    for line in f:
        values = line.split()
        word = values[0]                                  # first token is the word itself
        coefs = np.asarray(values[1:], dtype='float32')   # remaining tokens are the vector
        embeddings_index[word] = coefs
print('Loaded %s word vectors.' % len(embeddings_index))
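A loaded index like this is typically used to build an embedding matrix that initializes a Keras Embedding layer. Below is a minimal sketch of that step; the fitted Keras Tokenizer named `tokenizer` is an assumption and does not appear in the original snippet.

# Sketch: build an embedding matrix from the loaded index.
# Assumes `tokenizer` is a fitted keras.preprocessing.text.Tokenizer (not in the original).
embedding_dim = 300
vocab_size = len(tokenizer.word_index) + 1          # +1 because Keras reserves index 0

embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in tokenizer.word_index.items():
    vector = embeddings_index.get(word)              # None if the word is not in GloVe
    if vector is not None:
        embedding_matrix[i] = vector                 # rows for out-of-vocabulary words stay all-zero

Words missing from GloVe keep an all-zero row, which the Embedding layer can later fine-tune or leave frozen depending on how it is configured.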