from gensim.models import Word2Vec

# Load pretrained model (since intermediate data is not included, the model cannot be refined with additional data)
model = Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True, norm_only=True)

dog = model['dog']
print(dog.shape)
print(dog[:10])

# Deal with an out-of-dictionary word: Михаил (Michail)
if 'Михаил' in model:
    print(model['Михаил'].shape)
else:
    print('{0} is an out of dictionary word'.format('Михаил'))

# Some predefined functions that show content related information for given words
print(model.most_similar(positive=['woman', 'king'], negative=['man']))
print(model.doesnt_match("breakfast cereal dinner lunch".split()))
print(model.similarity('woman', 'man'))
Are you using 64-bit Python? Check out this guy's blog post: http://mccormickml.com/2016/04/12/googles-pretrained-word2vec-model-in-python/
The Word2Vec loading method is deprecated. Use KeyedVectors instead:
model = gensim.models.KeyedVectors.load_word2vec_format('./data/GoogleNews-vectors-negative300.bin.gz', binary=True)
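For reference, a minimal sketch of the same steps from the gist above using the KeyedVectors loader (the path is an assumption; point it at wherever you saved the file):

import gensim

# Assumed path; adjust to your local copy of the file
model = gensim.models.KeyedVectors.load_word2vec_format(
    './data/GoogleNews-vectors-negative300.bin.gz', binary=True)

print(model['dog'].shape)   # (300,) -- each word maps to a 300-dimensional vector
print(model.most_similar(positive=['woman', 'king'], negative=['man']))
print(model.doesnt_match("breakfast cereal dinner lunch".split()))
print(model.similarity('woman', 'man'))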
@dineshsonachalam Does this only work with a certain version of Python? When I use it on 2.7.14 I get an error saying 'No such file or directory', even though I definitely have a 'data' directory. The error looks like this:
Traceback (most recent call last):
File "word_vectors.py", line 4, in <module>
model = KeyedVectors.load_word2vec_format('/data/GoogleNews-vectors-negative300.bin.gz', binary=True)
File "/Users/Olly/anaconda2/lib/python2.7/site-packages/gensim-3.2.0-py2.7-macosx-10.6-x86_64.egg/gensim/models/keyedvectors.py", line 195, in load_word2vec_format
with utils.smart_open(fname) as fin:
File "/Users/Olly/anaconda2/lib/python2.7/site-packages/smart_open-1.5.6-py2.7.egg/smart_open/smart_open_lib.py", line 176, in smart_open
return file_smart_open(parsed_uri.uri_path, mode, encoding=encoding, errors=errors)
File "/Users/Olly/anaconda2/lib/python2.7/site-packages/smart_open-1.5.6-py2.7.egg/smart_open/smart_open_lib.py", line 671, in file_smart_open
raw_fobj = open(fname, raw_mode)
IOError: [Errno 2] No such file or directory: '/data/GoogleNews-vectors-negative300.bin.gz'
@o-P-o, did you download the GoogleNews-vectors-negative300.bin.gz file? The KeyedVectors.load_word2vec_format() function reads the binary file directly from disk, so you'll need to download it first. Here's a link to the file.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # download the model and return it ready for use
word_vectors = model  # api.load already returns the KeyedVectors, so no separate .wv attribute is needed
@o-P-o, try removing the leading / from the path you provide.
I started on my word2vec journey a week ago. Your code seems to work, but I have a problem loading the Google News model: there is not enough RAM, or something like that. It takes all my RAM and still cannot load the model.
Could you help me load it? This is the main thing blocking me from going ahead.
The model occupies about 4.5 GB of my memory, so yes, it is quite intensive indeed.
When I execute the following:
model = gensim.models.KeyedVectors.load_word2vec_format('./data/GoogleNews-vectors-negative300.bin.gz', binary=True)
I get an error saying:
ValueError: could not broadcast input array from shape (75) into shape (300)
Can anyone help me? Thanks in advance!
This is because the shape of the Google vectors is 300 while your vector shape is 75; try to make your vector shape 300.
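For what it's worth, this broadcast error is also a classic symptom of an incomplete download: gensim reads the 300-dimension header and then hits a truncated vector near the end of the file. A quick, hedged sanity check (the path is the one from the call above):

import gzip
import os

path = './data/GoogleNews-vectors-negative300.bin.gz'

# The full .bin.gz download is on the order of 1.5 GB; a much smaller file is likely truncated.
print('size on disk: %.2f GB' % (os.path.getsize(path) / 1024 ** 3))

# The word2vec binary format begins with an ASCII header "<vocab_size> <vector_size>".
# For the official GoogleNews file this should read [b'3000000', b'300'].
with gzip.open(path, 'rb') as f:
    print(f.readline().split())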
UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input> in <module>
8 #its 1.9GB in size.
9
---> 10 model = KeyedVectors.load_word2vec_format('/home/dinkar/AppliedAI_projects/GoogleNews-vectors-negative300.bin')
~/anaconda3/lib/python3.7/site-packages/gensim/models/keyedvectors.py in load_word2vec_format(cls, fname, fvocab, binary, encoding, unicode_errors, limit, datatype)
1496 return _load_word2vec_format(
1497 cls, fname, fvocab=fvocab, binary=binary, encoding=encoding, unicode_errors=unicode_errors,
-> 1498 limit=limit, datatype=datatype)
1499
1500 def get_keras_embedding(self, train_embeddings=False):
~/anaconda3/lib/python3.7/site-packages/gensim/models/utils_any2vec.py in _load_word2vec_format(cls, fname, fvocab, binary, encoding, unicode_errors, limit, datatype)
390 if line == b'':
391 raise EOFError("unexpected end of input; is count incorrect or file otherwise damaged?")
--> 392 parts = utils.to_unicode(line.rstrip(), encoding=encoding, errors=unicode_errors).split(" ")
393 if len(parts) != vector_size + 1:
394 raise ValueError("invalid vector on line %s (is this really the text format?)" % line_no)
~/anaconda3/lib/python3.7/site-packages/gensim/utils.py in any2unicode(text, encoding, errors)
357 if isinstance(text, unicode):
358 return text
--> 359 return unicode(text, encoding, errors=errors)
360
361
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x94 in position 7: invalid start byte
Can someone help me resolve this issue, please?
Try this:
t_model = KeyedVectors.load_word2vec_format('GoogleWord2Vec/GoogleNews-vectors-negative300.bin', binary=True)
It works for me. The UnicodeDecodeError above most likely happens because the binary file is being parsed as text; passing binary=True fixes that.
Download it from here: https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz
And use .bin.gz instead of .bin.
wordmodelfile = r"GoogleNews-vectors-negative300.bin.gz.gz"
wordmodel = gensim.models.KeyedVectors.load_word2vec_format(wordmodelfile, binary=True)
But I am getting this error:
ValueError: invalid literal for int() with base 10: 'version'
Please help me resolve this error.
@vanshika1396 have you tried removing the extra .gz from ...ative300.bin.gz.gz?
@maifeeulasad Regarding removing the extra .gz from ...ative300.bin.gz.gz: permission is not granted to rename the file. What else can be done? Please suggest.
If you are using Windows, open CMD with admin privileges, navigate to that directory, and execute:
rename GoogleNews-vectors-negative300.bin.gz.gz GoogleNews-vectors-negative300.bin.gz
If you are on Linux, there is mv and many other commands.
wv = gensim.models.KeyedVectors.load_word2vec_format(r"E:\GoogleNews-vectors-negative300.bin", binary=True)
wv.init_sims(replace=True)
I'm using this, but I want to use less RAM. Is it possible to load the .bin file part by part, for example in 3 steps, in order to decrease the RAM usage?
@mamad-knight yes, anything can be done using a computer, except getting a girlfriend and some abstract ....
Now, I have never done this, so I'm not sure, but:

import numpy as np
from gensim.models import KeyedVectors

filepath = "../input/embeddings/GoogleNews-vectors-negative300/GoogleNews-vectors-negative300.bin"
embeddings_index = {}

# Load the vectors once, then copy them into a plain dict keyed by word
wv_from_bin = KeyedVectors.load_word2vec_format(filepath, binary=True)
for word, vector in zip(wv_from_bin.vocab, wv_from_bin.vectors):
    coefs = np.asarray(vector, dtype='float32')
    embeddings_index[word] = coefs

Try to split the file and read it like this.
I found the code here.
The rest you have to take care of yourself; maybe you can ping me, not sure if I can help or not...
Thanks for spending the time, but I don't think that works, because the solution is not to load the whole .bin file: as soon as you load it, the RAM usage maxes out.
Try Colab, not sure if they support such huge files @mamad-knight
Thanks man, I'll try it. @maifeeulasad
Try passing the URL instead of the location of the file. You may even need to write your own stream receiver too. Good luck; if I get it done, I will share, and if anyone finishes earlier, please share. @mamad-knight
Thanks so much for spending the time. I'll try it today or tomorrow. @maifeeulasad
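On the RAM question above: if an approximation is acceptable, gensim's loader has a limit parameter that reads only the first N vectors of the file (the GoogleNews file is ordered roughly by word frequency), which is a simple way to trade coverage for memory. A minimal sketch, with the 500,000-word cap chosen arbitrarily:

import gensim

# limit=500000 loads only the first 500k of the ~3M vectors, cutting peak RAM roughly in proportion;
# rarer words will simply be missing from the model.
wv = gensim.models.KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin.gz', binary=True, limit=500000)

print(wv['dog'].shape)   # (300,)
print('dog' in wv)       # frequent words are still present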
Is there any API for this model so I can call it and get a vector for any word without loading the model on my server? Someone please help me with this; I am asking because my server cannot afford 4 GB of RAM for this model.
Thanks in advance.
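There is no official hosted API for these vectors that I know of, but one workaround is to load the model once on whatever machine can hold it and expose a tiny lookup endpoint. Everything below (the Flask wrapper, route name, and port) is just an illustrative sketch, not an existing service:

from flask import Flask, jsonify, abort
from gensim.models import KeyedVectors

app = Flask(__name__)

# Loaded once at startup on a machine with enough RAM (or with limit=... to shrink it).
wv = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin.gz', binary=True)

@app.route('/vector/<word>')
def vector(word):
    # Return the 300-dimensional vector for a single word, or 404 if it is out of vocabulary.
    if word not in wv:
        abort(404)
    return jsonify(wv[word].tolist())

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Clients with little memory can then fetch single vectors over HTTP instead of loading the whole file.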
Since this example is deprecated, I created a Google Colab demo for the same.
Reg "the model cannot be refined with additional data" - gensim has an online algorithm which allows to augment existing model (since 2015) https://rutumulkar.com/blog/2015/word2vec/
AttributeError: The vocab attribute was removed from KeyedVector in Gensim 4.0.0.
Use KeyedVector's .key_to_index dict, .index_to_key list, and methods .get_vecattr(key, attr) and .set_vecattr(key, attr, new_val) instead.
See https://github.com/RaRe-Technologies/gensim/wiki/Migrating-from-Gensim-3.x-to-4
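For anyone hitting that AttributeError with the zip(wv.vocab, wv.vectors) snippet further up, here is a sketch of the Gensim 4 equivalent using the replacement attributes named in that message (the path is an assumption):

import numpy as np
from gensim.models import KeyedVectors

# Assumed path; point it at your local copy.
wv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)

# Gensim 4: .index_to_key replaces iterating .vocab, and .vectors is aligned with it.
embeddings_index = {}
for word, vector in zip(wv.index_to_key, wv.vectors):
    embeddings_index[word] = np.asarray(vector, dtype='float32')

print(len(embeddings_index))
print(embeddings_index['dog'][:5])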
Thank you for the Colab demo!