from scipy.spatial import distance
from sentence_transformers import SentenceTransformer

# Load a pretrained SBERT model (mean pooling over BERT token embeddings)
model = SentenceTransformer('bert-base-nli-mean-tokens')

# `sentences` should be a list of strings; a small example corpus:
sentences = ["I ate pizza for dinner", "The weather is sunny today"]

sentence_embeddings = model.encode(sentences)
# print('Sample BERT embedding vector - length', len(sentence_embeddings[0]))
# print('Sample BERT embedding vector - note includes negative values', sentence_embeddings[0])

query = "I had pizza and pasta"
query_vec = model.encode([query])[0]
for sent in sentences:
    # distance.cosine returns cosine *distance*; similarity = 1 - distance
    sim = 1 - distance.cosine(query_vec, model.encode([sent])[0])
    print("Sentence = ", sent, "; similarity = ", sim)
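For reference, the cosine similarity used above can be sketched in plain Python, without the embedding model; the example vectors here are made up purely for illustration:

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length numeric vectors.

    Note: scipy.spatial.distance.cosine returns the cosine *distance*,
    which equals 1 minus this value.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```

This is why a higher value means the query and sentence embeddings point in more similar directions.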
@AsharParacha commented Feb 5, 2021

Hi, the .encode function is not working for me. The error is: 'Doc2Vec' object has no attribute 'encode'

@ganbaaelmer commented Mar 22, 2021

NameError: name 'cosine' is not defined

@ganbaaelmer commented Mar 22, 2021

The correct version is:

from scipy.spatial import distance

for sent in sentences:
    sim = distance.cosine(query_vec, model.encode([sent])[0])
    print("Sentence = ", sent, "; cosine distance (lower is more similar) = ", sim)

@ganbaaelmer commented Mar 22, 2021

And change

from sentence_transformers import SentenceTransformer
sbert_model = SentenceTransformer('bert-base-nli-mean-tokens')

to

from sentence_transformers import SentenceTransformer
model = SentenceTransformer('bert-base-nli-mean-tokens')

(the rest of the code calls model.encode, not sbert_model.encode)
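Putting both fixes together, the overall query-ranking logic can be sketched end to end with a toy stand-in for model.encode (a hypothetical bag-of-words encoder, used here only so the example runs without downloading the SBERT model; the sentences are made up):

```python
from collections import Counter
from math import sqrt

def toy_encode(text):
    """Toy stand-in for SentenceTransformer.encode: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

sentences = ["I ate pizza for dinner", "The weather is sunny today"]
query = "I had pizza and pasta"
query_vec = toy_encode(query)
for sent in sentences:
    sim = cosine_sim(query_vec, toy_encode(sent))
    print("Sentence = ", sent, "; similarity = ", sim)
```

With a real SentenceTransformer in place of toy_encode, the same loop ranks sentences by semantic rather than lexical overlap.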
