@purva91
Last active March 22, 2021 07:39
import numpy as np
from sentence_transformers import SentenceTransformer

# Cosine similarity helper (assumed here; not shown in the original snippet)
def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

model = SentenceTransformer('bert-base-nli-mean-tokens')

# Example corpus (placeholder; replace with your own sentences)
sentences = ["I ate dinner.", "We had a three-course meal.", "Brad came to dinner with us."]
sentence_embeddings = model.encode(sentences)
# print('Sample BERT embedding vector - length', len(sentence_embeddings[0]))
# print('Sample BERT embedding vector - note includes negative values', sentence_embeddings[0])

query = "I had pizza and pasta"
query_vec = model.encode([query])[0]
for sent in sentences:
    sim = cosine(query_vec, model.encode([sent])[0])
    print("Sentence = ", sent, "; similarity = ", sim)
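The loop above re-encodes every sentence for each query even though the embeddings were already computed in sentence_embeddings. A minimal sketch of reusing them, assuming the model, sentences, query_vec and sentence_embeddings defined above and using scikit-learn's cosine_similarity (not part of the original gist):

from sklearn.metrics.pairwise import cosine_similarity

# Compare the query against all pre-computed sentence embeddings in one call
similarities = cosine_similarity([query_vec], sentence_embeddings)[0]
for sent, sim in zip(sentences, similarities):
    print("Sentence = ", sent, "; similarity = ", sim)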
@ganbaaelmer commented:

In the original snippet the line

sbert_model = SentenceTransformer('bert-base-nli-mean-tokens')

needs to be

model = SentenceTransformer('bert-base-nli-mean-tokens')

so that the variable name matches the model.encode() calls that follow.