
@virattt
Last active June 16, 2024 08:00
rag-reranking-gpt-colbert.ipynb
@truebit

truebit commented Jan 23, 2024

@Psancs05 thx

@virattt
Author

virattt commented Jan 23, 2024

Great catch - updated 🙏

@Psancs05

@virattt Do you know the difference between using:
query_embedding = model(**query_encoding).last_hidden_state.squeeze(0)
query_embedding = model(**query_encoding).last_hidden_state.mean(dim=1)

I have tested both, and squeeze(0) seems to return more relevant documents (though that may just be the use case I tried).

@TripleExclam
Copy link

query_embedding = model(**query_encoding).last_hidden_state.squeeze(0) is the correct one for ColBERT-style scoring, since it keeps one embedding vector per token, whereas
query_embedding = model(**query_encoding).last_hidden_state.mean(dim=1) mean-pools the token embeddings into a single vector.
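The shape difference above is easy to verify without loading a model. The sketch below (a minimal example, assuming a hidden state of shape `(1, seq_len, hidden)` and using random numbers in place of real model output) shows what each call returns, plus a simple MaxSim-style score in the spirit of ColBERT, which needs the per-token matrix:

```python
import numpy as np

# Simulated model output: last_hidden_state has shape (batch=1, seq_len, hidden).
# In the notebook this comes from model(**query_encoding).last_hidden_state;
# random numbers stand in here so the example runs without transformers.
rng = np.random.default_rng(0)
last_hidden_state = rng.normal(size=(1, 5, 8))  # 5 query tokens, hidden size 8

per_token = last_hidden_state.squeeze(0)  # (5, 8): one vector per token
pooled = last_hidden_state.mean(axis=1)   # (1, 8): one mean-pooled vector

print(per_token.shape)  # (5, 8)
print(pooled.shape)     # (1, 8)

def maxsim(query_vecs, doc_vecs):
    """ColBERT-style late interaction: for each query token, take the max
    cosine similarity over all document tokens, then sum over query tokens."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return (q @ d.T).max(axis=1).sum()

doc_tokens = rng.normal(size=(7, 8))  # 7 document token embeddings
score = maxsim(per_token, doc_tokens)
```

Mean pooling collapses the token dimension, so the per-token max in `maxsim` would be meaningless on the pooled vector; that is why the `squeeze(0)` version is the one that fits this reranking setup.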
