Holds the code for https://towardsdatascience.com/build-a-bert-sci-kit-transformer-59d60ddd54a5
You might also be interested in zero shot classification if you don't want to fine-tune your embeddings.
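One way to do zero-shot classification without any fine-tuning is to embed both the texts and the candidate label names with the same model, then assign each text to the label whose embedding is most similar. A minimal sketch of that idea, using cosine similarity on toy vectors (the `zero_shot_classify` helper and the toy embeddings are illustrative assumptions, standing in for real model output):

```python
import numpy as np

def zero_shot_classify(text_vec, label_vecs, labels):
    """Pick the label whose embedding is closest (by cosine similarity)
    to the text embedding. Hypothetical helper; in practice the vectors
    would come from the same sentence-embedding model."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [cos(text_vec, v) for v in label_vecs]
    return labels[int(np.argmax(scores))], scores

# toy embeddings standing in for real model output
text = np.array([0.9, 0.1])
label_vecs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
label, scores = zero_shot_classify(text, label_vecs, ["sports", "politics"])
print(label)  # sports
```

Hugging Face's `transformers` also ships a ready-made `zero-shot-classification` pipeline built on NLI models, which is the more common route when model downloads are acceptable.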
Thanks for the tip. We're looking into that. But, am I correct to deduce from your replies that you weren't able to get any useful results from this code either?
For my specific use case, using embeddings as features was very valuable. This code example shows the simplest method of extracting embeddings as features, but you might be able to extract more value from the embeddings by:
- using another Transformer model instead of BERT, e.g. XLNet, RoBERTa, T5, etc.
- exploring alternative methods for extracting embeddings (`embedding_func`) rather than only using the last layer. More information in this GitHub issue discussion.
- fine-tuning your model before creating embeddings as features