@hamelsmu
Last active April 4, 2024 01:22
My thoughts re: Is fine tuning still valuable?

Here is my personal opinion about the questions I posed in this tweet:


I think that fine-tuning is still very valuable in many situations. I've done some more digging, and I found that the people who say fine-tuning isn't useful are indeed often working on products where fine-tuning isn't likely to be useful:

  • They are making developer tools - foundation models have been trained extensively on coding tasks.
  • They are building foundation models and testing for the most general cases - but the foundation models themselves are already being trained for those same general cases, so fine-tuning has little to add.
  • They are building a personal assistant that isn't scoped to any particular domain or use case, which puts them in essentially the same position as the folks building foundation models.

Another common pattern is that people often say this in the earlier stages of their product development. One sign that folks are in the really early stages is that they don't have a domain-specific eval harness.

It's impossible to fine-tune effectively without an eval system, which can lead people to write off fine-tuning before they've completed this prerequisite. And in the long term, it's impossible to improve your product without a good eval system, fine-tuning or not.
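
To make "eval harness" concrete, here is a minimal sketch of what a domain-specific one might look like. Everything in it is hypothetical: `generate_query` stands in for whatever model call you are evaluating, the test cases and query syntax are invented, and real harnesses typically use richer scoring (execution-based checks or LLM-as-judge) than exact match.

```python
# Minimal sketch of a domain-specific eval harness. All names, test cases,
# and query syntax here are hypothetical placeholders.
EVAL_CASES = [
    {"input": "count errors in the last hour",
     "expected": "COUNT WHERE status = 'error' | LAST 1h"},
    {"input": "slowest endpoints today",
     "expected": "P95(duration_ms) GROUP BY endpoint | LAST 24h"},
]

def generate_query(nl_request: str) -> str:
    """Stand-in for your model call (prompted or fine-tuned)."""
    raise NotImplementedError

def run_evals() -> float:
    """Return the fraction of cases the model gets right."""
    passed = 0
    for case in EVAL_CASES:
        prediction = generate_query(case["input"])
        # Exact match is the simplest possible check; real harnesses often
        # score with execution-based tests or an LLM-as-judge instead.
        if prediction.strip() == case["expected"]:
            passed += 1
    return passed / len(EVAL_CASES)
```

The point is less the specific checks and more that every prompt change or fine-tune gets scored against the same fixed set of domain-specific cases.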

You should do as much prompt engineering as possible before you fine-tune. But not for the reasons you might think! Doing lots of prompt engineering is a great way to stress test your eval system!

If you find that prompt engineering works fine (and you are systematically evaluating your product), then it's fine to stop there. I'm a big believer in using the simplest approach to solving a problem. I just don't think you should write off fine-tuning prematurely.

Examples where I've seen fine-tuning work well

Generally speaking, fine-tuning works best for learning syntax, style, and rules, whereas techniques like RAG work best for supplying the model with context or up-to-date facts.

These are some examples from companies I've worked with. Hopefully, we will be able to share more details soon.

  • Honeycomb's Natural Language Query Assistant - previously, the "programming manual" for the Honeycomb query language was dumped into the prompt along with many examples. While this worked OK, fine-tuning worked much better, allowing the model to learn the syntax and rules of this niche domain-specific language (a sketch of what such training data can look like follows this list).

  • ReChat's Lucy - this is an AI real estate assistant integrated into an existing real estate CRM. ReChat needs LLM responses in a very idiosyncratic format that weaves together structured and unstructured data, so that the front end can dynamically render widgets, cards, and other interactive elements in the chat interface. Fine-tuning was the key to making this work correctly. This talk has more details.
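
To illustrate what "teaching the model syntax and rules" looks like in practice, here is a sketch of chat-formatted JSONL training data of the kind OpenAI's fine-tuning API accepts. The query language and field names are invented for illustration; they are not Honeycomb's actual data.

```python
import json

# Hypothetical training records in the chat-style JSONL format that
# OpenAI's fine-tuning endpoint expects. The query syntax is invented.
examples = [
    {"messages": [
        {"role": "system", "content": "Translate the user's request into a query."},
        {"role": "user", "content": "show me p95 latency by service over the last day"},
        {"role": "assistant", "content": "P95(duration_ms) GROUP BY service | LAST 24h"},
    ]},
    {"messages": [
        {"role": "system", "content": "Translate the user's request into a query."},
        {"role": "user", "content": "how many errors did checkout throw this week?"},
        {"role": "assistant", "content": "COUNT WHERE status = 'error' AND service = 'checkout' | LAST 7d"},
    ]},
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

With enough records like these, the syntax and formatting rules move out of the prompt and into the weights.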

P.S. Fine-tuning is not limited to open or "small" models. Lots of folks have been fine-tuning GPT-3.5, such as Perplexity.AI and CaseText, to name a few.
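
For example, kicking off a GPT-3.5 fine-tune takes only a couple of API calls. This sketch assumes the openai Python SDK (v1.x), an OPENAI_API_KEY in your environment, and the train.jsonl file from the sketch above:

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Upload the JSONL training file.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job against GPT-3.5.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)  # poll status until the job succeeds
```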

@natolambert

The number of people using LLMs is growing much faster than the number who should fine-tune. This is a good summary.
