
@mlaprise
Created April 24, 2023 01:45

Training open-source LLMs on ChatGPT output is a really bad idea.

Everyone is now racing to create open-source alternatives to compete with GPT-3.5/GPT-4. A common shortcut used by some teams to bootstrap their effort is to fine-tune their model on ChatGPT output. I used to think this was a good idea and totally fair play. Actually, I still think it's fair play. OpenAI effectively distilled the entire web into its models. They say themselves that they are using publicly accessible information (mostly). So distilling their model is, in effect, distilling the public open web, and Terms of Service details aside, I don't see major ethical problems with that. Right? Well, it's not entirely true, and I now realize that, even setting the ethical considerations aside, using their output is a really bad idea.
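To make the shortcut concrete, here is a minimal sketch of what this kind of distillation looks like in practice: fine-tuning a small open model on prompt/response pairs where every response came from ChatGPT. The base model, file name, prompt template, and hyperparameters are all illustrative assumptions on my part, not anyone's actual recipe.

```python
# Minimal sketch: fine-tune a small open model on ChatGPT completions.
# All names below (model, file, template) are illustrative assumptions.
import json

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical file: one {"prompt": ..., "response": ...} object per line,
# where every "response" was generated by ChatGPT.
pairs = [json.loads(line) for line in open("chatgpt_outputs.jsonl")]

model_name = "EleutherAI/pythia-410m"  # assumed small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(example):
    # Concatenate the prompt and ChatGPT's answer into one training sequence.
    text = (f"### Instruction:\n{example['prompt']}\n\n"
            f"### Response:\n{example['response']}")
    tokens = tokenizer(text, truncation=True, max_length=512,
                       padding="max_length")
    # Standard causal-LM objective; a real setup would mask padding
    # and prompt tokens out of the loss with -100 labels.
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

dataset = Dataset.from_list(pairs).map(
    tokenize, remove_columns=["prompt", "response"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="distilled-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
)
trainer.train()
```

The point is how little it takes: Alpaca-style projects do roughly this with tens of thousands of sampled completions, and the result is a model that talks like ChatGPT, which is exactly why these models inherit OpenAI's alignment choices wholesale.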

First of all, from a purely technical point of view, as @yoavgo explains beautifully in his recent post, there is no way to align LLMs correctly without the RLHF component. I encourage you to read his post, which does a really good job of explaining what the RLHF part contributes.

https://twitter.com/yoavgo/status/1649720816769138689?s=20

So, in a sense, saying that OpenAI's LLMs are only distilling the web is not entirely true, since a good amount of RLHF feedback went into their training, and that's precisely the problem.

I'm afraid I can't do that, Dave

You see, the second argument is not really technical or even ethical; it's more of an aesthetic and political argument. The observation struck me lately as I noticed the explosion of Midjourney & Stable Diffusion images all over the web. They are now everywhere: blog posts, book illustrations, YouTube thumbnails, ads, etc. It's our modern version of the shitty stock photos plastered all over the place during the last 20 years. Admittedly, they are much better. It's the same thing for ChatGPT; the only difference is that it's harder to notice. Unlike generated images, text generated by ChatGPT is quite hard to detect. But just sampling my immediate network, my guess is that ChatGPT-created content is growing REALLY FAST. People are using it for all sorts of things and at scale: ads, letters, writing assistance, translation, summarization, email drafting, etc. For Christ's sake, Microsoft even did a demo of Office Copilot showing a mother using it to write a letter to her daughter! So I can easily imagine a near future where the web is flooded with LLM output, or at least with content heavily inspired or edited by LLMs.

If all that content came from a single model… well, that would be quite a dystopian future. I think we can all agree by now that, for better or worse, OpenAI's models are quite opinionated. You don't have to try that hard to hit the "i'm-afraid-i-can't-do-that-dave" limit of ChatGPT. There are certain things the model will refuse to say; it's that simple. I've used GPT-4 quite a bit to translate text between French and English, and it works quite well. But you know the difference between, say, Google Translate and GPT-4? If the OpenAI annotation team deems your content "unsafe", the model will simply refuse to translate it! We are there. And you know what… that's totally fine.

Language and thought are closely intertwined.

OpenAI is perfectly within its rights to tweak the model however it wants; it's a commercial product after all, and they need to make sure things don't go out of control. But if it's the only model out there, that's highly problematic. Language is not a simple means of communication used to broadcast thoughts we build in our brains through some hidden process. Language is actually part of the thinking process. As cognitive psychologists like @sapinker have pointed out, language and thought are not identical, but they are closely intertwined.

Knowing that, it's already slightly disturbing to think about the long-term consequences of people delegating part of their thought process to "something". Now imagine that they are delegating this process to a centralized model heavily aligned by a really small group of curators. Now THAT's disturbing. But thankfully, I don't think it will happen. The solution is simply to build MORE models, not fewer. The more independent groups build & train models with various architectures, datasets, and alignment rules, the better job we will do at preserving language & thought diversity. Obviously, training those open-source models on the output of ChatGPT won't achieve that. Don't get me wrong, I think fine-tuned models like GPT4All, Alpaca & Vicuna are really interesting and I'm glad they exist, but those are essentially minions of GPT-4.

More LLMs, not fewer

I would say it's pretty urgent to build those true alternatives (like Open Assistant, StableLM, etc.). OpenAI's models are hugely popular, and their output is already leaking all over the web pretty fast. Soon enough, public datasets like CommonCrawl and C4 will contain a good chunk of it, and it will be harder and harder to disentangle LLM-generated content from true organic content.

Steve Jobs famously said that computers are like bicycles for the mind. If well executed, LLMs will do the exact same thing… we just need to make sure all the bicycles are not programmed to take us all to the same place. In a perfect world, big companies like OpenAI & Google would give us unaligned models that we could align ourselves, but this is computationally intractable for now. So the best alternative is to have open-source models fine-tuned & aligned on open-source datasets that we can retrain or tweak if necessary. To some, all these arguments will seem pretty obvious, but when I hear influential researchers advocating for a pause in the development of big LLMs… I'm flabbergasted! Now would be the worst possible time to stop.

@JaeDukSeo

good read
