Last active February 6, 2023 21:26
Reinforcement Learning from Human Feedback (RLHF) - a simplified explanation

Maybe you've heard about this technique but you haven't completely understood it, especially the PPO part. This explanation might help.

We will focus on text-to-text language models 📝, such as GPT-3, BLOOM, and T5. Models like BERT, which are encoder-only, are not addressed.

Reinforcement Learning from Human Feedback (RLHF) has been successfully applied in ChatGPT, hence its major increase in popularity. 📈

RLHF is especially useful in two scenarios 🌟:

  • You can’t create a good loss function
    • Example: how do you calculate a metric to measure if the model’s output was funny?
  • You want to train with production data, but you can’t easily label your production data
    • Example: how do you get labeled production data from ChatGPT? Someone needs to write the correct answer that ChatGPT should have answered

RLHF algorithm ⚙️:

  1. Pretraining a language model (LM)
  2. Training a reward model
  3. Fine-tuning the LM with RL

1 - Pretraining a language model (LM)

In this step, you either train a language model from scratch or simply use a pretrained one like GPT-3.

Once you have that pretrained language model, you can also do an extra optional step, called Supervised Fine-Tuning (SFT). This is nothing more than collecting some human-labeled (input, output) text pairs and fine-tuning your language model on them. SFT is considered a high-quality initialization for RLHF.

At the end of this step, we end up with our trained LM, which is our main model and the one we want to train further with RLHF.
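The optional supervised fine-tuning step is plain supervised learning: minimize the next-token cross-entropy on the human-written outputs. A minimal pure-Python sketch of that loss (the toy vocabulary, probabilities, and token ids below are illustrative assumptions, not from this article):

```python
import math

def sft_loss(predicted_probs, target_ids):
    """Token-level cross-entropy: average negative log-probability
    that the model assigns to each human-written target token."""
    nll = 0.0
    for step_probs, target in zip(predicted_probs, target_ids):
        nll += -math.log(step_probs[target])
    return nll / len(target_ids)

# Toy example: a 4-token vocabulary and a 3-token human-labeled answer.
probs = [
    [0.70, 0.10, 0.10, 0.10],  # model's distribution at step 1
    [0.05, 0.80, 0.10, 0.05],  # step 2
    [0.25, 0.25, 0.25, 0.25],  # step 3 (maximally uncertain)
]
targets = [0, 1, 3]  # the tokens the human actually wrote
print(round(sft_loss(probs, targets), 4))  # -> 0.6554
```

The loss shrinks as the model puts more probability mass on the tokens the human labelers actually wrote.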


Figure 1: Our pretrained language model.

2 - Training a reward model

In this step, we are interested in collecting a dataset of (input text, output text, reward) triplets.

Figure 2 shows the data collection pipeline: feed input text data (ideally production data) through your model, then have a human assign a reward to the generated output text.


Figure 2: Pipeline to collect data for reward model training.

The reward is usually an integer between 0 and 5, but it can be a simple 0/1 in a 👍/👎 setting.


Figure 3: Simple 👍/👎 reward collection in ChatGPT.


Figure 4: A more complete reward collection experience: the model outputs two texts and the human has to choose which one was better, and also give an overall rating with comments.

With this new dataset, we will train another language model to receive the (input, output) text and return a reward scalar! This will be our reward model.

The main objective here is to use the reward model to mimic the human's reward labeling and therefore be able to do RLHF training offline, without the human in the loop.
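When feedback is collected as pairwise comparisons, as in Figure 4, the reward model is commonly trained with a ranking loss that pushes the score of the human-preferred output above the rejected one. A minimal sketch, assuming scalar rewards for the two candidates have already been produced by the model (the numbers are illustrative, not from the article):

```python
import math

def ranking_loss(r_chosen, r_rejected):
    """Bradley-Terry style ranking loss: -log sigmoid(r_chosen - r_rejected).
    Small when the preferred output already scores higher, large otherwise."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Toy scalar rewards the model produced for two candidate outputs.
print(round(ranking_loss(2.0, 0.5), 4))  # preference satisfied -> 0.2014
print(round(ranking_loss(0.5, 2.0), 4))  # preference violated  -> 1.7014
```

Minimizing this loss over many labeled comparisons teaches the reward model to reproduce human preferences as a single scalar.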


Figure 5: The trained reward model, that will mimic the rewards given by humans.

3 - Fine-tuning the LM with RL

It's in this step that the magic really happens and RL comes into play.

The objective of this step is to use the rewards given by the reward model to train the main model, your trained LM. However, since the reward is not differentiable, we need RL to construct a loss that we can backpropagate to the LM.


Figure 6: Fine-tuning the main LM using the reward model and the PPO loss calculation.

At the beginning of the pipeline, we make an exact copy of our LM and freeze its trainable weights. This frozen copy helps prevent the trainable LM from completely changing its weights and starting to output gibberish text just to fool the reward model.

That is why we calculate the KL divergence loss between text output probabilities of both the frozen and non-frozen LM.

This KL loss is added to the reward that is produced by the reward model. Actually, if you are training your model while in production (online learning), you can replace this reward model with the human reward score directly. 💡
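A sketch of how the KL term and the reward can be combined, on toy distributions. Note that in most implementations the KL divergence enters with a negative sign, as a penalty for drifting away from the frozen copy; the weight beta below is an illustrative hyperparameter, not a value from the article:

```python
import math

def kl_divergence(p_trained, p_frozen):
    """KL(p_trained || p_frozen) over one token's probability distribution."""
    return sum(p * math.log(p / q) for p, q in zip(p_trained, p_frozen) if p > 0)

def penalized_reward(reward, p_trained, p_frozen, beta=0.2):
    """Combined signal fed to PPO: the reward-model score minus a
    KL penalty that keeps the trainable LM close to its frozen copy."""
    return reward - beta * kl_divergence(p_trained, p_frozen)

frozen  = [0.70, 0.20, 0.10]   # frozen LM's distribution for one token
trained = [0.60, 0.30, 0.10]   # trainable LM has drifted slightly
print(round(penalized_reward(1.5, trained, frozen), 4))  # -> 1.4942
```

The further the trainable LM drifts from the frozen copy, the larger the KL term and the smaller the combined reward.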

Having your reward and KL loss, we can now apply RL to make the reward loss differentiable.

Why isn't the reward differentiable? Because it was calculated with a reward model that received text as input. This text is obtained by decoding the output log probabilities of the LM. This decoding process is non-differentiable.

To make the loss differentiable, finally Proximal Policy Optimization (PPO) comes into play! Let's zoom in.


Figure 7: Zoom-in on the RL Update box - PPO loss calculation.

The PPO algorithm calculates a loss (that will be used to make a small update on the LM) like this:

  1. Make "Initial probs" equal to "New probs" to initialize.
  2. Calculate a ratio between the new and initial output text probabilities.
  3. Calculate the loss given the formula loss = min(ratio * R, clip(ratio, 0.8, 1.2) * R), where R is the previously computed combination of reward and KL term (for example a weighted sum like 0.8 * reward + 0.2 * KL; in practice the KL part enters with a negative sign, as a penalty) and clip(ratio, 0.8, 1.2) simply bounds the ratio so that 0.8 <= ratio <= 1.2. Note that 0.8/1.2 are just commonly used hyperparameter values, simplified here.
  4. Update the weights of the LM by backpropagating the loss.
  5. Calculate the "New probs" (i.e., new output text probabilities) with the newly updated LM.
  6. Repeat from step 2 up to N times (usually, N=4).
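The steps above can be sketched in plain Python on toy numbers (a real implementation works with per-token log-probabilities over whole sequences; the scalar probabilities and rewards here are illustrative):

```python
def ppo_loss(new_prob, init_prob, reward, clip_low=0.8, clip_high=1.2):
    """Clipped PPO objective from the steps above. `reward` is the
    fixed R (reward-model score combined with the KL term)."""
    ratio = new_prob / init_prob          # step 2: new probs / initial probs
    clipped = max(clip_low, min(ratio, clip_high))
    return min(ratio * reward, clipped * reward)  # step 3

# Step 1: initially "New probs" == "Initial probs", so ratio == 1 and
# the clip has no effect.
print(ppo_loss(0.5, 0.5, reward=2.0))    # -> 2.0, no clipping
# After an update the ratio drifts; the clip bounds the step size.
print(ppo_loss(0.75, 0.5, reward=2.0))   # ratio 1.5 clipped to 1.2 -> 2.4
```

Strictly speaking, PPO maximizes this clipped objective; implementations negate it so a standard gradient-descent optimizer can minimize it as a loss.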

That's it, this is how you use RLHF in text-to-text language models!

Things can get more complicated because there are also other losses that you can add to this base loss that I presented, but this is the core implementation.


colobas commented Jan 18, 2023

Good stuff!


JHenzi commented Jan 19, 2023

I have an app I'm building and going to release here on the 'hub... I want to get it to fine-tune itself with a very similar approach. I'm storing and giving the user three options, I'm using the data of the saved offering to create the next 'step'. As we progress I'd love to have RL explore the parameter space depending on the selections. I'm running three distinct inference tasks, I'd love to just implement this on one or consider this approach for tuning-as-you-go on approaching the "optimal" k, temperature and etc, combinations humans love.


lbbui commented Jan 19, 2023

Nice write-up! Do you have any idea of the quantity of reward model data (and human labels) are needed to effectively train such a model?



Thank you and great question!
All I can say is: definitely way less than the data needed to pretrain the LM and it depends on the dataset/task!
Here are some numbers that I got:


Is the final reward, R, recomputed at each step during PPO, or is it fixed after running through the preference model before the first step?


R is fixed! You can add other loss terms to the "KL + R" sum though, I just presented the common base of RLHF


gkamer8 commented Feb 6, 2023

I found this to be way way better than most write-ups out there, and so much more concise.

A question- how exactly is the output of the LM influencing the loss in PPO? We have probability distributions for tokens for each spot in a sequence. We know (?) the sequence that was ultimately sampled (?) and fed to the reward model. We can't (?) backpropagate through the reward model ... - this is the main question I've had after reading a bunch of posts like this. Also- should the "loss" really be a "loss" - it looks like here we're trying to maximize it, through stochastic gradient ascent?
