
PurpleBooth / README-Template.md
A template to make a good README.md

Project Title

One Paragraph of project description goes here

Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.

Prerequisites

tomysmile / mac-setup-redis.md
Install Redis on a Mac with Homebrew

Type the following commands:

brew update
brew install redis

To have launchd start Redis now and restart it at login:

brew services start redis
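
To confirm the server is running (assuming the default install listening on port 6379), a quick check:

redis-cli ping

A healthy server replies with PONG.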
monkut / Ubuntu1604py36Dockerfile
Base Docker image for Ubuntu 16.04 and Python 3.6
# docker build -t ubuntu1604py36 -f Ubuntu1604py36Dockerfile .
FROM ubuntu:16.04
# Add the PPA that provides Python 3.6 packages for Ubuntu 16.04
RUN apt-get update && \
    apt-get install -y software-properties-common && \
    add-apt-repository ppa:jonathonf/python-3.6
# Refresh the package lists (the PPA was just added), then install
# the build toolchain, Python 3.6, and git
RUN apt-get update && \
    apt-get install -y build-essential python3.6 python3.6-dev \
        python3-pip python3.6-venv git
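
To build and sanity-check the image (the tag is arbitrary; the -f flag points at the gist's file name):

docker build -t ubuntu1604py36 -f Ubuntu1604py36Dockerfile .
docker run --rm -it ubuntu1604py36 python3.6 --version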
shaypal5 / .travis.yml
Comprehensive Python testing on Travis CI
language: python
# ===== Linux ======
os: linux
dist: xenial
python:
- 2.7
- 3.6
- 3.7
- 3.8
- 3.9
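
The excerpt ends here. A .travis.yml of this shape typically also declares how dependencies are installed and how the test suite is run; a minimal sketch, assuming a requirements.txt and pytest (neither is shown in the gist):

install:
  - pip install -r requirements.txt
  - pip install pytest
script:
  - pytest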

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much