Jan Bours jbdatascience


Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much

@nazlialagoz
nazlialagoz / Event_Study_Medium_Article.R
Created January 1, 2023 20:23
Simulation study for the Medium article: Event Studies for Causal Inference: The Dos and Don'ts - A guide to avoiding the common pitfalls of event studies
# Simulation study for the Medium article:
# Event Studies for Causal Inference: The Dos and Don'ts -
# A guide to avoiding the common pitfalls of event studies
# This code simulates a panel dataset and then runs event studies
# Different scenarios are created to demonstrate pitfalls of event studies
# The simulation part of the code is adapted from Andrew Baker's awesome blog:
# https://andrewcbaker.netlify.app/2020/06/27/how-to-create-relative-time-indicators/
# Also see a relevant package and blog by Sant'Anna & Callaway:
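The R script itself is truncated after its comment header. The basic idea it describes, simulating a panel with staggered treatment and then reading the treatment effect off of event-time averages, can be sketched in a few lines of Python. This is an illustrative analogue, not the gist's code: the data-generating process here (100 units, half treated at period 5 with a constant effect of +2, a small common time trend) is made up for the example.

```python
import random

random.seed(42)

# Simulate a small panel: 100 units observed over 10 periods.
# Units 0-49 are treated at period 5 with a constant effect of +2.
rows = []
for unit in range(100):
    treated = unit < 50
    unit_fe = random.gauss(0, 1)          # unit fixed effect
    for t in range(10):
        effect = 2.0 if treated and t >= 5 else 0.0
        y = unit_fe + 0.1 * t + effect + random.gauss(0, 0.5)
        rows.append((unit, t, treated, y))

# Event-study-style means of the outcome by relative time
# (time since treatment) for the treated units.
by_rel_time = {}
for unit, t, treated, y in rows:
    if treated:
        by_rel_time.setdefault(t - 5, []).append(y)
rel_means = {rel: sum(v) / len(v) for rel, v in by_rel_time.items()}
```

With this simple design the jump in `rel_means` between relative time -1 and 0 recovers the treatment effect; the pitfalls the article warns about arise once treatment timing is staggered and effects are heterogeneous, which this sketch deliberately avoids.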
from collections import namedtuple
MIN_WORD = 3                # minimum word length (presumably)
LETTER_TILE = "."           # grid cell that can hold a letter
BLOCK_TILE = "#"            # blocked grid cell
Point = namedtuple('Point', ['x', 'y'])  # 2-D grid coordinate
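The fragment above (which appears on the page without its gist header) only defines constants for what looks like a crossword-style grid. As a hedged illustration of how such tiles might be used, here is a small helper of my own, `letter_cells`, which is not part of the original gist:

```python
from collections import namedtuple

MIN_WORD = 3
LETTER_TILE = "."
BLOCK_TILE = "#"
Point = namedtuple('Point', ['x', 'y'])

def letter_cells(grid):
    """Collect coordinates of all letter cells in a grid given as strings.

    Rows are indexed by y (top to bottom) and columns by x (left to right),
    matching the Point(x, y) field order above.
    """
    return [Point(x, y)
            for y, row in enumerate(grid)
            for x, ch in enumerate(row)
            if ch == LETTER_TILE]

# Example: a 2x2 grid with one blocked cell.
cells = letter_cells([".#",
                      ".."])
```

Because `Point` is a namedtuple, the returned coordinates compare equal to plain `(x, y)` tuples, which keeps tests and lookups simple.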
@skuttruf
skuttruf / frac-diff_sk
Last active October 2, 2024 12:08
Python code for fractional differencing of pandas time series
"""
Python code for fractional differencing of pandas time series
illustrating the concepts of the article "Preserving Memory in Stationary Time Series"
by Simon Kuttruf
While this code is dedicated to the public domain for use without permission, the author disclaims any liability in connection with the use of this code.
"""
import numpy as np
import pandas as pd
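The gist body is cut off after the imports. As a sketch of the standard technique the title refers to, here is a minimal pure-Python implementation of fixed-window fractional differencing, i.e. applying the weights of the operator (1 - L)^d via the usual recursion w_0 = 1, w_k = -w_{k-1}(d - k + 1)/k. This is my own illustration, not Kuttruf's code, and it omits the pandas plumbing and window-selection logic a real implementation would have.

```python
def frac_diff_weights(d, size):
    """First `size` weights w_k of the fractional difference operator (1 - L)^d,
    computed with the recursion w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k."""
    w = [1.0]
    for k in range(1, size):
        w.append(-w[-1] * (d - k + 1) / k)
    return w

def frac_diff(series, d, window):
    """Fractionally difference a list of floats using a fixed window of weights.

    Each output value is sum_k w_k * x_{t-k}; the first window-1 points are
    dropped because they lack a full window of history.
    """
    w = frac_diff_weights(d, window)
    return [sum(w[k] * series[t - k] for k in range(window))
            for t in range(window - 1, len(series))]
```

With d = 1 and window = 2 the weights are [1, -1] and the function reduces to ordinary first differencing; fractional d (e.g. 0.4) yields slowly decaying weights, which is what lets the transformed series stay near-stationary while preserving long memory.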
@devashishd12
devashishd12 / tcusescases.ipynb
Created August 21, 2016 12:09
Notebook for topic coherence use cases blog