Gautier Dagan (gautierdag)

@rbrigden
rbrigden / startupy_domains.txt
Last active June 7, 2024 05:09
Every English word that is currently available for registration as a full domain (TLD included).
abdominocardi.ac
autotr.actor
cephalotr.actor
cocontr.actor
coen.actor
cornf.actor
counter.actor
effr.actor
idemf.actor
lithofr.actor
@jiahao87
jiahao87 / pegasus_fine_tune.py
Last active May 29, 2024 18:00
PyTorch script for fine-tuning the Pegasus Large model
"""Script for fine-tuning Pegasus
Example usage:
# use XSum dataset as example, with first 1000 docs as training data
from datasets import load_dataset
dataset = load_dataset("xsum")
train_texts, train_labels = dataset['train']['document'][:1000], dataset['train']['summary'][:1000]
# use Pegasus Large model as base for fine-tuning
model_name = 'google/pegasus-large'
train_dataset, _, _, tokenizer = prepare_data(model_name, train_texts, train_labels)
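The gist's prepare_data and training loop are cut off above. As a rough, hedged sketch of the same idea with the stock Hugging Face Trainer (the SummaryDataset class and argument values below are illustrative, not the gist's):

import torch
from datasets import load_dataset
from transformers import PegasusForConditionalGeneration, PegasusTokenizer, Trainer, TrainingArguments

class SummaryDataset(torch.utils.data.Dataset):
    # Minimal (document, summary) dataset over pre-tokenized tensors.
    def __init__(self, encodings, label_ids):
        self.encodings, self.label_ids = encodings, label_ids
    def __len__(self):
        return self.label_ids.size(0)
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.encodings.items()}
        item["labels"] = self.label_ids[idx]
        return item

model_name = "google/pegasus-large"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

dataset = load_dataset("xsum")
train_texts = dataset["train"]["document"][:1000]
train_labels = dataset["train"]["summary"][:1000]
encodings = tokenizer(train_texts, truncation=True, padding=True, return_tensors="pt")
label_ids = tokenizer(train_labels, truncation=True, padding=True, return_tensors="pt")["input_ids"]
label_ids[label_ids == tokenizer.pad_token_id] = -100  # ignore padding positions in the loss

args = TrainingArguments(output_dir="pegasus-xsum", per_device_train_batch_size=1, num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=SummaryDataset(encodings, label_ids)).train()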
@ashleve
ashleve / kfold_example.py
Last active April 24, 2024 10:54
Example of k-fold cross-validation with a PyTorch Lightning DataModule
from pytorch_lightning import LightningDataModule
from torch_geometric.datasets import TUDataset
from torch_geometric.data import DataLoader
from sklearn.model_selection import KFold
class ProteinsKFoldDataModule(LightningDataModule):
    def __init__(
        self,
        data_dir: str = "data/",
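The rest of the module is cut off above. A hedged sketch of how such a k-fold DataModule is typically completed (the class name and arguments like k and num_splits below are my assumptions, not necessarily the gist's): pick one fold's train/val indices in setup() and serve them from the dataloaders.

from pytorch_lightning import LightningDataModule
from torch_geometric.datasets import TUDataset
from torch_geometric.data import DataLoader
from sklearn.model_selection import KFold

class ProteinsKFoldExample(LightningDataModule):
    def __init__(self, data_dir: str = "data/", k: int = 0, num_splits: int = 10,
                 batch_size: int = 32, seed: int = 42):
        super().__init__()
        self.save_hyperparameters()

    def setup(self, stage=None):
        dataset = TUDataset(root=self.hparams.data_dir, name="PROTEINS")
        kf = KFold(n_splits=self.hparams.num_splits, shuffle=True, random_state=self.hparams.seed)
        splits = list(kf.split(list(range(len(dataset)))))
        train_idx, val_idx = splits[self.hparams.k]
        self.data_train = dataset[train_idx.tolist()]
        self.data_val = dataset[val_idx.tolist()]

    def train_dataloader(self):
        return DataLoader(self.data_train, batch_size=self.hparams.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.data_val, batch_size=self.hparams.batch_size)

Training then loops k over range(num_splits), building a fresh datamodule and Trainer per fold and averaging the per-fold validation metrics.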
import numpy as np

def slerp(p0, p1, n):
    # Spherical linear interpolation: https://en.wikipedia.org/wiki/Slerp
    norm = np.linalg.norm(p0) * np.linalg.norm(p1)
    dot = np.sum(p0 * p1 / norm)
    theta_0 = np.arccos(dot)
    sin_theta_0 = np.sin(theta_0)
    interp = []
    for t in np.linspace(0.0, 1.0, n):
        # The loop body was cut off; completed here with the standard slerp formula.
        theta_t = theta_0 * t
        s0 = np.sin(theta_0 - theta_t) / sin_theta_0
        s1 = np.sin(theta_t) / sin_theta_0
        interp.append(s0 * p0 + s1 * p1)
    return np.stack(interp)
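For example, to trace a 5-step path between two (hypothetical) 512-dimensional latent vectors:

p0 = np.random.randn(512)
p1 = np.random.randn(512)
path = slerp(p0, p1, 5)  # shape (5, 512); path[0] is p0 and path[-1] is p1 (up to float error)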
import torch
import math
from torch import Tensor
from typing import Optional
def get_relative_positional_encoding(length1: int, length2: int, d_model: int, device: torch.device):
    xs = torch.arange(length1, device=device).unsqueeze(1)
    ys = torch.arange(length2, device=device).unsqueeze(0)
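The preview stops here. A hedged guess at how such a function could continue, assuming a standard sinusoidal encoding over the pairwise position offsets (the function name below is mine, and the actual gist may differ):

import math
import torch
from torch import Tensor

def relative_sinusoidal_encoding(length1: int, length2: int, d_model: int, device: torch.device) -> Tensor:
    # Pairwise offsets between query positions (length1) and key positions (length2).
    xs = torch.arange(length1, device=device).unsqueeze(1)
    ys = torch.arange(length2, device=device).unsqueeze(0)
    rel = (xs - ys).float().unsqueeze(-1)  # (length1, length2, 1)
    # Sinusoidal features over the offsets; assumes d_model is even.
    freqs = torch.exp(torch.arange(0, d_model, 2, device=device).float() * (-math.log(10000.0) / d_model))
    enc = torch.zeros(length1, length2, d_model, device=device)
    enc[..., 0::2] = torch.sin(rel * freqs)
    enc[..., 1::2] = torch.cos(rel * freqs)
    return enc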

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much

@xenova
xenova / tiktoken-to-hf.ipynb
Last active May 10, 2024 00:59
Convert tiktoken tokenizers to the Hugging Face tokenizers format
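The notebook itself does not render here. As a rough, hedged sketch of the core idea behind such a conversion (function and variable names are mine, not the notebook's): recover a byte-level vocab and merge list from tiktoken's private mergeable ranks, which can then seed a Hugging Face tokenizers BPE model.

import tiktoken

def bytes_to_unicode():
    # GPT-2 style byte -> printable-character map used by byte-level BPE vocab files.
    bs = list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

def split_token(mergeable_ranks, token, max_rank):
    # Re-run BPE on `token` using only merges of lower rank, recovering the pair it was built from.
    parts = [bytes([b]) for b in token]
    while len(parts) > 1:
        best_idx, best_rank = None, None
        for i in range(len(parts) - 1):
            rank = mergeable_ranks.get(parts[i] + parts[i + 1])
            if rank is not None and rank < max_rank and (best_rank is None or rank < best_rank):
                best_idx, best_rank = i, rank
        if best_idx is None:
            break
        parts = parts[:best_idx] + [parts[best_idx] + parts[best_idx + 1]] + parts[best_idx + 2:]
    return parts

enc = tiktoken.get_encoding("gpt2")  # any tiktoken encoding
byte_map = bytes_to_unicode()
to_str = lambda token_bytes: "".join(byte_map[b] for b in token_bytes)

vocab = {to_str(token): rank for token, rank in enc._mergeable_ranks.items()}
merges = []
for token, rank in enc._mergeable_ranks.items():  # iteration follows rank order
    if len(token) > 1:
        parts = split_token(enc._mergeable_ranks, token, rank)
        if len(parts) == 2:  # every trained merge should decompose into exactly two parts
            merges.append((to_str(parts[0]), to_str(parts[1])))
# vocab + merges can now initialize tokenizers.models.BPE; special tokens and the
# encoding's own pre-tokenization regex still need to be carried over separately.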
@raphael-sch
raphael-sch / run_padding_prefill.py
Last active March 21, 2024 14:49
Using padding and prefill during inference with Hugging Face transformers
import re
import sys
import time
import tqdm
import torch
from datasets import load_dataset, concatenate_datasets
from transformers import AutoTokenizer, LlamaForCausalLM
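The script itself is cut off above. A hedged sketch (not the gist's code) of the general pattern: batched greedy decoding with left padding, where the padded prompts are "prefilled" in one forward pass and decoding then reuses the KV cache. The checkpoint name is an assumption; any causal LM with a compatible tokenizer works.

import torch
from transformers import AutoTokenizer, LlamaForCausalLM

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # Llama ships without a pad token
model = LlamaForCausalLM.from_pretrained(model_name).to(device).eval()

prompts = ["The capital of France is", "def fibonacci(n):"]
batch = tokenizer(prompts, return_tensors="pt", padding=True).to(device)
attention_mask = batch["attention_mask"]
# Left padding puts real tokens at the right edge, so position ids must skip the pads.
position_ids = (attention_mask.cumsum(-1) - 1).clamp(min=0)

with torch.no_grad():
    # Prefill: one forward pass over the whole padded prompt builds the KV cache.
    out = model(input_ids=batch["input_ids"], attention_mask=attention_mask,
                position_ids=position_ids, use_cache=True)
    past = out.past_key_values
    next_tokens = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    generated = [next_tokens]
    for _ in range(32):  # decode one token at a time, reusing the cache
        attention_mask = torch.cat([attention_mask, torch.ones_like(next_tokens)], dim=-1)
        position_ids = attention_mask.sum(-1, keepdim=True) - 1
        out = model(input_ids=next_tokens, attention_mask=attention_mask,
                    position_ids=position_ids, past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_tokens = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated.append(next_tokens)

print(tokenizer.batch_decode(torch.cat(generated, dim=-1), skip_special_tokens=True))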