ChrisHayduk / merge_qlora_with_quantized_model.py
Last active June 5, 2024 20:39
Merging QLoRA weights with quantized model
"""
The code below combines approaches published by both @eugene-yh and @jinyongyoo on Github.
Thanks for the contributions guys!
"""
import torch
import peft
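
The preview above ends at the imports. As a rough illustration of the end goal only (not the gist's code, which additionally handles a 4-bit bitsandbytes-quantized base), merging a trained LoRA adapter into an unquantized base model with peft looks approximately like this; the model and adapter paths are placeholders:

```python
# Illustrative sketch only -- not the gist's code; identifiers are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # assumed base model
ADAPTER_PATH = "./qlora-adapter"          # assumed path to the trained adapter

# Load the base model in half precision; merging directly into 4-bit weights
# is the harder case the gist above addresses.
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)

# Attach the trained LoRA adapter and fold its weights into the base layers.
model = PeftModel.from_pretrained(base, ADAPTER_PATH)
merged = model.merge_and_unload()

# Save the merged model (and tokenizer) for later quantization or inference.
merged.save_pretrained("./merged-model")
AutoTokenizer.from_pretrained(BASE_MODEL).save_pretrained("./merged-model")
```
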
adrienbrault / llama2-mac-gpu.sh
Last active April 22, 2024 08:47
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
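
The preview stops before the download and run steps. As an aside (not part of the gist), the same kind of model can also be driven from Python via the llama-cpp-python bindings; note that recent versions of those bindings expect GGUF files rather than the older GGML format referenced above, so the model path below is an assumption:

```python
# Not from the gist: a minimal llama-cpp-python sketch with Metal GPU offload.
# Assumes `pip install llama-cpp-python` and a local GGUF model file.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-13b-chat.Q4_0.gguf",  # placeholder path
    n_gpu_layers=1,   # any value > 0 enables Metal offload on Apple Silicon
    n_ctx=2048,       # context window
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=128)
print(out["choices"][0]["text"])
```
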
younesbelkada / finetune_llama_v2.py
Last active May 14, 2024 05:46
Fine tune Llama v2 models on Guanaco Dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
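
The preview above stops inside the license header. For orientation only, a condensed sketch of what QLoRA fine-tuning of Llama v2 on the Guanaco dataset typically looked like with a 2023-era version of trl's SFTTrainer follows; it is not the gist's actual code, the hyperparameters and identifiers are illustrative, and the exact SFTTrainer keyword arguments have shifted across trl versions:

```python
# Illustrative sketch only -- not the gist's code; values are placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# Load the base model in 4-bit (QLoRA-style) to keep memory low.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Train only small LoRA adapter matrices on top of the frozen 4-bit weights.
peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1,
                         task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",   # Guanaco examples are stored under "text"
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="./results",
                           per_device_train_batch_size=4, max_steps=500),
)
trainer.train()
```
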

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning": learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much

crucialfelix / 2023-01-13
Last active October 8, 2023 20:37
Get Chrome history for a single day and create a markdown file summarizing browsing activity
# [[2023-01-13]] log
## URLs
- **www.amazon.de**
  - [Prime Video - Video on Demand - Online-Videothek: Filme und Serien online ansehen oder als Einzelabruf online leihen oder kaufen](https://www.amazon.de/Amazon-Video/b/?node=3010075031&ref=atv_surl_aiv&redirectToCMP=1) /Amazon-Video/b/
- **www.youtube.com**
  - [Parwal vs Kundru | कुंदरु या परवल | Pointed Gourd Vs Ivy Gourd | Everyday Life # 267 - YouTube](https://www.youtube.com/watch?v=6v4XD9T9-Rg&themeRefresh=1) /watch
  - [YouTube](https://www.youtube.com/) /
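
Only the gist's sample output is shown above. As an illustration of one way such a log could be generated (not necessarily the author's approach), a small Python sketch that reads Chrome's History SQLite database for a given day and prints a markdown list grouped by domain might look like this; the profile path, date, and output shape are assumptions:

```python
# Illustrative sketch only -- paths and formatting are assumptions.
import shutil, sqlite3, tempfile
from collections import defaultdict
from datetime import date, datetime, timedelta
from pathlib import Path
from urllib.parse import urlparse

# Default Chrome profile location on macOS (assumed; adjust per platform/profile).
HISTORY = Path.home() / "Library/Application Support/Google/Chrome/Default/History"

def chrome_time(us: int) -> datetime:
    # Chrome stores timestamps as microseconds since 1601-01-01 (the Windows epoch).
    return datetime(1601, 1, 1) + timedelta(microseconds=us)

def history_for_day(day: date):
    # Copy the DB first: Chrome keeps it locked while the browser is running.
    with tempfile.NamedTemporaryFile(suffix=".sqlite") as tmp:
        shutil.copy(HISTORY, tmp.name)
        con = sqlite3.connect(tmp.name)
        rows = con.execute(
            "SELECT urls.url, urls.title, visits.visit_time "
            "FROM visits JOIN urls ON urls.id = visits.url"
        ).fetchall()
        con.close()
    by_domain = defaultdict(list)
    for url, title, ts in rows:
        if chrome_time(ts).date() == day:
            by_domain[urlparse(url).netloc].append((title or url, url))
    return by_domain

if __name__ == "__main__":
    day = date(2023, 1, 13)
    print(f"# [[{day}]] log\n## URLs")
    for domain, links in sorted(history_for_day(day).items()):
        print(f"- **{domain}**")
        for title, url in links:
            print(f"  - [{title}]({url}) {urlparse(url).path}")
```
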
gvolpe / IsUUID.scala
Last active July 22, 2022 16:17
Scala 3 custom newtypes
import java.util.UUID

import monocle.Iso

trait IsUUID[A]:
  def iso: Iso[UUID, A]

object IsUUID:
  def apply[A](using ev: IsUUID[A]): IsUUID[A] = ev

Understanding Comparative Benchmarks

I'm going to do something that I don't normally do, which is to say I'm going to talk about comparative benchmarks. In general, I try to confine performance discussion to absolute metrics as much as possible, or comparisons to other well-defined neutral reference points. This is precisely why Cats Effect's readme mentions a comparison to a fixed thread pool, rather than doing comparisons with other asynchronous runtimes like Akka or ZIO. Comparisons in general devolve very quickly into emotional marketing.

But, just once, today we're going to talk about the emotional marketing. In particular, we're going to look at Cats Effect 3 and ZIO 2. Now, for context, as of this writing the ZIO team has released only the first milestone of ZIO 2; they have not released a final 2.0 version. This implies straight off the bat that we're comparing apples to oranges a bit, since Cats Effect 3 has been out and in production for months. However, there has been a post going around which cites various compar

{-# language OverloadedLists #-}
{-# OPTIONS_GHC -Wall #-}
module Overlays where
import Prelude hiding ((.))
import GHC.Stack
import Debug.Trace
@jaspervdj
jaspervdj / README.md
Last active April 13, 2023 13:59
ZuriHac Calendar

Fibers

Fibers are an abstraction over sequential computation, similar to threads but at a higher level. There are two ways to think about this model: by example, and abstractly from first principles. We'll start with the example.

(credit here is very much due to Fabio Labella, whose incredible Scala World talk describes these ideas far better than I can)

Callback Sequentialization

Consider the following three functions