
shawwn / llama-dl-dmca.md
Last active April 5, 2023 02:35
I prompted GPT-4 to draft a DMCA counterclaim to Meta's DMCA against llama-dl: https://github.com/github/dmca/blob/master/2023/03/2023-03-21-meta.md

Prompt

Meta has issued a DMCA copyright claim against llama-dl, a GitHub repository, for distributing LLaMA, a 65-billion parameter language model. Here's the full text of the DMCA claim. Based on this, draft a DMCA counterclaim on the basis that neural networks trained on public data are not copyrightable.

--

VIA EMAIL: Notice of Claimed Infringement via Email
URL: http://www.github.com
DATE: 03/20/2023

shawwn / syncscroll
Created April 1, 2021 03:24
Synchronized scrolling (and zooming) across browser tabs
javascript:(function () {
  // Publish this tab's scroll position and zoom level to localStorage.
  window.addEventListener('scroll', (e) => {
    localStorage.setItem('scrollY', scrollY);
    localStorage.setItem('zoom', document.body.style.zoom);
  });
  // 'storage' events fire in the *other* tabs, which mirror the change.
  window.addEventListener('storage', (e) => {
    if (e.key === 'scrollY') { window.scrollTo(0, parseInt(e.newValue)); }
    if (e.key === 'zoom') { document.body.style.zoom = e.newValue; }
  });
})()
shawwn / hon_timestamps.txt
Last active March 30, 2023 04:44
Heroes of Newerth file timestamps. Generated with `tree -Dhf`
[ 544 Mar 29 20:10] .
├── [ 160 Jan 18 2011] ./Abaddon Share
│   ├── [ 288 Jan 18 2011] ./Abaddon Share/CVS
│   │   ├── [ 3 Jan 18 2011] ./Abaddon Share/CVS/Entries
│   │   ├── [ 0 Jan 18 2011] ./Abaddon Share/CVS/Entries.Extra
│   │   ├── [ 0 Jan 18 2011] ./Abaddon Share/CVS/Entries.Extra.Old
│   │   ├── [ 31 Jan 18 2011] ./Abaddon Share/CVS/Entries.Log
│   │   ├── [ 0 Jan 18 2011] ./Abaddon Share/CVS/Entries.Old
│   │   ├── [ 15 Jan 18 2011] ./Abaddon Share/CVS/Repository
│   │   └── [ 45 Jan 18 2011] ./Abaddon Share/CVS/Root
shawwn / llama_65b_data.txt
Last active March 13, 2023 15:17
(Generated by LLaMA 65B)
I am Lieutenant Commander Data, and I am an android.
I was created by Doctor Soong in the mid-2300s on Earth's moon colony.
My positronic brain is a network of trillions of interconnected
neurons that allow me to experience consciousness and sentience as
only living beings can—and yet my mind operates at speeds far greater
than those of most unenhanced organics. This makes it possible for me
to perform complex analyses almost instantaneously while
simultaneously running thousands of background processes without any
decrease in efficiency or awareness. It also lets me communicate with
shawwn / example.sh
Created March 6, 2023 05:17
How I run 65B using my fork of llama at https://github.com/shawwn/llama
# Pick one: the model-parallel degree must match the model size.
mp=1; size=7B;  # to run 7B
mp=8; size=65B; # to run 65B
for seed in $(randint 1000000)  # `randint` is a helper that prints a random seed, not a standard command
do
  export TARGET_FOLDER=~/ml/data/llama/LLaMA
  time python3 -m torch.distributed.run --nproc_per_node $mp example.py \
    --ckpt_dir $TARGET_FOLDER/$size \
    --tokenizer_path $TARGET_FOLDER/tokenizer.model \
    --seed $seed --max_seq_len 2048 --max_gen_len 2048 \
    --count 0 | tee -a ${size}_startrek.txt
done
> initializing model parallel with size 8
> initializing ddp with size 1
> initializing pipeline with size 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
{"seed": 187073, "temp": 0.7, "top_p": 0.0, "top_k": 40, "repetition_penalty": 1.1764705882352942, "max_seq_len": 2048, "max_gen_len": 2048}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Loading
> initializing model parallel with size 8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
{"seed": 374894, "temp": 0.7, "top_p": 0.0, "top_k": 40, "repetition_penalty": 1.1764705882352942, "max_seq_len": 512, "max_gen_len": 511}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Loading
Loaded in 8.72 seconds
============== sample 1 =================
I believe the meaning of life is to grow, learn and give.
shawwn / tpu_setup.sh
#!/bin/bash
# wget https://gist.githubusercontent.com/shawwn/88f64f7294c5a2e5e009d277a429ff2e/raw/tpu_setup.sh
# bash tpu_setup.sh
set -x
pip3 install --upgrade pip
# upgrade to nightly jax.
pip3 install --force-reinstall --pre -U -f https://storage.googleapis.com/jax-releases/libtpu_releases.html 'jax[tpu]' 'jaxlib'
pip3 install rich
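After the reinstall, a quick sanity check (plain jax API, not part of the gist) confirms the TPU runtime is actually visible:
import jax
print(jax.default_backend())  # should print 'tpu' on a TPU VM
print(jax.devices())          # should list the TPU cores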
shawwn / adam.py
Last active February 15, 2023 19:48
Reformulating the Adam optimizer to gain an intuition about what it's doing.
import jax.numpy as jnp

def lerp(a, b, t):
    # Linear interpolation: returns a at t == 0, b at t == 1.
    return (b - a) * t + a

def bias(i, x, beta):
    # Adam's bias-correction denominator at step i: 1 - beta**(i + 1).
    return 1 - jnp.asarray(beta, x.dtype) ** (i + 1)

@optimizer
def adam(step_size, b1=0.9, b2=0.999, eps=1e-8) -> OptimizerResult:
    """Construct optimizer triple for Adam.
shawwn / adamsp.py
Created February 9, 2023 18:44
AdamSP optimizer
def lerp(a, b, t):
    return (b - a) * t + a

@optimizer
def adamsp(step_size=1e-1, b1=0.5):
    """Construct optimizer triple for AdamSP.

    Args:
      step_size: positive scalar, or a callable representing a step size schedule
        that maps the iteration index to a positive scalar (default 1e-1).
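AdamSP's update rule isn't visible in this preview. For context, a sketch of how any @optimizer-decorated triple gets driven, assuming the decorator is the one from jax.example_libraries.optimizers (shown with the stock adam for concreteness):
import jax
import jax.numpy as jnp
from jax.example_libraries import optimizers

# An @optimizer-decorated function yields an (init, update, get_params) triple.
opt_init, opt_update, get_params = optimizers.adam(step_size=1e-3)

def loss(params):
    return jnp.sum(params ** 2)

opt_state = opt_init(jnp.ones(4))
for i in range(100):
    g = jax.grad(loss)(get_params(opt_state))
    opt_state = opt_update(i, g, opt_state)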