@shawwn
Created March 6, 2023 05:17
Gist: shawwn/65a2ae80c8c0369a8cb8ff24d88d4f80
How I run 65B using my fork of llama at https://github.com/shawwn/llama
# mp=1; size=7B;  # to run 7B instead
mp=8; size=65B;   # to run 65B
export TARGET_FOLDER=~/ml/data/llama/LLaMA
# `randint` is a helper from the author's environment that prints a random integer in [0, N).
for seed in $(randint 1000000)
do
time python3 -m torch.distributed.run --nproc_per_node $mp example.py --ckpt_dir $TARGET_FOLDER/$size --tokenizer_path $TARGET_FOLDER/tokenizer.model --seed $seed --max_seq_len 2048 --max_gen_len 2048 --count 0 | tee -a ${size}_startrek.txt
done
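The script calls a `randint` helper that is not a standard command; it appears to come from the author's shell environment. A minimal sketch of what it might look like, assuming it prints one random integer in [0, N):

```shell
# Hypothetical helper matching the usage `$(randint 1000000)` above.
# Not from the gist; shells out to Python for an unbiased random integer.
randint() {
  python3 -c "import random, sys; print(random.randrange(int(sys.argv[1])))" "$1"
}

randint 1000000
```

Any command that emits a number on stdout would work here, since the loop only uses it as a `--seed` value.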
@Pliman

Pliman commented Mar 12, 2023

I fixed it in meta-llama/llama#180
