@LiutongZhou
Last active July 11, 2023 15:37
Memory Efficient Training of LLMs

Optimizers

| Short Name | Full Name | URL |
|---|---|---|
| CAME | Confidence-guided Adaptive Memory Efficient Optimization (ACL 2023 outstanding paper award) | https://github.com/huawei-noah/Pretrained-Language-Model/blob/master/CAME/came.py |
| LOMO | LOw-Memory Optimization | https://github.com/OpenLMLab/LOMO/blob/main/src/lomo.py |
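A rough sketch of the fused-update idea behind LOMO (this is not the lomo.py API; the toy model, names, and hand-derived backward pass below are purely illustrative): each parameter's gradient is applied the moment it is computed during the backward pass and then discarded, so the full set of gradient tensors never has to coexist in memory the way it does with a separate `optimizer.step()`.

```python
import numpy as np

# Toy two-layer linear model trained with least squares.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4)) * 0.1
W2 = rng.normal(size=(1, 8)) * 0.1
x = rng.normal(size=(16, 4))
y = rng.normal(size=(16, 1))
lr = 0.02

def loss():
    return float(((x @ W1.T @ W2.T - y) ** 2).mean())

before = loss()
for _ in range(100):
    h = x @ W1.T                     # forward pass
    pred = h @ W2.T
    g_out = 2 * (pred - y) / pred.size   # dL/dpred
    # Backward proceeds layer by layer. Each weight gradient is applied
    # immediately (fused SGD update) and freed, instead of being stored
    # until every layer's gradient exists, which is the memory saving.
    g_W2 = g_out.T @ h               # dL/dW2
    g_h = g_out @ W2                 # dL/dh -- must be taken before W2 changes
    W2 -= lr * g_W2                  # fused update; g_W2 can be discarded now
    g_W1 = g_h.T @ x                 # dL/dW1
    W1 -= lr * g_W1                  # fused update; g_W1 can be discarded now
after = loss()
```

At no point do `g_W1` and `g_W2` need to be alive simultaneously, which is the property LOMO exploits at LLM scale.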

Low Rank Decomposition

LoRA (Low-Rank Adaptation): https://github.com/microsoft/LoRA or https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py
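As a minimal NumPy sketch of the low-rank idea (not the PEFT implementation; dimensions and names are illustrative): the pretrained weight W stays frozen, and only a rank-r update B·A scaled by alpha/r is trained, so trainable-parameter count drops from d_out·d_in to r·(d_in + d_out).

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16   # illustrative sizes

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # trainable; zero init so the delta starts at 0

def lora_forward(x):
    # y = x W^T + (alpha / r) * x A^T B^T  -- base path plus low-rank update
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(4, d_in))
y = lora_forward(x)

full_params = W.size            # 4096 if W were trained directly
lora_params = A.size + B.size   # 1024 trainable parameters instead
```

Because B is initialized to zero, the adapted model starts out exactly equal to the frozen base model, and only the small A and B matrices (plus their optimizer state) consume training memory.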

Adapters

Quantization

QLoRA (Quantized LoRA): https://github.com/artidoro/qlora/blob/main/qlora.py
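A hedged sketch of the blockwise quantization that underlies this family of methods (plain symmetric absmax quantization, a simpler cousin of the NF4 scheme QLoRA actually uses; block size and names are illustrative): weights are split into small blocks, each block keeps one floating-point scale, and values are rounded to 4-bit integer levels.

```python
import numpy as np

def quantize_blockwise(w, block=64, levels=7):
    # Symmetric absmax quantization: one fp scale per block of `block` weights,
    # values rounded to integers in [-levels, levels] (4-bit signed uses levels=7).
    blocks = w.reshape(-1, block)                      # assumes w.size divisible by block
    scales = np.abs(blocks).max(axis=1, keepdims=True)
    scales[scales == 0] = 1.0                          # avoid division by zero on all-zero blocks
    q = np.round(blocks / scales * levels).astype(np.int8)
    return q, scales

def dequantize_blockwise(q, scales, shape, levels=7):
    # Reverse the mapping; reconstruction error per element is at most scale / (2 * levels).
    return (q.astype(np.float32) / levels * scales).reshape(shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64)).astype(np.float32)
q, scales = quantize_blockwise(w)
w_hat = dequantize_blockwise(q, scales, w.shape)
```

Storing `q` (4 bits per weight once packed) plus one scale per 64-weight block is what shrinks the frozen base model, while the LoRA adapters are trained on top in higher precision.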
