Last active: March 23, 2025 20:58
MLX Fine-tuning Google Gemma
@johngroves glad to hear!
You're right, thanks for pointing that out. Just updated that.
On my M1 with 8 GB I had to change a few things. The first was to switch to a smaller model; for example, the training command should look like this:
!TOKENIZERS_PARALLELISM=false python -m mlx_lm.lora \
    --model "google/gemma-3-1b-it" \
    --train \
    --iters 200 \
    --data data \
    --adapter-path ./checkpoints \
    --save-every 100 \
    --batch-size 1 \
    --grad-checkpoint
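For context, `mlx_lm.lora` reads its training data from the directory passed via `--data`, expecting `train.jsonl` and `valid.jsonl` files with one JSON object per line containing a `"text"` field. A minimal sketch to create that layout (the example strings are placeholders, not real training data):

```python
import json
from pathlib import Path

# mlx_lm.lora expects train.jsonl and valid.jsonl inside the --data directory;
# each line is a JSON object with a "text" field holding one training example.
data_dir = Path("data")
data_dir.mkdir(exist_ok=True)

train_examples = [
    {"text": "<start_of_turn>user\nHello<end_of_turn>\n<start_of_turn>model\nHi!<end_of_turn>"},
    {"text": "<start_of_turn>user\nPing<end_of_turn>\n<start_of_turn>model\nPong<end_of_turn>"},
]
valid_examples = train_examples[:1]  # tiny validation split, just for illustration

for name, rows in [("train.jsonl", train_examples), ("valid.jsonl", valid_examples)]:
    with open(data_dir / name, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")
```

The `<start_of_turn>` tags follow Gemma's chat format; adjust the example text to your actual dataset.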
and then loading the latest checkpoint should look like this:
# Load the fine-tuned model with LoRA weights
from mlx_lm import load

model_lora, _ = load(
    "google/gemma-3-1b-it",
    adapter_path="./checkpoints",  # adapters.npz is the final checkpoint saved at the end of training
)
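For a rough sense of why the 1B model plus `--batch-size 1` and `--grad-checkpoint` fit in 8 GB, here is a back-of-the-envelope estimate (the numbers are illustrative assumptions, not measurements):

```python
# Rough memory estimate for LoRA fine-tuning a ~1B-parameter model.
# Assumption: frozen weights held in bfloat16 (2 bytes per parameter).
params = 1e9
bytes_per_param = 2
weights_gb = params * bytes_per_param / 1e9  # ~2 GB just for the frozen weights

# LoRA trains only small adapter matrices, so optimizer state stays tiny;
# activations dominate the rest, which --grad-checkpoint (recompute activations
# in the backward pass) and --batch-size 1 keep within the remaining budget.
print(f"frozen weights: ~{weights_gb:.0f} GB of the 8 GB budget")
```

A larger model in the same precision roughly doubles per extra billion parameters, which is why dropping to the 1B variant was the first change.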
This was super helpful!
One change might be needed: