@jphme
Created July 26, 2023 09:23
Finetuning Settings Llama2 Chat 13B German
val_set_size: 0.03
adapter: qlora
lora_model_dir:
sequence_len: 4096
max_packed_sequence_len: 4096
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
batch_size: 8
micro_batch_size: 4
num_epochs: 2
optimizer: paged_adamw_32bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.00005
train_on_inputs: true
group_by_length: true
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention:
gptq_groupsize:
gptq_model_v1:
warmup_steps: 10
eval_steps: 120
save_steps: 500
debug:
deepspeed:
weight_decay: 0.01
fsdp:
fsdp_config:
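
These keys follow the axolotl fine-tuning config schema (QLoRA adapter, 4k packed sequences, LoRA rank 8 with alpha 16 and dropout 0.05). As a rough guide to what the LoRA block corresponds to outside axolotl, the sketch below maps it onto a Hugging Face PEFT LoraConfig over a 4-bit quantized base model. The base model name and the explicit target-module list are illustrative assumptions: this excerpt omits base_model, and lora_target_linear: true simply means "target all linear projections".

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantized base model (QLoRA); the model name is an assumption,
# since the gist excerpt does not include base_model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches bf16: true
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# lora_target_linear: true targets every linear projection; spelled out here.
lora_config = LoraConfig(
    r=8,                 # lora_r
    lora_alpha=16,       # lora_alpha
    lora_dropout=0.05,   # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()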
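
The optimizer and schedule settings translate roughly into Hugging Face TrainingArguments as sketched below (batch_size 8 with micro_batch_size 4 implies gradient accumulation of 2 per device). This is an approximation for orientation, not the axolotl trainer itself, and the output directory is a placeholder.

from transformers import TrainingArguments

# Sketch of the training hyperparameters above in TrainingArguments terms;
# "out" is a placeholder output directory.
training_args = TrainingArguments(
    output_dir="out",
    num_train_epochs=2,                # num_epochs
    per_device_train_batch_size=4,     # micro_batch_size
    gradient_accumulation_steps=2,     # batch_size 8 / micro_batch_size 4
    learning_rate=5e-5,                # learning_rate: 0.00005
    lr_scheduler_type="cosine",        # lr_scheduler
    warmup_steps=10,
    weight_decay=0.01,
    optim="paged_adamw_32bit",         # optimizer
    bf16=True,
    tf32=True,
    gradient_checkpointing=True,
    group_by_length=True,
    logging_steps=1,
    evaluation_strategy="steps",
    eval_steps=120,
    save_steps=500,
)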