@TimDettmers
Created August 10, 2023 04:57
Hyperparameter grid search for LLaMA models on Alpaca dataset for QLoRA finetuning
This table contains data from multiple software versions. Some hyperparameter values are "NaN", meaning that hyperparameter did not exist in that software version. The best 7B result is 40.08 MMLU.
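
Each record below reports the mean MMLU accuracy over repeated runs, its standard error, and a 95% normal-approximation confidence interval (mean +/- 1.96*SE); single-run entries report NaN for both. A minimal sketch that reproduces these summaries, assuming the SE uses the sample standard deviation (ddof=1), which is consistent with the two-run entries; the per-run accuracies in the example are hypothetical values chosen to match one reported summary:

import numpy as np

def summarize(accs):
    # Mean accuracy, standard error, and 95% CI as reported in each record.
    accs = np.asarray(accs, dtype=float)
    mean = accs.mean()
    # The sample standard deviation (ddof=1) is undefined for a single run,
    # which yields the NaN SE/CI of the "Sample size: 1" entries.
    se = accs.std(ddof=1) / np.sqrt(accs.size)
    return mean, se, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical per-run accuracies consistent with the record
# "acc mean (SE): 0.3545 (0.0105). 95% CI (0.334, 0.375). Sample size: 2":
print(summarize([0.344, 0.365]))

The Config fields map onto standard Hugging Face training arguments plus the QLoRA-specific quantization and LoRA options. A hedged sketch of how one nf4 record (learning_rate 2e-05, max_steps 1200, gradient_accumulation_steps 64, per_device_train_batch_size 2) might be wired up; the model loading step is omitted, and the LoRA rank, alpha, and target modules are assumptions, since the gist does not list them:

from transformers import TrainingArguments, BitsAndBytesConfig
from peft import LoraConfig

# quant_type in the records is "nf4" or "fp4" (4-bit quantization).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

# lora_dropout comes from the records; r, lora_alpha, and target_modules
# are assumed for illustration.
lora_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.0,
                         target_modules=["q_proj", "v_proj"])

# The remaining Config fields are regular TrainingArguments.
training_args = TrainingArguments(
    output_dir="./output",
    learning_rate=2e-5,
    adam_beta2=0.999,
    max_grad_norm=1.0,
    max_steps=1200,
    lr_scheduler_type="cosine",
    weight_decay=0.0,
    gradient_accumulation_steps=64,
    per_device_train_batch_size=2,
)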
================================================================================
Config: learning_rate: 0.005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.2290 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.2290 (0.0000). 95% CI (0.229, 0.229). Sample size: 2
================================================================================
Config: learning_rate: 0.005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 7320, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.2410 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.2520 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.2550 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.2550 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 7320, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.2550 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 7320, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.2550 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 7320, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.2550 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.2550 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 2440, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 32 , per_device_train_batch_size: 2
acc mean (SE): 0.3343 (0.0296). 95% CI (0.276, 0.392). Sample size: 3
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 4880, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 16 , per_device_train_batch_size: 2
acc mean (SE): 0.3417 (0.0283). 95% CI (0.286, 0.397). Sample size: 4
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 4880, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 16 , per_device_train_batch_size: 2
acc mean (SE): 0.3440 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 800 , lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3478 (0.0041). 95% CI (0.340, 0.356). Sample size: 4
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3490 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 400 , lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3510 (0.0040). 95% CI (0.343, 0.359). Sample size: 4
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 800 , lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3523 (0.0048). 95% CI (0.343, 0.362). Sample size: 4
================================================================================
Config: learning_rate: 5e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 10 , lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 2
acc mean (SE): 0.3530 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 400 , lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3535 (0.0055). 95% CI (0.343, 0.364). Sample size: 4
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 2440, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 32 , per_device_train_batch_size: 2
acc mean (SE): 0.3545 (0.0105). 95% CI (0.334, 0.375). Sample size: 2
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 9760, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 8 , per_device_train_batch_size: 2
acc mean (SE): 0.3555 (0.0115). 95% CI (0.333, 0.378). Sample size: 2
================================================================================
Config: learning_rate: 0.0005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3570 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 400 , lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3580 (0.0000). 95% CI (0.358, 0.358). Sample size: 2
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 400 , lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3585 (0.0005). 95% CI (0.358, 0.359). Sample size: 2
================================================================================
Config: learning_rate: 0.0005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3590 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 7320, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3590 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 1200, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3590 (0.0010). 95% CI (0.357, 0.361). Sample size: 2
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 800 , lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3605 (0.0015). 95% CI (0.358, 0.363). Sample size: 2
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 2400, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3610 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 1200, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 8 , per_device_train_batch_size: 16
acc mean (SE): 0.3615 (0.0015). 95% CI (0.359, 0.364). Sample size: 2
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 1600, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3620 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 2000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3620 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 1200, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3620 (0.0000). 95% CI (0.362, 0.362). Sample size: 2
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 1200, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3623 (0.0030). 95% CI (0.356, 0.368). Sample size: 12
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 1200, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 16 , per_device_train_batch_size: 8
acc mean (SE): 0.3625 (0.0015). 95% CI (0.360, 0.365). Sample size: 2
================================================================================
Config: learning_rate: 2e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 1200, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 32 , per_device_train_batch_size: 4
acc mean (SE): 0.3625 (0.0015). 95% CI (0.360, 0.365). Sample size: 2
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 9760, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 8 , per_device_train_batch_size: 2
acc mean (SE): 0.3635 (0.0195). 95% CI (0.325, 0.402). Sample size: 2
================================================================================
Config: learning_rate: 0.0005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 7320, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3640 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 2440, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 32 , per_device_train_batch_size: 2
acc mean (SE): 0.3640 (0.0010). 95% CI (0.362, 0.366). Sample size: 2
================================================================================
Config: learning_rate: 0.0005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 7320, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3650 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3660 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 3250, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3670 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3670 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3670 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 800 , lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3675 (0.0005). 95% CI (0.367, 0.368). Sample size: 2
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3680 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3680 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3680 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 4880, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 16 , per_device_train_batch_size: 2
acc mean (SE): 0.3685 (0.0015). 95% CI (0.366, 0.371). Sample size: 2
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3690 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3690 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3690 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 3250, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3690 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3690 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 3000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 8
acc mean (SE): 0.3700 (0.0020). 95% CI (0.366, 0.374). Sample size: 2
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3700 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3700 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3700 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3700 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 1200, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3705 (0.0015). 95% CI (0.368, 0.373). Sample size: 2
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3710 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 9760, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 8 , per_device_train_batch_size: 2
acc mean (SE): 0.3710 (0.0000). 95% CI (0.371, 0.371). Sample size: 2
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3710 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3710 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 1200, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: fp4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3713 (0.0007). 95% CI (0.370, 0.373). Sample size: 4
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3720 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3720 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3720 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3720 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3720 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3720 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3720 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3730 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 7320, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3730 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 6000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 8
acc mean (SE): 0.3730 (0.0000). 95% CI (0.373, 0.373). Sample size: 2
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3730 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3730 (0.0010). 95% CI (0.371, 0.375). Sample size: 2
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 13000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 6
acc mean (SE): 0.3733 (0.0032). 95% CI (0.367, 0.380). Sample size: 3
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 1200, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 64 , per_device_train_batch_size: 2
acc mean (SE): 0.3735 (0.0015). 95% CI (0.371, 0.376). Sample size: 2
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 4875, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 32
acc mean (SE): 0.3737 (0.0118). 95% CI (0.350, 0.397). Sample size: 3
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3740 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3740 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3740 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3750 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3750 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3750 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3750 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3760 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3760 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3760 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3760 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 2440, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 32 , per_device_train_batch_size: 2
acc mean (SE): 0.3760 (0.0000). 95% CI (0.376, 0.376). Sample size: 2
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3760 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3760 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3770 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3780 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3780 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3780 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 9000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 8
acc mean (SE): 0.3795 (0.0005). 95% CI (0.379, 0.380). Sample size: 2
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3800 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3800 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3800 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3800 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3800 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 12000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 8
acc mean (SE): 0.3805 (0.0015). 95% CI (0.378, 0.383). Sample size: 2
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3810 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3810 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 7320, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3810 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3813 (0.0027). 95% CI (0.376, 0.387). Sample size: 3
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3820 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3820 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3820 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3820 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0005, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3820 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3820 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3820 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3820 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3830 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3830 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3830 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 7320, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3830 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 9760, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 8 , per_device_train_batch_size: 2
acc mean (SE): 0.3830 (0.0000). 95% CI (0.383, 0.383). Sample size: 2
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 15000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 8
acc mean (SE): 0.3835 (0.0015). 95% CI (0.381, 0.386). Sample size: 2
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3840 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3840 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3840 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3840 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3840 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3840 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3847 (0.0003). 95% CI (0.384, 0.385). Sample size: 3
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 6
acc mean (SE): 0.3847 (0.0012). 95% CI (0.382, 0.387). Sample size: 3
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3850 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3850 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3850 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3850 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3850 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3850 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3850 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 18000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 8
acc mean (SE): 0.3850 (0.0010). 95% CI (0.383, 0.387). Sample size: 2
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 13000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3853 (0.0013). 95% CI (0.383, 0.388). Sample size: 3
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3860 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3860 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3860 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3860 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3860 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3870 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3870 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 4875, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3870 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 13000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 12
acc mean (SE): 0.3870 (0.0017). 95% CI (0.384, 0.390). Sample size: 3
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3870 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3870 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3870 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3870 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 4880, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 16 , per_device_train_batch_size: 2
acc mean (SE): 0.3870 (0.0010). 95% CI (0.385, 0.389). Sample size: 2
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 9750, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3877 (0.0015). 95% CI (0.385, 0.391). Sample size: 3
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3880 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3880 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3880 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3880 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3880 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3890 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3890 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 21000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 8
acc mean (SE): 0.3890 (0.0020). 95% CI (0.385, 0.393). Sample size: 2
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3890 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3890 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 24000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 8
acc mean (SE): 0.3895 (0.0005). 95% CI (0.389, 0.390). Sample size: 2
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3900 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 7320, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3900 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 27000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 8
acc mean (SE): 0.3900 (0.0040). 95% CI (0.382, 0.398). Sample size: 2
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3900 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3900 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3900 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3900 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3905 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3910 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3910 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3910 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3910 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 7320, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3910 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.1 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3920 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3920 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3920 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3920 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3920 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3920 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3920 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.01, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3920 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3920 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3930 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3930 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3930 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.01, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3930 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3930 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 7320, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 6 , per_device_train_batch_size: 2
acc mean (SE): 0.3930 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3930 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3935 (0.0005). 95% CI (0.393, 0.394). Sample size: 2
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3940 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3940 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 4875, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3940 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3940 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3940 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0005, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3940 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3940 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 1.0 , max_steps: 30000, lr_scheduler_type: <SchedulerType.COSINE: cosine>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 8
acc mean (SE): 0.3940 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3950 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3950 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3950 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3960 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3960 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3960 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3970 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 6500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3980 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3980 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3980 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3980 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.1 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3980 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3980 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.5 , max_steps: 6500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3990 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3990 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.3995 (0.0005). 95% CI (0.399, 0.400). Sample size: 2
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4000 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4000 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4000 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 6500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4000 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4020 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.99, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4020 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4020 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4030 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4030 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4030 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 19500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4030 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4040 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 19500, lr_scheduler_type: <SchedulerType.LINEAR: linear>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4040 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.2 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4050 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.1 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4060 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.001, base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4070 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/7B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4080 (0.0012). 95% CI (0.406, 0.410). Sample size: 3
================================================================================
Config: learning_rate: NaN , adam_beta2: 0.999, lora_dropout: NaN , max_grad_norm: NaN , max_steps: NaN , lr_scheduler_type: NaN , weight_decay: NaN , base_model: NaN , quant_type: NaN , gradient_accumulation_steps: NaN , per_device_train_batch_size: NaN
acc mean (SE): 0.4338 (0.0480). 95% CI (0.340, 0.528). Sample size: 5
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.6 , max_steps: 3250, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4572 (0.0007). 95% CI (0.456, 0.459). Sample size: 6
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 3250, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4587 (0.0006). 95% CI (0.457, 0.460). Sample size: 6
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.3 , max_steps: 3250, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4588 (0.0013). 95% CI (0.456, 0.461). Sample size: 6
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.6 , max_steps: 3250, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4608 (0.0014). 95% CI (0.458, 0.464). Sample size: 6
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/13B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4660 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 6500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/13B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4670 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 6500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/13B, quant_type: fp4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4680 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.3 , max_steps: 7000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4688 (0.0002). 95% CI (0.469, 0.469). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.3 , max_steps: 3250, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4704 (0.0017). 95% CI (0.467, 0.474). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.6 , max_steps: 3250, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4715 (0.0004). 95% CI (0.471, 0.472). Sample size: 6
================================================================================
Config: learning_rate: 6e-05, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/13B, quant_type: fp4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4720 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.6 , max_steps: 3250, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4722 (0.0005). 95% CI (0.471, 0.473). Sample size: 6
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4732 (0.0005). 95% CI (0.472, 0.474). Sample size: 6
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 7000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4732 (0.0005). 95% CI (0.472, 0.474). Sample size: 6
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4735 (0.0015). 95% CI (0.471, 0.476). Sample size: 2
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4740 (0.0004). 95% CI (0.473, 0.475). Sample size: 4
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/13B, quant_type: fp4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4740 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.6 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4740 (0.0007). 95% CI (0.473, 0.475). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 3250, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4740 (0.0000). 95% CI (0.474, 0.474). Sample size: 3
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.6 , max_steps: 7000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4740 (0.0012). 95% CI (0.472, 0.476). Sample size: 6
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.6 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4742 (0.0003). 95% CI (0.474, 0.475). Sample size: 6
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.3 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4747 (0.0002). 95% CI (0.474, 0.475). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 1 , lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: fp4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4751 (0.0292). 95% CI (0.418, 0.532). Sample size: 14
================================================================================
Config: learning_rate: 0.0001, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.6 , max_steps: 7000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4753 (0.0011). 95% CI (0.473, 0.477). Sample size: 6
================================================================================
Config: learning_rate: 0.0003, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 6666, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 6
acc mean (SE): 0.4757 (0.0003). 95% CI (0.475, 0.476). Sample size: 3
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.3 , max_steps: 7000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4758 (0.0020). 95% CI (0.472, 0.480). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/13B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4760 (0.0000). 95% CI (0.476, 0.476). Sample size: 2
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 6500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/13B, quant_type: fp4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4770 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 7000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4782 (0.0039). 95% CI (0.471, 0.486). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.6 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4783 (0.0018). 95% CI (0.475, 0.482). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4790 (0.0009). 95% CI (0.477, 0.481). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.6 , max_steps: 7000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4802 (0.0006). 95% CI (0.479, 0.481). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.6 , max_steps: 7000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4802 (0.0023). 95% CI (0.476, 0.485). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 1 , lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4814 (0.0284). 95% CI (0.426, 0.537). Sample size: 14
================================================================================
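Note on the statistics lines: the 95% CIs in this dump are consistent with mean +/- 1.96*SE, where SE is the sample standard deviation over repeated runs divided by sqrt(n); e.g. 0.4814 +/- 1.96*0.0284 gives (0.426, 0.537) for the max_steps: 1 record above. Below is a minimal sketch of that computation; the per-seed accuracies passed in are hypothetical, and only the formula is implied by the printed numbers.

import math
import statistics

def summarize(accs):
    """Reproduce one 'acc mean (SE)' line from a list of per-run MMLU accuracies."""
    n = len(accs)
    mean = statistics.mean(accs)
    # SE is undefined for a single run, matching the '(nan)' entries in this dump.
    se = statistics.stdev(accs) / math.sqrt(n) if n > 1 else float("nan")
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    return (f"acc mean (SE): {mean:.4f} ({se:.4f}). "
            f"95% CI ({lo:.3f}, {hi:.3f}). Sample size: {n}")

print(summarize([0.476, 0.482, 0.479]))  # hypothetical accuracies from three seeds
================================================================================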
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.6 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4828 (0.0005). 95% CI (0.482, 0.484). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4850 (0.0011). 95% CI (0.483, 0.487). Sample size: 4
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.05, max_grad_norm: 0.3 , max_steps: 5000, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 8
acc mean (SE): 0.4885 (0.0009). 95% CI (0.487, 0.490). Sample size: 6
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.0 , max_grad_norm: 0.3 , max_steps: 6666, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: NaN , quant_type: nf4 , gradient_accumulation_steps: 2 , per_device_train_batch_size: 6
acc mean (SE): 0.4897 (0.0003). 95% CI (0.489, 0.490). Sample size: 3
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 6500, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/13B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.4900 (nan). 95% CI (nan, nan). Sample size: 1
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/30B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.5725 (0.0005). 95% CI (0.572, 0.573). Sample size: 2
================================================================================
Config: learning_rate: 0.0002, adam_beta2: 0.999, lora_dropout: 0.1 , max_grad_norm: 0.3 , max_steps: 9750, lr_scheduler_type: <SchedulerType.CONSTANT: constant>, weight_decay: 0.0 , base_model: /gscratch/zlab/llama/65B, quant_type: nf4 , gradient_accumulation_steps: 1 , per_device_train_batch_size: 16
acc mean (SE): 0.6300 (0.0000). 95% CI (0.630, 0.630). Sample size: 2
================================================================================
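Note on reproducing a row: below is a hedged sketch (not the script that generated this table) of how the hyperparameters in the 65B row above map onto the transformers/peft/bitsandbytes APIs commonly used for QLoRA finetuning. The LoRA rank/alpha, target modules, compute dtype, and output path are not recorded in this table and are assumptions.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

# quant_type: nf4 from the row above; the compute dtype is an assumption.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "/gscratch/zlab/llama/65B",  # base_model from the row above
    quantization_config=bnb_config,
)
lora_config = LoraConfig(
    r=64,                 # assumption: LoRA rank is not logged in this table
    lora_alpha=16,        # assumption: alpha is not logged in this table
    lora_dropout=0.1,     # lora_dropout: 0.1
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# The remaining fields map one-to-one onto TrainingArguments.
args = TrainingArguments(
    output_dir="qlora-65b-alpaca",       # hypothetical path
    learning_rate=2e-4,                  # learning_rate: 0.0002
    adam_beta2=0.999,
    max_grad_norm=0.3,
    max_steps=9750,
    lr_scheduler_type="constant",
    weight_decay=0.0,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,
)
================================================================================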