| Model | AGIEval | GPT4All | TruthfulQA | BigBench | Average |
|---|---|---|---|---|---|
| Meta-Llama-3-8B-Instruct | 41.22 | 69.86 | 51.65 | 42.64 | 51.34 |
| Task | Version | Metric | Value |   | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 27.56 | ± | 2.81 |
|   |   | acc_norm | 25.20 | ± | 2.73 |
| agieval_logiqa_en | 0 | acc | 36.10 | ± | 1.88 |
|   |   | acc_norm | 38.40 | ± | 1.91 |
| agieval_lsat_ar | 0 | acc | 22.61 | ± | 2.76 |
|   |   | acc_norm | 21.30 | ± | 2.71 |
| agieval_lsat_lr | 0 | acc | 41.96 | ± | 2.19 |
|   |   | acc_norm | 41.57 | ± | 2.18 |
| agieval_lsat_rc | 0 | acc | 60.22 | ± | 2.99 |
|   |   | acc_norm | 57.25 | ± | 3.02 |
| agieval_sat_en | 0 | acc | 68.45 | ± | 3.25 |
|   |   | acc_norm | 66.50 | ± | 3.30 |
| agieval_sat_en_without_passage | 0 | acc | 44.17 | ± | 3.47 |
|   |   | acc_norm | 42.72 | ± | 3.45 |
| agieval_sat_math | 0 | acc | 41.82 | ± | 3.33 |
|   |   | acc_norm | 36.82 | ± | 3.26 |
AGIEval average: 41.22%
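The 41.22% figure is not part of the harness table itself; it appears to be the unweighted mean of one score per task, taking `acc_norm` where it is reported and falling back to `acc` otherwise. A minimal sketch of that arithmetic follows; the `suite_average` helper is illustrative, not part of any evaluation tool, and the convention is an assumption inferred from the numbers above.

```python
# Minimal sketch (assumed convention, not the original evaluation code):
# the suite average is the unweighted mean of one score per task,
# preferring acc_norm when reported and falling back to acc otherwise.
from statistics import mean

# (acc, acc_norm) pairs copied from the AGIEval table above.
agieval = {
    "agieval_aqua_rat": (27.56, 25.20),
    "agieval_logiqa_en": (36.10, 38.40),
    "agieval_lsat_ar": (22.61, 21.30),
    "agieval_lsat_lr": (41.96, 41.57),
    "agieval_lsat_rc": (60.22, 57.25),
    "agieval_sat_en": (68.45, 66.50),
    "agieval_sat_en_without_passage": (44.17, 42.72),
    "agieval_sat_math": (41.82, 36.82),
}

def suite_average(scores):
    """Unweighted mean of acc_norm, or acc for tasks without acc_norm."""
    return mean(norm if norm is not None else acc for acc, norm in scores.values())

print(f"{suite_average(agieval):.2f}")  # -> 41.22
```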
| Task | Version | Metric | Value |   | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 53.16 | ± | 1.46 |
|   |   | acc_norm | 56.74 | ± | 1.45 |
| arc_easy | 0 | acc | 81.65 | ± | 0.79 |
|   |   | acc_norm | 79.76 | ± | 0.82 |
| boolq | 1 | acc | 83.09 | ± | 0.66 |
| hellaswag | 0 | acc | 57.69 | ± | 0.49 |
|   |   | acc_norm | 75.82 | ± | 0.43 |
| openbookqa | 0 | acc | 34.20 | ± | 2.12 |
|   |   | acc_norm | 43.20 | ± | 2.22 |
| piqa | 0 | acc | 78.67 | ± | 0.96 |
|   |   | acc_norm | 78.51 | ± | 0.96 |
| winogrande | 0 | acc | 71.90 | ± | 1.26 |
GPT4All average: 69.86%
| Task | Version | Metric | Value |   | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 36.23 | ± | 1.68 |
|   |   | mc2 | 51.65 | ± | 1.52 |
TruthfulQA average: 51.65%
| Task | Version | Metric | Value |   | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 52.63 | ± | 3.63 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 68.56 | ± | 2.42 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 32.56 | ± | 2.92 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 24.23 | ± | 2.26 |
|   |   | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 31.60 | ± | 2.08 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 22.00 | ± | 1.57 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 60.00 | ± | 2.83 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 32.00 | ± | 2.09 |
| bigbench_navigate | 0 | multiple_choice_grade | 53.20 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 63.65 | ± | 1.08 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 44.64 | ± | 2.35 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 25.05 | ± | 1.37 |
| bigbench_snarks | 0 | multiple_choice_grade | 53.59 | ± | 3.72 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 50.30 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 54.30 | ± | 1.58 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.00 | ± | 1.17 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.26 | ± | 0.90 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 60.00 | ± | 2.83 |
BigBench average: 42.64%
Average score (mean of the four benchmark averages): 51.34%
Elapsed time: 02:17:22
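As a quick sanity check, the overall 51.34% score matches the plain unweighted mean of the four suite averages reported above; a minimal sketch, assuming that rollup convention:

```python
# Minimal check (assumed convention): the overall score is the plain mean
# of the four suite averages listed in the summary table.
from statistics import mean

suite_averages = {
    "AGIEval": 41.22,
    "GPT4All": 69.86,
    "TruthfulQA": 51.65,
    "BigBench": 42.64,
}
print(f"{mean(suite_averages.values()):.2f}")  # -> 51.34
```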