| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| phixtral-4x2_8-gates-poc | 31.78 | 70.22 | 47.01 | 37.02 | 46.51 |
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 22.44 | ± | 2.62 |
| | | acc_norm | 22.83 | ± | 2.64 |
| agieval_logiqa_en | 0 | acc | 28.88 | ± | 1.78 |
| | | acc_norm | 29.34 | ± | 1.79 |
| agieval_lsat_ar | 0 | acc | 22.17 | ± | 2.75 |
| | | acc_norm | 21.74 | ± | 2.73 |
| agieval_lsat_lr | 0 | acc | 31.57 | ± | 2.06 |
| | | acc_norm | 27.84 | ± | 1.99 |
| agieval_lsat_rc | 0 | acc | 38.29 | ± | 2.97 |
| | | acc_norm | 33.09 | ± | 2.87 |
| agieval_sat_en | 0 | acc | 60.19 | ± | 3.42 |
| | | acc_norm | 54.37 | ± | 3.48 |
| agieval_sat_en_without_passage | 0 | acc | 40.78 | ± | 3.43 |
| | | acc_norm | 35.92 | ± | 3.35 |
| agieval_sat_math | 0 | acc | 30.00 | ± | 3.10 |
| | | acc_norm | 29.09 | ± | 3.07 |
Average: 31.78%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 50.51 | ± | 1.46 |
| | | acc_norm | 52.47 | ± | 1.46 |
| arc_easy | 0 | acc | 79.84 | ± | 0.82 |
| | | acc_norm | 76.77 | ± | 0.87 |
| boolq | 1 | acc | 83.82 | ± | 0.64 |
| hellaswag | 0 | acc | 55.88 | ± | 0.50 |
| | | acc_norm | 73.77 | ± | 0.44 |
| openbookqa | 0 | acc | 36.60 | ± | 2.16 |
| | | acc_norm | 48.80 | ± | 2.24 |
| piqa | 0 | acc | 79.22 | ± | 0.95 |
| | | acc_norm | 79.65 | ± | 0.94 |
| winogrande | 0 | acc | 76.24 | ± | 1.20 |
Average: 70.22%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 33.78 | ± | 1.66 |
| | | mc2 | 47.01 | ± | 1.52 |
Average: 47.01%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 56.32 | ± | 3.61 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 58.54 | ± | 2.57 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 39.92 | ± | 3.05 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 20.06 | ± | 2.12 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 24.40 | ± | 1.92 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 15.43 | ± | 1.37 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 41.33 | ± | 2.85 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 44.20 | ± | 2.22 |
| bigbench_navigate | 0 | multiple_choice_grade | 53.70 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 52.35 | ± | 1.12 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 32.81 | ± | 2.22 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 24.95 | ± | 1.37 |
| bigbench_snarks | 0 | multiple_choice_grade | 59.67 | ± | 3.66 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 53.65 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 16.40 | ± | 1.17 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 18.88 | ± | 1.11 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 12.34 | ± | 0.79 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 41.33 | ± | 2.85 |
Average: 37.02%
Average score: 46.51%
Elapsed time: 02:23:39
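
For readers who want to sanity-check the summary numbers: the suite averages above appear to follow the usual convention for this benchmark set, i.e. each task contributes its `acc_norm` score where one is reported (plain `acc` for boolq and winogrande, `mc2` for TruthfulQA, `multiple_choice_grade` for Bigbench), and the overall score is the mean of the four suite averages. A minimal plain-Python sketch with the values copied from the tables above (variable names are my own, not part of the evaluation tooling):

```python
# Reproduce the per-suite averages and the overall score from the per-task values.
agieval    = [22.83, 29.34, 21.74, 27.84, 33.09, 54.37, 35.92, 29.09]          # acc_norm
gpt4all    = [52.47, 76.77, 83.82, 73.77, 48.80, 79.65, 76.24]                  # acc_norm / acc
truthfulqa = [47.01]                                                            # mc2
bigbench   = [56.32, 58.54, 39.92, 20.06, 24.40, 15.43, 41.33, 44.20, 53.70,
              52.35, 32.81, 24.95, 59.67, 53.65, 16.40, 18.88, 12.34, 41.33]    # multiple_choice_grade

def mean(xs):
    return sum(xs) / len(xs)

suites = {"AGIEval": agieval, "GPT4All": gpt4all,
          "TruthfulQA": truthfulqa, "Bigbench": bigbench}

for name, scores in suites.items():
    print(name, mean(scores))
# AGIEval ≈ 31.7775, GPT4All ≈ 70.2171, TruthfulQA = 47.01, Bigbench ≈ 37.0156,
# which round to the reported 31.78 / 70.22 / 47.01 / 37.02.

print("Average score:", mean([mean(s) for s in suites.values()]))
# ≈ 46.51, matching the reported overall average.
```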