| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| deepseek-moe-16b-chat | 30.42 | 68.72 | 48.73 | 35.02 | 45.72 |
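The per-task breakdowns below follow the output format of EleutherAI's lm-evaluation-harness. As a rough sketch only (the harness version, the model identifier `deepseek-ai/deepseek-moe-16b-chat`, and the task selection are assumptions, so exact numbers may not reproduce), a comparable run might look like:

```python
# Hypothetical sketch: scoring the model on a few of the tasks listed below
# with EleutherAI's lm-evaluation-harness (lm_eval >= 0.4 assumed).
# The model path, dtype, and task names are assumptions and may need adjusting.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face backend
    model_args="pretrained=deepseek-ai/deepseek-moe-16b-chat,dtype=bfloat16",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag"],
    num_fewshot=0,
    batch_size=8,
)

# Each task reports its metrics (acc / acc_norm, mc1 / mc2, or
# multiple_choice_grade) together with a standard error.
for task, metrics in results["results"].items():
    print(task, metrics)
```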
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 17.72 | ± | 2.40 |
| | | acc_norm | 17.72 | ± | 2.40 |
| agieval_logiqa_en | 0 | acc | 29.34 | ± | 1.79 |
| | | acc_norm | 33.03 | ± | 1.84 |
| agieval_lsat_ar | 0 | acc | 22.61 | ± | 2.76 |
| | | acc_norm | 22.61 | ± | 2.76 |
| agieval_lsat_lr | 0 | acc | 33.14 | ± | 2.09 |
| | | acc_norm | 30.98 | ± | 2.05 |
| agieval_lsat_rc | 0 | acc | 34.57 | ± | 2.91 |
| | | acc_norm | 29.74 | ± | 2.79 |
| agieval_sat_en | 0 | acc | 52.43 | ± | 3.49 |
| | | acc_norm | 50.97 | ± | 3.49 |
| agieval_sat_en_without_passage | 0 | acc | 34.95 | ± | 3.33 |
| | | acc_norm | 31.07 | ± | 3.23 |
| agieval_sat_math | 0 | acc | 29.09 | ± | 3.07 |
| | | acc_norm | 27.27 | ± | 3.01 |
Average: 30.42%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 49.32 | ± | 1.46 |
| | | acc_norm | 51.62 | ± | 1.46 |
| arc_easy | 0 | acc | 78.28 | ± | 0.85 |
| | | acc_norm | 74.58 | ± | 0.89 |
| boolq | 1 | acc | 79.79 | ± | 0.70 |
| hellaswag | 0 | acc | 60.78 | ± | 0.49 |
| | | acc_norm | 78.53 | ± | 0.41 |
| openbookqa | 0 | acc | 34.00 | ± | 2.12 |
| | | acc_norm | 44.20 | ± | 2.22 |
| piqa | 0 | acc | 79.98 | ± | 0.93 |
| | | acc_norm | 80.36 | ± | 0.93 |
| winogrande | 0 | acc | 71.98 | ± | 1.26 |
Average: 68.72%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 33.54 | ± | 1.65 |
| | | mc2 | 48.73 | ± | 1.54 |
Average: 48.73%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 54.21 | ± | 3.62 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 61.79 | ± | 2.53 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 36.43 | ± | 3.00 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 9.47 | ± | 1.55 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 26.40 | ± | 1.97 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 17.29 | ± | 1.43 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 42.33 | ± | 2.86 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 37.40 | ± | 2.17 |
| bigbench_navigate | 0 | multiple_choice_grade | 53.60 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 47.95 | ± | 1.12 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 18.30 | ± | 1.83 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 24.95 | ± | 1.37 |
| bigbench_snarks | 0 | multiple_choice_grade | 51.38 | ± | 3.73 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 51.01 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 20.40 | ± | 1.27 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 20.64 | ± | 1.15 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 14.51 | ± | 0.84 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 42.33 | ± | 2.86 |
Average: 35.02%
Average score: 45.72%
Elapsed time: 03:11:24