| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---:|---:|---:|---:|---:|
| mistral-ft-optimized-1218 | 44.74 | 75.60 | 59.89 | 47.17 | 56.85 |

**AGIEval**

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|---|---:|
| agieval_aqua_rat | 0 | acc | 25.20 | ± | 2.73 |
| | | acc_norm | 24.02 | ± | 2.69 |
| agieval_logiqa_en | 0 | acc | 39.32 | ± | 1.92 |
| | | acc_norm | 40.40 | ± | 1.92 |
| agieval_lsat_ar | 0 | acc | 24.35 | ± | 2.84 |
| | | acc_norm | 23.48 | ± | 2.80 |
| agieval_lsat_lr | 0 | acc | 52.16 | ± | 2.21 |
| | | acc_norm | 52.75 | ± | 2.21 |
| agieval_lsat_rc | 0 | acc | 62.45 | ± | 2.96 |
| | | acc_norm | 59.85 | ± | 2.99 |
| agieval_sat_en | 0 | acc | 78.16 | ± | 2.89 |
| | | acc_norm | 77.67 | ± | 2.91 |
| agieval_sat_en_without_passage | 0 | acc | 47.57 | ± | 3.49 |
| | | acc_norm | 46.12 | ± | 3.48 |
| agieval_sat_math | 0 | acc | 34.09 | ± | 3.20 |
| | | acc_norm | 33.64 | ± | 3.19 |

Average: 44.74%
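
The tables above do not state how each suite average is derived; it is consistent with an unweighted mean over tasks, taking acc_norm where reported and acc otherwise. A minimal sketch of that assumption, reproducing the AGIEval figure from the rows above:

```python
# Minimal sketch (assumption, not stated in the results above): a suite average
# is the unweighted mean over tasks, using acc_norm where reported, acc otherwise.
agieval_acc_norm = [24.02, 40.40, 23.48, 52.75, 59.85, 77.67, 46.12, 33.64]
agieval_average = sum(agieval_acc_norm) / len(agieval_acc_norm)
print(f"AGIEval average: {agieval_average:.2f}")  # 44.74
```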

**GPT4All**

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|---|---:|
| arc_challenge | 0 | acc | 59.73 | ± | 1.43 |
| | | acc_norm | 63.23 | ± | 1.41 |
| arc_easy | 0 | acc | 85.48 | ± | 0.72 |
| | | acc_norm | 83.92 | ± | 0.75 |
| boolq | 1 | acc | 88.20 | ± | 0.56 |
| hellaswag | 0 | acc | 65.92 | ± | 0.47 |
| | | acc_norm | 84.54 | ± | 0.36 |
| openbookqa | 0 | acc | 35.60 | ± | 2.14 |
| | | acc_norm | 46.80 | ± | 2.23 |
| piqa | 0 | acc | 82.26 | ± | 0.89 |
| | | acc_norm | 84.22 | ± | 0.85 |
| winogrande | 0 | acc | 78.30 | ± | 1.16 |

Average: 75.60%

**TruthfulQA**

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|---|---:|
| truthfulqa_mc | 1 | mc1 | 43.08 | ± | 1.73 |
| | | mc2 | 59.89 | ± | 1.52 |

Average: 59.89%

**Bigbench**

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|---|---:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 57.89 | ± | 3.59 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 66.94 | ± | 2.45 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 39.53 | ± | 3.05 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 22.84 | ± | 2.22 |
| | | exact_str_match | 3.90 | ± | 1.02 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 33.80 | ± | 2.12 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 22.57 | ± | 1.58 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 55.67 | ± | 2.87 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 40.60 | ± | 2.20 |
| bigbench_navigate | 0 | multiple_choice_grade | 52.00 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 70.75 | ± | 1.02 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 51.34 | ± | 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 35.47 | ± | 1.52 |
| bigbench_snarks | 0 | multiple_choice_grade | 74.03 | ± | 3.27 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 73.73 | ± | 1.40 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 54.90 | ± | 1.57 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.80 | ± | 1.19 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 18.57 | ± | 0.93 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 55.67 | ± | 2.87 |

Average: 47.17%

Average score: 56.85%
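
The overall score is consistent with an unweighted mean of the four suite averages reported above; a quick check under that assumption:

```python
# Assumption: the overall score is the unweighted mean of the four suite averages.
suite_averages = {"AGIEval": 44.74, "GPT4All": 75.60, "TruthfulQA": 59.89, "Bigbench": 47.17}
overall = sum(suite_averages.values()) / len(suite_averages)
print(f"Average score: {overall:.2f}")  # 56.85
```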