Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
---|---|---|---|---|---|
OmniTruthyBeagle-7B | 45.65 | 77.22 | 75.77 | 50.21 | 62.21 |
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 28.35 | ± | 2.83 |
| | | acc_norm | 26.38 | ± | 2.77 |
| agieval_logiqa_en | 0 | acc | 38.86 | ± | 1.91 |
| | | acc_norm | 38.86 | ± | 1.91 |
| agieval_lsat_ar | 0 | acc | 24.78 | ± | 2.85 |
| | | acc_norm | 23.91 | ± | 2.82 |
| agieval_lsat_lr | 0 | acc | 54.12 | ± | 2.21 |
| | | acc_norm | 54.71 | ± | 2.21 |
| agieval_lsat_rc | 0 | acc | 66.17 | ± | 2.89 |
| | | acc_norm | 65.80 | ± | 2.90 |
| agieval_sat_en | 0 | acc | 79.13 | ± | 2.84 |
| | | acc_norm | 79.13 | ± | 2.84 |
| agieval_sat_en_without_passage | 0 | acc | 45.63 | ± | 3.48 |
| | | acc_norm | 44.17 | ± | 3.47 |
| agieval_sat_math | 0 | acc | 35.00 | ± | 3.22 |
| | | acc_norm | 32.27 | ± | 3.16 |
Average: 45.65%
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 66.55 | ± | 1.38 |
| | | acc_norm | 68.26 | ± | 1.36 |
| arc_easy | 0 | acc | 87.12 | ± | 0.69 |
| | | acc_norm | 82.66 | ± | 0.78 |
| boolq | 1 | acc | 87.74 | ± | 0.57 |
| hellaswag | 0 | acc | 69.01 | ± | 0.46 |
| | | acc_norm | 87.04 | ± | 0.34 |
| openbookqa | 0 | acc | 38.80 | ± | 2.18 |
| | | acc_norm | 49.00 | ± | 2.24 |
| piqa | 0 | acc | 82.97 | ± | 0.88 |
| | | acc_norm | 85.36 | ± | 0.82 |
| winogrande | 0 | acc | 80.51 | ± | 1.11 |
Average: 77.22%
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 61.20 | ± | 1.71 |
| | | mc2 | 75.77 | ± | 1.41 |
Average: 75.77%
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 57.89 | ± | 3.59 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 63.41 | ± | 2.51 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 52.33 | ± | 3.12 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 23.96 | ± | 2.26 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 34.80 | ± | 2.13 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 25.29 | ± | 1.64 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 60.00 | ± | 2.83 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 54.40 | ± | 2.23 |
| bigbench_navigate | 0 | multiple_choice_grade | 56.50 | ± | 1.57 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 69.50 | ± | 1.03 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 55.58 | ± | 2.35 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 40.88 | ± | 1.56 |
| bigbench_snarks | 0 | multiple_choice_grade | 74.59 | ± | 3.25 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 75.86 | ± | 1.36 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 55.70 | ± | 1.57 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 23.12 | ± | 1.19 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 19.94 | ± | 0.96 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 60.00 | ± | 2.83 |
Average: 50.21%
Average score: 62.21%
Elapsed time: 02:51:13
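Each suite average above appears to be the unweighted mean of its per-task scores, and the overall score the unweighted mean of the four suite averages. A minimal Python sketch of that final step, using only the suite averages from the summary table (the four values and the 62.21% result come from this card; the variable names are illustrative):

```python
# Unweighted mean of the four suite averages reported above
# for OmniTruthyBeagle-7B (AGIEval, GPT4All, TruthfulQA, Bigbench).
suite_averages = {
    "AGIEval": 45.65,
    "GPT4All": 77.22,
    "TruthfulQA": 75.77,
    "Bigbench": 50.21,
}

overall = sum(suite_averages.values()) / len(suite_averages)
print(f"Average score: {overall:.2f}%")  # Average score: 62.21%
```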