| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---:|---:|---:|---:|---:|
| phi-2-OpenHermes-2.5 | 30.27 | 71.18 | 43.87 | 35.90 | 45.30 |
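Each suite score above is the unweighted mean of the per-task values in the detail tables below (acc_norm where reported, otherwise acc; mc2 for TruthfulQA; multiple_choice_grade for Bigbench), and the overall average is the mean of the four suite scores. A minimal sanity check, with the values copied from the tables below:

```python
# Recompute the suite averages and the overall average from the
# per-task scores reported in the detail tables below.
agieval = [19.69, 31.80, 19.57, 28.04, 32.71, 47.57, 36.41, 26.36]          # acc_norm
gpt4all = [53.24, 78.70, 84.22, 73.80, 50.60, 79.98, 77.74]                 # acc_norm (acc for boolq, winogrande)
truthfulqa = [43.87]                                                        # mc2
bigbench = [61.05, 59.35, 40.31, 10.03, 25.40, 16.71, 41.67, 39.40, 50.00,
            56.10, 26.12, 24.95, 58.56, 50.00, 14.60, 17.52, 12.80, 41.67]  # multiple_choice_grade

def mean(xs):
    return sum(xs) / len(xs)

suites = [mean(agieval), mean(gpt4all), mean(truthfulqa), mean(bigbench)]
print([round(s, 2) for s in suites])  # [30.27, 71.18, 43.87, 35.9]
print(round(mean(suites), 1))         # 45.3
```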
| Task | Version | Metric | Value |   | Stderr |
|---|---:|---|---:|---|---:|
| agieval_aqua_rat | 0 | acc | 18.90 | ± | 2.46 |
| | | acc_norm | 19.69 | ± | 2.50 |
| agieval_logiqa_en | 0 | acc | 28.73 | ± | 1.77 |
| | | acc_norm | 31.80 | ± | 1.83 |
| agieval_lsat_ar | 0 | acc | 19.13 | ± | 2.60 |
| | | acc_norm | 19.57 | ± | 2.62 |
| agieval_lsat_lr | 0 | acc | 30.20 | ± | 2.03 |
| | | acc_norm | 28.04 | ± | 1.99 |
| agieval_lsat_rc | 0 | acc | 37.92 | ± | 2.96 |
| | | acc_norm | 32.71 | ± | 2.87 |
| agieval_sat_en | 0 | acc | 52.91 | ± | 3.49 |
| | | acc_norm | 47.57 | ± | 3.49 |
| agieval_sat_en_without_passage | 0 | acc | 39.32 | ± | 3.41 |
| | | acc_norm | 36.41 | ± | 3.36 |
| agieval_sat_math | 0 | acc | 30.00 | ± | 3.10 |
| | | acc_norm | 26.36 | ± | 2.98 |
Average: 30.27%
| Task | Version | Metric | Value |   | Stderr |
|---|---:|---|---:|---|---:|
| arc_challenge | 0 | acc | 50.94 | ± | 1.46 |
| | | acc_norm | 53.24 | ± | 1.46 |
| arc_easy | 0 | acc | 80.77 | ± | 0.81 |
| | | acc_norm | 78.70 | ± | 0.84 |
| boolq | 1 | acc | 84.22 | ± | 0.64 |
| hellaswag | 0 | acc | 55.94 | ± | 0.50 |
| | | acc_norm | 73.80 | ± | 0.44 |
| openbookqa | 0 | acc | 38.80 | ± | 2.18 |
| | | acc_norm | 50.60 | ± | 2.24 |
| piqa | 0 | acc | 79.11 | ± | 0.95 |
| | | acc_norm | 79.98 | ± | 0.93 |
| winogrande | 0 | acc | 77.74 | ± | 1.17 |
Average: 71.18%
| Task | Version | Metric | Value |   | Stderr |
|---|---:|---|---:|---|---:|
| truthfulqa_mc | 1 | mc1 | 30.72 | ± | 1.62 |
| | | mc2 | 43.87 | ± | 1.52 |
Average: 43.87%
| Task | Version | Metric | Value |   | Stderr |
|---|---:|---|---:|---|---:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 61.05 | ± | 3.55 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 59.35 | ± | 2.56 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 40.31 | ± | 3.06 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 10.03 | ± | 1.59 |
| | | exact_str_match | 6.69 | ± | 1.32 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 25.40 | ± | 1.95 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 16.71 | ± | 1.41 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 41.67 | ± | 2.85 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 39.40 | ± | 2.19 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.00 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 56.10 | ± | 1.11 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 26.12 | ± | 2.08 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 24.95 | ± | 1.37 |
| bigbench_snarks | 0 | multiple_choice_grade | 58.56 | ± | 3.67 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 50.00 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 14.60 | ± | 1.12 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 17.52 | ± | 1.08 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 12.80 | ± | 0.80 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 41.67 | ± | 2.85 |
Average: 35.90%
Average score: 45.30%
Elapsed time: 01:23:31
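For context, results in this format typically come from EleutherAI's lm-evaluation-harness run over the Nous benchmark suite. The exact harness version and settings behind the run above are not recorded here, so the following is only a hypothetical sketch: it assumes a harness build that exposes the task names exactly as they appear in the tables, and a placeholder model id.

```python
# Hypothetical reproduction sketch, NOT the recorded eval command:
# assumes an lm-evaluation-harness version exposing these exact task
# names and a placeholder Hugging Face model id.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                    # transformers backend
    model_args="pretrained=phi-2-OpenHermes-2.5",  # placeholder repo id
    tasks=[
        "agieval_aqua_rat",  # ...plus the other tasks listed in the tables above
        "arc_challenge",
        "truthfulqa_mc",
        "bigbench_causal_judgement",
    ],
    batch_size="auto",
)
for task, metrics in results["results"].items():
    print(task, metrics)  # per-task acc / acc_norm / stderr, as tabulated above
```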