Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
---|---|---|---|---|---|
ChimeraLlama-3-8B-v2 | 41.01 | 71.11 | 55.48 | 44.26 | 52.96 |
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 29.92 | ± | 2.88 |
| | | acc_norm | 27.56 | ± | 2.81 |
| agieval_logiqa_en | 0 | acc | 37.48 | ± | 1.90 |
| | | acc_norm | 37.63 | ± | 1.90 |
| agieval_lsat_ar | 0 | acc | 21.30 | ± | 2.71 |
| | | acc_norm | 22.17 | ± | 2.75 |
| agieval_lsat_lr | 0 | acc | 43.73 | ± | 2.20 |
| | | acc_norm | 42.55 | ± | 2.19 |
| agieval_lsat_rc | 0 | acc | 59.11 | ± | 3.00 |
| | | acc_norm | 56.88 | ± | 3.03 |
| agieval_sat_en | 0 | acc | 67.96 | ± | 3.26 |
| | | acc_norm | 67.48 | ± | 3.27 |
| agieval_sat_en_without_passage | 0 | acc | 44.17 | ± | 3.47 |
| | | acc_norm | 38.83 | ± | 3.40 |
| agieval_sat_math | 0 | acc | 40.91 | ± | 3.32 |
| | | acc_norm | 35.00 | ± | 3.22 |
Average: 41.01%
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 53.92 | ± | 1.46 |
| | | acc_norm | 57.68 | ± | 1.44 |
| arc_easy | 0 | acc | 82.15 | ± | 0.79 |
| | | acc_norm | 79.63 | ± | 0.83 |
| boolq | 1 | acc | 84.28 | ± | 0.64 |
| hellaswag | 0 | acc | 59.60 | ± | 0.49 |
| | | acc_norm | 78.42 | ± | 0.41 |
| openbookqa | 0 | acc | 34.60 | ± | 2.13 |
| | | acc_norm | 44.60 | ± | 2.23 |
| piqa | 0 | acc | 79.05 | ± | 0.95 |
| | | acc_norm | 80.47 | ± | 0.92 |
| winogrande | 0 | acc | 72.69 | ± | 1.25 |
Average: 71.11%
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 38.43 | ± | 1.70 |
| | | mc2 | 55.48 | ± | 1.53 |
Average: 55.48%
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 58.42 | ± | 3.59 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 69.92 | ± | 2.39 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 32.56 | ± | 2.92 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 36.77 | ± | 2.55 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 30.20 | ± | 2.06 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 22.43 | ± | 1.58 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 58.33 | ± | 2.85 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 34.60 | ± | 2.13 |
| bigbench_navigate | 0 | multiple_choice_grade | 54.30 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 64.00 | ± | 1.07 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 50.45 | ± | 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 26.35 | ± | 1.40 |
| bigbench_snarks | 0 | multiple_choice_grade | 56.91 | ± | 3.69 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 50.41 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 53.10 | ± | 1.58 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.16 | ± | 1.18 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.37 | ± | 0.91 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 58.33 | ± | 2.85 |
Average: 44.26%
Average score: 52.96%
Elapsed time: 04:00:05
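For reference, the suite averages above are consistent with the usual convention for this kind of report: take `acc_norm` when a task reports it (`mc2` for TruthfulQA, `multiple_choice_grade` for BigBench), fall back to `acc` otherwise, average the tasks within each suite, then average the four suite scores. The short Python sketch below reproduces the GPT4All and overall averages from the table values; the dictionaries are copied by hand from the tables, not produced by any evaluation tool, and the function names are illustrative only.

```python
# Sketch (assumed convention): per-task scores use acc_norm where available,
# acc otherwise; suite score = unweighted mean; overall = mean of suite scores.

def suite_average(task_scores: dict[str, float]) -> float:
    """Unweighted mean of the selected per-task scores, in percent."""
    return sum(task_scores.values()) / len(task_scores)

# GPT4All suite, values copied from the table above
gpt4all = {
    "arc_challenge": 57.68,  # acc_norm
    "arc_easy": 79.63,       # acc_norm
    "boolq": 84.28,          # acc (no acc_norm reported)
    "hellaswag": 78.42,      # acc_norm
    "openbookqa": 44.60,     # acc_norm
    "piqa": 80.47,           # acc_norm
    "winogrande": 72.69,     # acc
}
print(round(suite_average(gpt4all), 2))  # reproduces the 71.11 reported above (up to rounding)

# Overall score = mean of the four suite averages
suites = {"AGIEval": 41.01, "GPT4All": 71.11, "TruthfulQA": 55.48, "Bigbench": 44.26}
print(round(suite_average(suites), 2))  # 52.965 before rounding, reported as 52.96
```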