
@mlabonne
Last active April 21, 2024 11:49
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---:|---:|---:|---:|---:|
| ChimeraLlama-3-8B | 39.12 | 71.81 | 52.40 | 42.98 | 51.58 |

AGIEval

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| agieval_aqua_rat | 0 | acc | 27.17 | ± 2.80 |
| | | acc_norm | 24.80 | ± 2.72 |
| agieval_logiqa_en | 0 | acc | 35.33 | ± 1.87 |
| | | acc_norm | 37.94 | ± 1.90 |
| agieval_lsat_ar | 0 | acc | 21.74 | ± 2.73 |
| | | acc_norm | 20.00 | ± 2.64 |
| agieval_lsat_lr | 0 | acc | 40.20 | ± 2.17 |
| | | acc_norm | 40.39 | ± 2.17 |
| agieval_lsat_rc | 0 | acc | 55.76 | ± 3.03 |
| | | acc_norm | 52.42 | ± 3.05 |
| agieval_sat_en | 0 | acc | 67.48 | ± 3.27 |
| | | acc_norm | 64.08 | ± 3.35 |
| agieval_sat_en_without_passage | 0 | acc | 41.26 | ± 3.44 |
| | | acc_norm | 37.86 | ± 3.39 |
| agieval_sat_math | 0 | acc | 40.91 | ± 3.32 |
| | | acc_norm | 35.45 | ± 3.23 |

Average: 39.12%
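The reported average is consistent with a plain mean of the acc_norm values in the table above. A minimal sketch to check this (the aggregation rule is an assumption inferred from the numbers, not stated in the harness output):

```python
# acc_norm values from the eight AGIEval subtasks above.
# Assumption: the benchmark average is their unweighted mean.
acc_norm = [24.80, 37.94, 20.00, 40.39, 52.42, 64.08, 37.86, 35.45]
average = sum(acc_norm) / len(acc_norm)
print(average)  # ≈ 39.12, matching the reported average
```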

GPT4All

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| arc_challenge | 0 | acc | 55.80 | ± 1.45 |
| | | acc_norm | 58.36 | ± 1.44 |
| arc_easy | 0 | acc | 83.04 | ± 0.77 |
| | | acc_norm | 81.61 | ± 0.79 |
| boolq | 1 | acc | 84.40 | ± 0.63 |
| hellaswag | 0 | acc | 60.60 | ± 0.49 |
| | | acc_norm | 79.05 | ± 0.41 |
| openbookqa | 0 | acc | 34.40 | ± 2.13 |
| | | acc_norm | 44.80 | ± 2.23 |
| piqa | 0 | acc | 79.71 | ± 0.94 |
| | | acc_norm | 81.07 | ± 0.91 |
| winogrande | 0 | acc | 73.40 | ± 1.24 |

Average: 71.81%
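The GPT4All average matches a mean that takes acc_norm where it is reported and falls back to acc for the tasks that only report acc (boolq and winogrande). A small sketch under that assumption:

```python
# One score per task. Assumption: acc_norm when available, otherwise acc
# (boolq and winogrande only report acc in the table above).
scores = {
    "arc_challenge": 58.36, "arc_easy": 81.61, "boolq": 84.40,
    "hellaswag": 79.05, "openbookqa": 44.80, "piqa": 81.07,
    "winogrande": 73.40,
}
average = sum(scores.values()) / len(scores)
print(average)  # ≈ 71.81, matching the reported average
```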

TruthfulQA

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| truthfulqa_mc | 1 | mc1 | 36.47 | ± 1.69 |
| | | mc2 | 52.40 | ± 1.50 |

Average: 52.40%

Bigbench

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 57.37 | ± 3.60 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 69.11 | ± 2.41 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 30.62 | ± 2.88 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 45.40 | ± 2.63 |
| | | exact_str_match | 0.00 | ± 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 29.80 | ± 2.05 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 21.57 | ± 1.56 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 51.67 | ± 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 37.00 | ± 2.16 |
| bigbench_navigate | 0 | multiple_choice_grade | 52.70 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 65.40 | ± 1.06 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 45.76 | ± 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 26.45 | ± 1.40 |
| bigbench_snarks | 0 | multiple_choice_grade | 55.25 | ± 3.71 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 50.30 | ± 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 43.80 | ± 1.57 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.64 | ± 1.18 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.14 | ± 0.90 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 51.67 | ± 2.89 |

Average: 42.98%

Average score: 51.58%
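The overall score is the unweighted mean of the four benchmark averages reported above; a quick sketch:

```python
# Overall score = plain mean of the four benchmark averages.
benchmarks = {
    "AGIEval": 39.12, "GPT4All": 71.81,
    "TruthfulQA": 52.40, "Bigbench": 42.98,
}
overall = sum(benchmarks.values()) / len(benchmarks)
print(overall)  # ≈ 51.58, the reported overall score
```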

Elapsed time: 03:28:31
