@mlabonne
Created January 23, 2024 18:16

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---:|---:|---:|---:|---:|
| Darewin-7B | 45.08 | 75.36 | 60.94 | 47.44 | 57.2 |

AGIEval

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| agieval_aqua_rat | 0 | acc | 25.59 | ± 2.74 |
| | | acc_norm | 25.59 | ± 2.74 |
| agieval_logiqa_en | 0 | acc | 40.55 | ± 1.93 |
| | | acc_norm | 41.01 | ± 1.93 |
| agieval_lsat_ar | 0 | acc | 23.04 | ± 2.78 |
| | | acc_norm | 21.30 | ± 2.71 |
| agieval_lsat_lr | 0 | acc | 51.37 | ± 2.22 |
| | | acc_norm | 52.35 | ± 2.21 |
| agieval_lsat_rc | 0 | acc | 63.57 | ± 2.94 |
| | | acc_norm | 61.71 | ± 2.97 |
| agieval_sat_en | 0 | acc | 76.70 | ± 2.95 |
| | | acc_norm | 76.70 | ± 2.95 |
| agieval_sat_en_without_passage | 0 | acc | 44.17 | ± 3.47 |
| | | acc_norm | 45.15 | ± 3.48 |
| agieval_sat_math | 0 | acc | 37.73 | ± 3.28 |
| | | acc_norm | 36.82 | ± 3.26 |

Average: 45.08%

GPT4All

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| arc_challenge | 0 | acc | 61.18 | ± 1.42 |
| | | acc_norm | 63.14 | ± 1.41 |
| arc_easy | 0 | acc | 85.19 | ± 0.73 |
| | | acc_norm | 82.20 | ± 0.78 |
| boolq | 1 | acc | 87.89 | ± 0.57 |
| hellaswag | 0 | acc | 66.50 | ± 0.47 |
| | | acc_norm | 84.54 | ± 0.36 |
| openbookqa | 0 | acc | 36.40 | ± 2.15 |
| | | acc_norm | 47.00 | ± 2.23 |
| piqa | 0 | acc | 82.97 | ± 0.88 |
| | | acc_norm | 84.39 | ± 0.85 |
| winogrande | 0 | acc | 78.37 | ± 1.16 |

Average: 75.36%

TruthfulQA

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| truthfulqa_mc | 1 | mc1 | 44.68 | ± 1.74 |
| | | mc2 | 60.94 | ± 1.54 |

Average: 60.94%

Bigbench

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 56.84 | ± 3.60 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 66.12 | ± 2.47 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 37.98 | ± 3.03 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 21.17 | ± 2.16 |
| | | exact_str_match | 5.29 | ± 1.18 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 34.60 | ± 2.13 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 23.57 | ± 1.61 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 55.67 | ± 2.87 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 37.80 | ± 2.17 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.00 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 71.60 | ± 1.01 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 51.12 | ± 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 37.37 | ± 1.53 |
| bigbench_snarks | 0 | multiple_choice_grade | 75.14 | ± 3.22 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 74.24 | ± 1.39 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 63.20 | ± 1.53 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 23.44 | ± 1.20 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 18.34 | ± 0.93 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 55.67 | ± 2.87 |

Average: 47.44%

Average score: 57.2%
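
The per-benchmark averages above are consistent with the usual Nous-style aggregation: for each task, take `acc_norm` when it is reported and `acc` otherwise (`mc2` for TruthfulQA, `multiple_choice_grade` for Bigbench), average across tasks, then average the four benchmark scores. The selection rule is an assumption inferred from these numbers rather than stated in this gist; a minimal sketch of the calculation:

```python
# Sketch: recompute the per-benchmark averages reported above.
# Assumption (not stated in this gist): the preferred metric per task is
# acc_norm when present, otherwise acc; TruthfulQA uses mc2 and Bigbench
# averages its multiple_choice_grade values.

agieval = {
    "agieval_aqua_rat": 25.59,
    "agieval_logiqa_en": 41.01,
    "agieval_lsat_ar": 21.30,
    "agieval_lsat_lr": 52.35,
    "agieval_lsat_rc": 61.71,
    "agieval_sat_en": 76.70,
    "agieval_sat_en_without_passage": 45.15,
    "agieval_sat_math": 36.82,
}

gpt4all = {
    "arc_challenge": 63.14,
    "arc_easy": 82.20,
    "boolq": 87.89,        # acc only
    "hellaswag": 84.54,
    "openbookqa": 47.00,
    "piqa": 84.39,
    "winogrande": 78.37,   # acc only
}

def mean(scores: dict) -> float:
    return sum(scores.values()) / len(scores)

print(f"AGIEval: {mean(agieval):.2f}")   # 45.08
print(f"GPT4All: {mean(gpt4all):.2f}")   # 75.36

# TruthfulQA mc2 = 60.94; Bigbench average of the 18 tasks = 47.44.
benchmarks = [mean(agieval), mean(gpt4all), 60.94, 47.44]
print(f"Average: {sum(benchmarks) / len(benchmarks):.2f}")  # 57.21, ≈ 57.2 as reported
```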

Elapsed time: 02:07:51
