| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---:|---:|---:|---:|---:|
| Phi-3-mini-4k-instruct | 44.44 | 71.88 | 57.77 | 41.9 | 54.0 |
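The per-task tables below are in the format of EleutherAI's lm-evaluation-harness. As a minimal sketch of how comparable numbers could be reproduced with the harness's Python API — assuming a recent `lm-eval` release that defines these task names, not necessarily the exact version that produced this log:

```python
# Minimal sketch, assuming the EleutherAI lm-evaluation-harness ("pip install lm-eval").
# Task names and scores can differ between harness versions, so treat this as
# illustrative rather than the exact command behind the tables below.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=microsoft/Phi-3-mini-4k-instruct,trust_remote_code=True",
    tasks=["agieval_aqua_rat", "agieval_logiqa_en", "agieval_sat_math"],  # subset shown
    batch_size="auto",
)

# results["results"] maps each task name to its metrics (acc, acc_norm, stderr, ...).
for task, metrics in results["results"].items():
    print(task, metrics)
```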

### AGIEval

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| agieval_aqua_rat | 0 | acc | 29.13 | ± 2.86 |
| | | acc_norm | 28.74 | ± 2.85 |
| agieval_logiqa_en | 0 | acc | 42.86 | ± 1.94 |
| | | acc_norm | 44.39 | ± 1.95 |
| agieval_lsat_ar | 0 | acc | 20.00 | ± 2.64 |
| | | acc_norm | 21.74 | ± 2.73 |
| agieval_lsat_lr | 0 | acc | 52.55 | ± 2.21 |
| | | acc_norm | 49.80 | ± 2.22 |
| agieval_lsat_rc | 0 | acc | 60.59 | ± 2.98 |
| | | acc_norm | 58.74 | ± 3.01 |
| agieval_sat_en | 0 | acc | 76.21 | ± 2.97 |
| | | acc_norm | 76.21 | ± 2.97 |
| agieval_sat_en_without_passage | 0 | acc | 45.63 | ± 3.48 |
| | | acc_norm | 42.23 | ± 3.45 |
| agieval_sat_math | 0 | acc | 39.55 | ± 3.30 |
| | | acc_norm | 33.64 | ± 3.19 |

Average: 44.44%

### GPT4All

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| arc_challenge | 0 | acc | 55.46 | ± 1.45 |
| | | acc_norm | 57.59 | ± 1.44 |
| arc_easy | 0 | acc | 83.21 | ± 0.77 |
| | | acc_norm | 80.18 | ± 0.82 |
| boolq | 1 | acc | 86.24 | ± 0.60 |
| hellaswag | 0 | acc | 60.57 | ± 0.49 |
| | | acc_norm | 78.45 | ± 0.41 |
| openbookqa | 0 | acc | 38.60 | ± 2.18 |
| | | acc_norm | 46.80 | ± 2.23 |
| piqa | 0 | acc | 80.30 | ± 0.93 |
| | | acc_norm | 80.25 | ± 0.93 |
| winogrande | 0 | acc | 73.64 | ± 1.24 |

Average: 71.88%

### TruthfulQA

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| truthfulqa_mc | 1 | mc1 | 39.17 | ± 1.71 |
| | | mc2 | 57.77 | ± 1.54 |

Average: 57.77%

### Bigbench

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 55.79 | ± 3.61 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 66.12 | ± 2.47 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 30.62 | ± 2.88 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 13.65 | ± 1.81 |
| | | exact_str_match | 0.00 | ± 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 32.00 | ± 2.09 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 18.14 | ± 1.46 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 49.33 | ± 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 26.00 | ± 1.96 |
| bigbench_navigate | 0 | multiple_choice_grade | 58.70 | ± 1.56 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 70.10 | ± 1.02 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 40.18 | ± 2.32 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 27.45 | ± 1.41 |
| bigbench_snarks | 0 | multiple_choice_grade | 71.27 | ± 3.37 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 59.63 | ± 1.56 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 52.10 | ± 1.58 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 18.80 | ± 1.11 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 14.91 | ± 0.85 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 49.33 | ± 2.89 |

Average: 41.9%

Average score: 54.0%
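The averaging scheme is not stated in the log, but it can be inferred from the numbers: each suite average is the unweighted mean of one primary metric per task (acc_norm where reported, otherwise acc; mc2 for TruthfulQA; multiple_choice_grade for Bigbench), and the final score is the mean of the four suite averages. A quick sanity check using values copied from the tables above:

```python
# Sanity check of the reported averages, using values from the tables above.
# The averaging scheme (mean of each task's primary metric) is inferred from
# the numbers themselves, not documented in the log.
agieval_acc_norm = [28.74, 44.39, 21.74, 49.80, 58.74, 76.21, 42.23, 33.64]
print(round(sum(agieval_acc_norm) / len(agieval_acc_norm), 2))  # 44.44

suite_averages = [44.44, 71.88, 57.77, 41.9]
print(round(sum(suite_averages) / len(suite_averages), 1))  # 54.0
```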

Elapsed time: 00:58:43
