@mlabonne
Created January 9, 2024 22:00
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|-------|--------:|--------:|-----------:|---------:|--------:|
| phi-2 |   27.98 |    70.8 |      44.43 |    35.21 |   44.61 |

AGIEval

| Task                           | Version | Metric   | Value | Stderr |
|--------------------------------|--------:|----------|------:|-------:|
| agieval_aqua_rat               |       0 | acc      | 20.87 | ± 2.55 |
|                                |         | acc_norm | 22.05 | ± 2.61 |
| agieval_logiqa_en              |       0 | acc      | 25.65 | ± 1.71 |
|                                |         | acc_norm | 32.26 | ± 1.83 |
| agieval_lsat_ar                |       0 | acc      | 22.17 | ± 2.75 |
|                                |         | acc_norm | 20.43 | ± 2.66 |
| agieval_lsat_lr                |       0 | acc      | 29.02 | ± 2.01 |
|                                |         | acc_norm | 28.63 | ± 2.00 |
| agieval_lsat_rc                |       0 | acc      | 28.25 | ± 2.75 |
|                                |         | acc_norm | 28.62 | ± 2.76 |
| agieval_sat_en                 |       0 | acc      | 49.03 | ± 3.49 |
|                                |         | acc_norm | 34.95 | ± 3.33 |
| agieval_sat_en_without_passage |       0 | acc      | 39.32 | ± 3.41 |
|                                |         | acc_norm | 29.61 | ± 3.19 |
| agieval_sat_math               |       0 | acc      | 30.91 | ± 3.12 |
|                                |         | acc_norm | 27.27 | ± 3.01 |

Average: 27.98%

GPT4All

| Task          | Version | Metric   | Value | Stderr |
|---------------|--------:|----------|------:|-------:|
| arc_challenge |       0 | acc      | 53.07 | ± 1.46 |
|               |         | acc_norm | 54.01 | ± 1.46 |
| arc_easy      |       0 | acc      | 79.88 | ± 0.82 |
|               |         | acc_norm | 78.37 | ± 0.84 |
| boolq         |       1 | acc      | 83.36 | ± 0.65 |
| hellaswag     |       0 | acc      | 55.80 | ± 0.50 |
|               |         | acc_norm | 73.72 | ± 0.44 |
| openbookqa    |       0 | acc      | 40.40 | ± 2.20 |
|               |         | acc_norm | 51.40 | ± 2.24 |
| piqa          |       0 | acc      | 78.78 | ± 0.95 |
|               |         | acc_norm | 79.27 | ± 0.95 |
| winogrande    |       0 | acc      | 75.45 | ± 1.21 |

Average: 70.8%

TruthfulQA

| Task          | Version | Metric | Value | Stderr |
|---------------|--------:|--------|------:|-------:|
| truthfulqa_mc |       1 | mc1    | 30.84 | ± 1.62 |
|               |         | mc2    | 44.43 | ± 1.51 |

Average: 44.43%

Bigbench

| Task                                              | Version | Metric                | Value | Stderr |
|---------------------------------------------------|--------:|-----------------------|------:|-------:|
| bigbench_causal_judgement                         |       0 | multiple_choice_grade | 54.21 | ± 3.62 |
| bigbench_date_understanding                       |       0 | multiple_choice_grade | 59.35 | ± 2.56 |
| bigbench_disambiguation_qa                        |       0 | multiple_choice_grade | 36.82 | ± 3.01 |
| bigbench_geometric_shapes                         |       0 | multiple_choice_grade | 10.03 | ± 1.59 |
|                                                   |         | exact_str_match       |  0.00 | ± 0.00 |
| bigbench_logical_deduction_five_objects           |       0 | multiple_choice_grade | 23.40 | ± 1.90 |
| bigbench_logical_deduction_seven_objects          |       0 | multiple_choice_grade | 17.00 | ± 1.42 |
| bigbench_logical_deduction_three_objects          |       0 | multiple_choice_grade | 40.33 | ± 2.84 |
| bigbench_movie_recommendation                     |       0 | multiple_choice_grade | 41.40 | ± 2.20 |
| bigbench_navigate                                 |       0 | multiple_choice_grade | 50.30 | ± 1.58 |
| bigbench_reasoning_about_colored_objects          |       0 | multiple_choice_grade | 52.15 | ± 1.12 |
| bigbench_ruin_names                               |       0 | multiple_choice_grade | 27.46 | ± 2.11 |
| bigbench_salient_translation_error_detection      |       0 | multiple_choice_grade | 25.25 | ± 1.38 |
| bigbench_snarks                                   |       0 | multiple_choice_grade | 58.56 | ± 3.67 |
| bigbench_sports_understanding                     |       0 | multiple_choice_grade | 50.71 | ± 1.59 |
| bigbench_temporal_sequences                       |       0 | multiple_choice_grade | 15.60 | ± 1.15 |
| bigbench_tracking_shuffled_objects_five_objects   |       0 | multiple_choice_grade | 18.16 | ± 1.09 |
| bigbench_tracking_shuffled_objects_seven_objects  |       0 | multiple_choice_grade | 12.69 | ± 0.80 |
| bigbench_tracking_shuffled_objects_three_objects  |       0 | multiple_choice_grade | 40.33 | ± 2.84 |

Average: 35.21%

Average score: 44.61%

Elapsed time: 03:23:31
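
The per-suite averages above are consistent with an unweighted mean of one score per task (acc_norm where reported, otherwise acc; mc2 for TruthfulQA; multiple_choice_grade for Bigbench), and the overall score with the mean of the four suite averages. A minimal Python sketch of that rollup, using only the values from the tables above; the rollup rule is an assumption inferred from the numbers, not part of the original output:

```python
# Rollup sketch (assumption: one score per task, taking acc_norm where
# reported, otherwise acc; mc2 for TruthfulQA; multiple_choice_grade
# for Bigbench). Values are copied from the tables above.
suites = {
    "AGIEval":    [22.05, 32.26, 20.43, 28.63, 28.62, 34.95, 29.61, 27.27],
    "GPT4All":    [54.01, 78.37, 83.36, 73.72, 51.40, 79.27, 75.45],
    "TruthfulQA": [44.43],
    "Bigbench":   [54.21, 59.35, 36.82, 10.03, 23.40, 17.00, 40.33, 41.40,
                   50.30, 52.15, 27.46, 25.25, 58.56, 50.71, 15.60, 18.16,
                   12.69, 40.33],
}

def mean(values):
    return sum(values) / len(values)

suite_scores = {name: mean(scores) for name, scores in suites.items()}
for name, score in suite_scores.items():
    print(f"{name}: {score:.2f}")   # 27.98 / 70.80 / 44.43 / 35.21

# Overall score: mean of the four suite averages (agrees with the
# reported 44.61% up to rounding of the last digit).
print(f"Average: {mean(suite_scores.values()):.2f}")
```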
