
@mlabonne
Created January 6, 2024 21:20
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|-------|--------:|--------:|-----------:|---------:|--------:|
| CatMarcoro14-7B-slerp | 45.21 | 75.91 | 63.81 | 47.31 | 58.06 |

AGIEval

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| agieval_aqua_rat | 0 | acc | 25.98 | ± 2.76 |
| | | acc_norm | 24.02 | ± 2.69 |
| agieval_logiqa_en | 0 | acc | 40.55 | ± 1.93 |
| | | acc_norm | 41.01 | ± 1.93 |
| agieval_lsat_ar | 0 | acc | 26.52 | ± 2.92 |
| | | acc_norm | 24.78 | ± 2.85 |
| agieval_lsat_lr | 0 | acc | 51.96 | ± 2.21 |
| | | acc_norm | 53.14 | ± 2.21 |
| agieval_lsat_rc | 0 | acc | 65.80 | ± 2.90 |
| | | acc_norm | 63.57 | ± 2.94 |
| agieval_sat_en | 0 | acc | 79.13 | ± 2.84 |
| | | acc_norm | 79.13 | ± 2.84 |
| agieval_sat_en_without_passage | 0 | acc | 44.66 | ± 3.47 |
| | | acc_norm | 44.66 | ± 3.47 |
| agieval_sat_math | 0 | acc | 33.64 | ± 3.19 |
| | | acc_norm | 31.36 | ± 3.14 |

Average: 45.21%
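The AGIEval section average checks out as the unweighted mean of the `acc_norm` scores above; a minimal verification in Python (values copied from the table):

```python
# AGIEval acc_norm scores, in table order
acc_norm = [24.02, 41.01, 24.78, 53.14, 63.57, 79.13, 44.66, 31.36]

# Unweighted mean, rounded to two decimals
agieval_avg = round(sum(acc_norm) / len(acc_norm), 2)
print(agieval_avg)  # 45.21, matching the reported average
```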

GPT4All

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| arc_challenge | 0 | acc | 61.86 | ± 1.42 |
| | | acc_norm | 63.31 | ± 1.41 |
| arc_easy | 0 | acc | 86.07 | ± 0.71 |
| | | acc_norm | 83.75 | ± 0.76 |
| boolq | 1 | acc | 88.23 | ± 0.56 |
| hellaswag | 0 | acc | 67.03 | ± 0.47 |
| | | acc_norm | 85.28 | ± 0.35 |
| openbookqa | 0 | acc | 36.20 | ± 2.15 |
| | | acc_norm | 47.60 | ± 2.24 |
| piqa | 0 | acc | 82.70 | ± 0.88 |
| | | acc_norm | 84.44 | ± 0.85 |
| winogrande | 0 | acc | 78.77 | ± 1.15 |

Average: 75.91%
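boolq and winogrande report only `acc`, so the section average appears to take `acc_norm` where available and fall back to `acc` otherwise; a small sketch of that rule, with the values copied from the table:

```python
# Per-task metrics from the GPT4All table above
scores = {
    "arc_challenge": {"acc": 61.86, "acc_norm": 63.31},
    "arc_easy":      {"acc": 86.07, "acc_norm": 83.75},
    "boolq":         {"acc": 88.23},
    "hellaswag":     {"acc": 67.03, "acc_norm": 85.28},
    "openbookqa":    {"acc": 36.20, "acc_norm": 47.60},
    "piqa":          {"acc": 82.70, "acc_norm": 84.44},
    "winogrande":    {"acc": 78.77},
}

# Prefer acc_norm, fall back to acc when it is absent
values = [m.get("acc_norm", m["acc"]) for m in scores.values()]
gpt4all_avg = round(sum(values) / len(values), 2)
print(gpt4all_avg)  # 75.91, matching the reported average
```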

TruthfulQA

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| truthfulqa_mc | 1 | mc1 | 46.63 | ± 1.75 |
| | | mc2 | 63.81 | ± 1.51 |

Average: 63.81%

Bigbench

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 56.84 | ± 3.60 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 67.21 | ± 2.45 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 44.96 | ± 3.10 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 21.17 | ± 2.16 |
| | | exact_str_match | 0.84 | ± 0.48 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 30.20 | ± 2.06 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 23.29 | ± 1.60 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 54.00 | ± 2.88 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 42.80 | ± 2.21 |
| bigbench_navigate | 0 | multiple_choice_grade | 53.30 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 71.60 | ± 1.01 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 54.69 | ± 2.35 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 32.26 | ± 1.48 |
| bigbench_snarks | 0 | multiple_choice_grade | 73.48 | ± 3.29 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 74.04 | ± 1.40 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 55.10 | ± 1.57 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 23.44 | ± 1.20 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 19.26 | ± 0.94 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 54.00 | ± 2.88 |

Average: 47.31%

Average score: 58.06%
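The overall score is simply the unweighted mean of the four section averages:

```python
# Section averages reported above
section_averages = {
    "AGIEval": 45.21,
    "GPT4All": 75.91,
    "TruthfulQA": 63.81,
    "Bigbench": 47.31,
}

overall = round(sum(section_averages.values()) / len(section_averages), 2)
print(overall)  # 58.06, matching the reported overall score
```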
