| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---:|---:|---:|---:|---:|
| gemma-7b-it | 21.33 | 40.84 | 41.7 | 30.25 | 33.53 |

### AGIEval

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| agieval_aqua_rat | 0 | acc | 22.05 | ± 2.61 |
| | | acc_norm | 21.65 | ± 2.59 |
| agieval_logiqa_en | 0 | acc | 23.04 | ± 1.65 |
| | | acc_norm | 23.96 | ± 1.67 |
| agieval_lsat_ar | 0 | acc | 21.74 | ± 2.73 |
| | | acc_norm | 21.30 | ± 2.71 |
| agieval_lsat_lr | 0 | acc | 16.08 | ± 1.63 |
| | | acc_norm | 19.80 | ± 1.77 |
| agieval_lsat_rc | 0 | acc | 18.22 | ± 2.36 |
| | | acc_norm | 17.47 | ± 2.32 |
| agieval_sat_en | 0 | acc | 21.36 | ± 2.86 |
| | | acc_norm | 25.73 | ± 3.05 |
| agieval_sat_en_without_passage | 0 | acc | 21.36 | ± 2.86 |
| | | acc_norm | 17.96 | ± 2.68 |
| agieval_sat_math | 0 | acc | 23.18 | ± 2.85 |
| | | acc_norm | 22.73 | ± 2.83 |

Average: 21.33%

### GPT4All

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| arc_challenge | 0 | acc | 19.88 | ± 1.17 |
| | | acc_norm | 23.89 | ± 1.25 |
| arc_easy | 0 | acc | 33.29 | ± 0.97 |
| | | acc_norm | 32.83 | ± 0.96 |
| boolq | 1 | acc | 59.02 | ± 0.86 |
| hellaswag | 0 | acc | 33.16 | ± 0.47 |
| | | acc_norm | 38.13 | ± 0.48 |
| openbookqa | 0 | acc | 18.20 | ± 1.73 |
| | | acc_norm | 28.80 | ± 2.03 |
| piqa | 0 | acc | 57.07 | ± 1.15 |
| | | acc_norm | 55.82 | ± 1.16 |
| winogrande | 0 | acc | 47.36 | ± 1.40 |

Average: 40.84%

### TruthfulQA

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| truthfulqa_mc | 1 | mc1 | 25.34 | ± 1.52 |
| | | mc2 | 41.70 | ± 1.59 |

Average: 41.7%

### Bigbench

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 53.68 | ± 3.63 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 23.85 | ± 2.22 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 37.60 | ± 3.02 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 19.78 | ± 2.11 |
| | | exact_str_match | 0.00 | ± 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 22.40 | ± 1.87 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 19.43 | ± 1.50 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 36.33 | ± 2.78 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 31.80 | ± 2.08 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.00 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 28.00 | ± 1.00 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 25.67 | ± 2.07 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 12.42 | ± 1.04 |
| bigbench_snarks | 0 | multiple_choice_grade | 48.62 | ± 3.73 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 51.01 | ± 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 18.20 | ± 1.22 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 17.12 | ± 1.07 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 12.17 | ± 0.78 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 36.33 | ± 2.78 |

Average: 30.25%

Average score: 33.53%

Elapsed time: 02:22:54
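
For reference, the suite and overall averages can be recomputed from the per-task scores above. The sketch below is not part of the original evaluation run; it assumes each suite average is the unweighted mean of one score per task, preferring `acc_norm` over `acc` where both are reported (so `acc` only for boolq and winogrande), `mc2` for TruthfulQA, and `multiple_choice_grade` for Bigbench.

```python
# Recompute the averages reported above from the per-task scores.
# Assumption: unweighted mean of one score per task, acc_norm preferred over acc.

agieval = [21.65, 23.96, 21.30, 19.80, 17.47, 25.73, 17.96, 22.73]   # acc_norm
gpt4all = [23.89, 32.83, 59.02, 38.13, 28.80, 55.82, 47.36]          # acc_norm, or acc for boolq/winogrande
truthfulqa = [41.70]                                                 # mc2
bigbench = [53.68, 23.85, 37.60, 19.78, 22.40, 19.43, 36.33, 31.80,
            50.00, 28.00, 25.67, 12.42, 48.62, 51.01, 18.20, 17.12,
            12.17, 36.33]                                            # multiple_choice_grade

def mean(xs):
    return sum(xs) / len(xs)

suites = {"AGIEval": agieval, "GPT4All": gpt4all,
          "TruthfulQA": truthfulqa, "Bigbench": bigbench}

suite_means = {name: mean(scores) for name, scores in suites.items()}
for name, value in suite_means.items():
    print(f"{name}: {value:.2f}")  # ≈ 21.33, 40.84, 41.70, 30.25

# Overall score is the mean of the four suite averages.
print(f"Average score: {mean(list(suite_means.values())):.2f}")  # ≈ 33.53
```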
