
@gblazex
Created January 8, 2024 20:38
| Model          | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|----------------|--------:|--------:|-----------:|---------:|--------:|
| zephyr-7b-beta |   37.33 |   71.83 |       55.1 |     39.7 |   50.99 |

AGIEval

| Task                           | Version | Metric   | Value | Stderr |
|--------------------------------|--------:|----------|------:|-------:|
| agieval_aqua_rat               |       0 | acc      | 21.26 | ± 2.57 |
|                                |         | acc_norm | 20.47 | ± 2.54 |
| agieval_logiqa_en              |       0 | acc      | 33.33 | ± 1.85 |
|                                |         | acc_norm | 35.48 | ± 1.88 |
| agieval_lsat_ar                |       0 | acc      | 23.04 | ± 2.78 |
|                                |         | acc_norm | 23.91 | ± 2.82 |
| agieval_lsat_lr                |       0 | acc      | 41.76 | ± 2.19 |
|                                |         | acc_norm | 40.20 | ± 2.17 |
| agieval_lsat_rc                |       0 | acc      | 48.70 | ± 3.05 |
|                                |         | acc_norm | 46.10 | ± 3.04 |
| agieval_sat_en                 |       0 | acc      | 60.19 | ± 3.42 |
|                                |         | acc_norm | 59.71 | ± 3.43 |
| agieval_sat_en_without_passage |       0 | acc      | 45.63 | ± 3.48 |
|                                |         | acc_norm | 43.20 | ± 3.46 |
| agieval_sat_math               |       0 | acc      | 31.82 | ± 3.15 |
|                                |         | acc_norm | 29.55 | ± 3.08 |

Average: 37.33%

GPT4All

| Task          | Version | Metric   | Value | Stderr |
|---------------|--------:|----------|------:|-------:|
| arc_challenge |       0 | acc      | 57.17 | ± 1.45 |
|               |         | acc_norm | 60.15 | ± 1.43 |
| arc_easy      |       0 | acc      | 81.36 | ± 0.80 |
|               |         | acc_norm | 79.00 | ± 0.84 |
| boolq         |       1 | acc      | 84.95 | ± 0.63 |
| hellaswag     |       0 | acc      | 63.96 | ± 0.48 |
|               |         | acc_norm | 82.10 | ± 0.38 |
| openbookqa    |       0 | acc      | 32.60 | ± 2.10 |
|               |         | acc_norm | 42.80 | ± 2.21 |
| piqa          |       0 | acc      | 81.12 | ± 0.91 |
|               |         | acc_norm | 81.28 | ± 0.91 |
| winogrande    |       0 | acc      | 72.53 | ± 1.25 |

Average: 71.83%

TruthfulQA

| Task          | Version | Metric | Value | Stderr |
|---------------|--------:|--------|------:|-------:|
| truthfulqa_mc |       1 | mc1    |  38.8 | ± 1.71 |
|               |         | mc2    |  55.1 | ± 1.58 |

Average: 55.1%

Bigbench

| Task                                             | Version | Metric                | Value | Stderr |
|--------------------------------------------------|--------:|-----------------------|------:|-------:|
| bigbench_causal_judgement                        |       0 | multiple_choice_grade | 57.89 | ± 3.59 |
| bigbench_date_understanding                      |       0 | multiple_choice_grade | 70.73 | ± 2.37 |
| bigbench_disambiguation_qa                       |       0 | multiple_choice_grade | 50.00 | ± 3.12 |
| bigbench_geometric_shapes                        |       0 | multiple_choice_grade | 18.66 | ± 2.06 |
|                                                  |         | exact_str_match       |  6.69 | ± 1.32 |
| bigbench_logical_deduction_five_objects          |       0 | multiple_choice_grade | 24.60 | ± 1.93 |
| bigbench_logical_deduction_seven_objects         |       0 | multiple_choice_grade | 18.00 | ± 1.45 |
| bigbench_logical_deduction_three_objects         |       0 | multiple_choice_grade | 44.00 | ± 2.87 |
| bigbench_movie_recommendation                    |       0 | multiple_choice_grade | 31.00 | ± 2.07 |
| bigbench_navigate                                |       0 | multiple_choice_grade | 54.80 | ± 1.57 |
| bigbench_reasoning_about_colored_objects         |       0 | multiple_choice_grade | 65.00 | ± 1.07 |
| bigbench_ruin_names                              |       0 | multiple_choice_grade | 31.47 | ± 2.20 |
| bigbench_salient_translation_error_detection     |       0 | multiple_choice_grade | 22.04 | ± 1.31 |
| bigbench_snarks                                  |       0 | multiple_choice_grade | 52.49 | ± 3.72 |
| bigbench_sports_understanding                    |       0 | multiple_choice_grade | 59.13 | ± 1.57 |
| bigbench_temporal_sequences                      |       0 | multiple_choice_grade | 31.40 | ± 1.47 |
| bigbench_tracking_shuffled_objects_five_objects  |       0 | multiple_choice_grade | 22.64 | ± 1.18 |
| bigbench_tracking_shuffled_objects_seven_objects |       0 | multiple_choice_grade | 16.69 | ± 0.89 |
| bigbench_tracking_shuffled_objects_three_objects |       0 | multiple_choice_grade | 44.00 | ± 2.87 |

Average: 39.7%

Average score: 50.99%
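
For reference, the suite and overall averages can be reproduced from the per-task values above. Below is a minimal sketch of the aggregation (an assumption about the methodology, not the author's script): each suite score appears to be the mean over its tasks, taking acc_norm where reported and acc otherwise (boolq, winogrande), mc2 for TruthfulQA, and multiple_choice_grade for Bigbench; the overall score is the mean of the four suite averages.

```python
# Sketch of the presumed aggregation: acc_norm preferred where reported,
# acc otherwise; mc2 for TruthfulQA; multiple_choice_grade for Bigbench.
agieval = [20.47, 35.48, 23.91, 40.20, 46.10, 59.71, 43.20, 29.55]

gpt4all = [60.15, 79.00, 84.95, 82.10, 42.80, 81.28, 72.53]

truthfulqa = [55.1]

bigbench = [57.89, 70.73, 50.00, 18.66, 24.60, 18.00, 44.00, 31.00, 54.80,
            65.00, 31.47, 22.04, 52.49, 59.13, 31.40, 22.64, 16.69, 44.00]

def mean(xs):
    return sum(xs) / len(xs)

suites = {"AGIEval": agieval, "GPT4All": gpt4all,
          "TruthfulQA": truthfulqa, "Bigbench": bigbench}

suite_avgs = {name: mean(vals) for name, vals in suites.items()}
for name, avg in suite_avgs.items():
    print(f"{name}: {avg:.2f}")  # 37.33, 71.83, 55.10, 39.70

# Overall score is the unweighted mean of the four suite averages.
print(f"Average score: {mean(list(suite_avgs.values())):.2f}")  # 50.99
```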
