
@gblazex
Created January 8, 2024 00:33
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|-------|---------|---------|------------|----------|---------|
| Starling-LM-7B-alpha | 42.06 | 72.72 | 47.33 | 42.53 | 51.16 |

## AGIEval

| Task | Version | Metric | Value |   | Stderr |
|------|---------|--------|-------|---|--------|
| agieval_aqua_rat | 0 | acc | 24.80 | ± | 2.72 |
| | | acc_norm | 25.98 | ± | 2.76 |
| agieval_logiqa_en | 0 | acc | 38.25 | ± | 1.91 |
| | | acc_norm | 39.78 | ± | 1.92 |
| agieval_lsat_ar | 0 | acc | 25.22 | ± | 2.87 |
| | | acc_norm | 23.48 | ± | 2.80 |
| agieval_lsat_lr | 0 | acc | 48.43 | ± | 2.22 |
| | | acc_norm | 45.69 | ± | 2.21 |
| agieval_lsat_rc | 0 | acc | 60.22 | ± | 2.99 |
| | | acc_norm | 57.62 | ± | 3.02 |
| agieval_sat_en | 0 | acc | 76.21 | ± | 2.97 |
| | | acc_norm | 72.82 | ± | 3.11 |
| agieval_sat_en_without_passage | 0 | acc | 41.26 | ± | 3.44 |
| | | acc_norm | 38.35 | ± | 3.40 |
| agieval_sat_math | 0 | acc | 39.55 | ± | 3.30 |
| | | acc_norm | 32.73 | ± | 3.17 |

Average: 42.06%
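The suite average appears to be the unweighted mean of the per-task acc_norm scores from the table above (an assumption; the gist does not state which metric is averaged). A minimal sketch reproducing it:

```python
# AGIEval per-task acc_norm scores, copied from the table above.
# Assumption: the suite average is the simple mean of these values.
acc_norm = {
    "agieval_aqua_rat": 25.98,
    "agieval_logiqa_en": 39.78,
    "agieval_lsat_ar": 23.48,
    "agieval_lsat_lr": 45.69,
    "agieval_lsat_rc": 57.62,
    "agieval_sat_en": 72.82,
    "agieval_sat_en_without_passage": 38.35,
    "agieval_sat_math": 32.73,
}

average = sum(acc_norm.values()) / len(acc_norm)
print(f"{average:.2f}")  # matches the reported 42.06
```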

## GPT4All

| Task | Version | Metric | Value |   | Stderr |
|------|---------|--------|-------|---|--------|
| arc_challenge | 0 | acc | 57.08 | ± | 1.45 |
| | | acc_norm | 60.24 | ± | 1.43 |
| arc_easy | 0 | acc | 83.59 | ± | 0.76 |
| | | acc_norm | 81.90 | ± | 0.79 |
| boolq | 1 | acc | 86.79 | ± | 0.59 |
| hellaswag | 0 | acc | 64.26 | ± | 0.48 |
| | | acc_norm | 82.22 | ± | 0.38 |
| openbookqa | 0 | acc | 29.40 | ± | 2.04 |
| | | acc_norm | 41.60 | ± | 2.21 |
| piqa | 0 | acc | 81.56 | ± | 0.90 |
| | | acc_norm | 83.57 | ± | 0.86 |
| winogrande | 0 | acc | 72.69 | ± | 1.25 |

Average: 72.72%

## TruthfulQA

| Task | Version | Metric | Value |   | Stderr |
|------|---------|--------|-------|---|--------|
| truthfulqa_mc | 1 | mc1 | 31.58 | ± | 1.63 |
| | | mc2 | 47.33 | ± | 1.52 |

Average: 47.33%

## Bigbench

| Task | Version | Metric | Value |   | Stderr |
|------|---------|--------|-------|---|--------|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 62.11 | ± | 3.53 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 64.23 | ± | 2.50 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 50.39 | ± | 3.12 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 20.06 | ± | 2.12 |
| | | exact_str_match | 4.46 | ± | 1.09 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 27.60 | ± | 2.00 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 18.43 | ± | 1.47 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 50.00 | ± | 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 36.60 | ± | 2.16 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.50 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 66.95 | ± | 1.05 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 49.11 | ± | 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 15.03 | ± | 1.13 |
| bigbench_snarks | 0 | multiple_choice_grade | 68.51 | ± | 3.46 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 65.21 | ± | 1.52 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 30.90 | ± | 1.46 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 23.20 | ± | 1.19 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 16.80 | ± | 0.89 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 50.00 | ± | 2.89 |

Average: 42.53%

Average score: 51.16%
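The overall score appears to be the unweighted mean of the four suite averages reported above, which checks out arithmetically:

```python
# Per-suite averages, copied from the summary table above.
# Assumption: the overall score is their simple (unweighted) mean.
suite_scores = {
    "AGIEval": 42.06,
    "GPT4All": 72.72,
    "TruthfulQA": 47.33,
    "Bigbench": 42.53,
}

overall = sum(suite_scores.values()) / len(suite_scores)
print(f"{overall:.2f}")  # matches the reported 51.16
```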
