| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|-------|--------:|--------:|-----------:|---------:|--------:|
| OpenHermes-2.5-Mistral-7B-top-SPIN-iter0 | 42.47 | 72.8 | 52.58 | 40.46 | 52.08 |

AGIEval

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| agieval_aqua_rat | 0 | acc | 24.02 | ± 2.69 |
| | | acc_norm | 24.02 | ± 2.69 |
| agieval_logiqa_en | 0 | acc | 38.10 | ± 1.90 |
| | | acc_norm | 39.02 | ± 1.91 |
| agieval_lsat_ar | 0 | acc | 23.48 | ± 2.80 |
| | | acc_norm | 22.17 | ± 2.75 |
| agieval_lsat_lr | 0 | acc | 51.37 | ± 2.22 |
| | | acc_norm | 50.78 | ± 2.22 |
| agieval_lsat_rc | 0 | acc | 56.51 | ± 3.03 |
| | | acc_norm | 54.65 | ± 3.04 |
| agieval_sat_en | 0 | acc | 72.33 | ± 3.12 |
| | | acc_norm | 73.30 | ± 3.09 |
| agieval_sat_en_without_passage | 0 | acc | 44.17 | ± 3.47 |
| | | acc_norm | 41.26 | ± 3.44 |
| agieval_sat_math | 0 | acc | 37.73 | ± 3.28 |
| | | acc_norm | 34.55 | ± 3.21 |

Average: 42.47%

GPT4All

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| arc_challenge | 0 | acc | 55.72 | ± 1.45 |
| | | acc_norm | 59.04 | ± 1.44 |
| arc_easy | 0 | acc | 83.33 | ± 0.76 |
| | | acc_norm | 80.93 | ± 0.81 |
| boolq | 1 | acc | 86.51 | ± 0.60 |
| hellaswag | 0 | acc | 61.84 | ± 0.48 |
| | | acc_norm | 81.03 | ± 0.39 |
| openbookqa | 0 | acc | 34.00 | ± 2.12 |
| | | acc_norm | 44.80 | ± 2.23 |
| piqa | 0 | acc | 81.39 | ± 0.91 |
| | | acc_norm | 82.75 | ± 0.88 |
| winogrande | 0 | acc | 74.51 | ± 1.22 |

Average: 72.8%

TruthfulQA

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| truthfulqa_mc | 1 | mc1 | 35.01 | ± 1.67 |
| | | mc2 | 52.58 | ± 1.49 |

Average: 52.58%

Bigbench

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 51.58 | ± 3.64 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 67.21 | ± 2.45 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 34.50 | ± 2.97 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 22.01 | ± 2.19 |
| | | exact_str_match | 15.60 | ± 1.92 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 27.60 | ± 2.00 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 20.57 | ± 1.53 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 47.33 | ± 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 35.60 | ± 2.14 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.00 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 65.50 | ± 1.06 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 43.53 | ± 2.35 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 17.84 | ± 1.21 |
| bigbench_snarks | 0 | multiple_choice_grade | 68.51 | ± 3.46 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 63.69 | ± 1.53 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 27.90 | ± 1.42 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 20.96 | ± 1.15 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 16.63 | ± 0.89 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 47.33 | ± 2.89 |

Average: 40.46%

Average score: 52.08%
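
The section averages above are consistent with taking one score per task (acc_norm where it is reported, otherwise acc, mc2, or multiple_choice_grade) and averaging, with the overall score being the mean of the four section averages. A minimal sketch reproducing the figures from the tables above; the metric-selection rule is an assumption inferred from the numbers, not stated in this gist:

```python
# Sketch: reproduce the section and overall averages from the values listed above.
# Assumption: each section average is the mean of one score per task
# (acc_norm where reported; otherwise acc / mc2 / multiple_choice_grade).

agieval = [24.02, 39.02, 22.17, 50.78, 54.65, 73.30, 41.26, 34.55]          # acc_norm per task
gpt4all = [59.04, 80.93, 86.51, 81.03, 44.80, 82.75, 74.51]                 # acc_norm, else acc
truthfulqa = [52.58]                                                        # mc2
bigbench = [51.58, 67.21, 34.50, 22.01, 27.60, 20.57, 47.33, 35.60, 50.00,
            65.50, 43.53, 17.84, 68.51, 63.69, 27.90, 20.96, 16.63, 47.33]  # multiple_choice_grade

def mean(xs):
    return sum(xs) / len(xs)

sections = {"AGIEval": agieval, "GPT4All": gpt4all,
            "TruthfulQA": truthfulqa, "Bigbench": bigbench}

section_averages = {name: round(mean(scores), 2) for name, scores in sections.items()}
print(section_averages)  # {'AGIEval': 42.47, 'GPT4All': 72.8, 'TruthfulQA': 52.58, 'Bigbench': 40.46}

overall = round(mean(section_averages.values()), 2)
print(overall)           # 52.08
```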

Elapsed time: 11:34:59
