@sethuiyer
Created January 11, 2024 08:28
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---:|---:|---:|---:|---:|
| Chikuma_10.7B | 42.41 | 73.41 | 56.69 | 43.5 | 54.0 |

AGIEval

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| agieval_aqua_rat | 0 | acc | 22.44 | ± 2.62 |
| | | acc_norm | 23.62 | ± 2.67 |
| agieval_logiqa_en | 0 | acc | 37.48 | ± 1.90 |
| | | acc_norm | 37.33 | ± 1.90 |
| agieval_lsat_ar | 0 | acc | 24.35 | ± 2.84 |
| | | acc_norm | 25.65 | ± 2.89 |
| agieval_lsat_lr | 0 | acc | 44.12 | ± 2.20 |
| | | acc_norm | 45.69 | ± 2.21 |
| agieval_lsat_rc | 0 | acc | 61.34 | ± 2.97 |
| | | acc_norm | 57.25 | ± 3.02 |
| agieval_sat_en | 0 | acc | 74.27 | ± 3.05 |
| | | acc_norm | 73.30 | ± 3.09 |
| agieval_sat_en_without_passage | 0 | acc | 45.15 | ± 3.48 |
| | | acc_norm | 43.69 | ± 3.46 |
| agieval_sat_math | 0 | acc | 36.36 | ± 3.25 |
| | | acc_norm | 32.73 | ± 3.17 |

Average: 42.41%

GPT4All

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| arc_challenge | 0 | acc | 58.19 | ± 1.44 |
| | | acc_norm | 59.47 | ± 1.43 |
| arc_easy | 0 | acc | 84.09 | ± 0.75 |
| | | acc_norm | 81.82 | ± 0.79 |
| boolq | 1 | acc | 87.61 | ± 0.58 |
| hellaswag | 0 | acc | 63.90 | ± 0.48 |
| | | acc_norm | 82.27 | ± 0.38 |
| openbookqa | 0 | acc | 34.20 | ± 2.12 |
| | | acc_norm | 44.80 | ± 2.23 |
| piqa | 0 | acc | 81.34 | ± 0.91 |
| | | acc_norm | 83.03 | ± 0.88 |
| winogrande | 0 | acc | 74.90 | ± 1.22 |

Average: 73.41%

TruthfulQA

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| truthfulqa_mc | 1 | mc1 | 39.66 | ± 1.71 |
| | | mc2 | 56.69 | ± 1.56 |

Average: 56.69%

Bigbench

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 59.47 | ± 3.57 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 62.87 | ± 2.52 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 47.67 | ± 3.12 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 20.06 | ± 2.12 |
| | | exact_str_match | 10.03 | ± 1.59 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 31.60 | ± 2.08 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 23.43 | ± 1.60 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 49.67 | ± 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 39.40 | ± 2.19 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.20 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 67.80 | ± 1.05 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 41.07 | ± 2.33 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 26.55 | ± 1.40 |
| bigbench_snarks | 0 | multiple_choice_grade | 72.38 | ± 3.33 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 68.97 | ± 1.47 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 33.50 | ± 1.49 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 21.92 | ± 1.17 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 16.74 | ± 0.89 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 49.67 | ± 2.89 |

Average: 43.5%

Average score: 54.0%
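The per-suite averages and the overall score can be reproduced directly from the tables. A minimal sketch, assuming each suite average is the plain mean of one score per task (acc_norm where reported, otherwise acc; mc2 for TruthfulQA; multiple_choice_grade for Bigbench), which the reported numbers are consistent with:

```python
# Per-task scores copied from the tables above (acc_norm where available,
# otherwise acc; mc2 for TruthfulQA; multiple_choice_grade for Bigbench).
agieval = [23.62, 37.33, 25.65, 45.69, 57.25, 73.30, 43.69, 32.73]
gpt4all = [59.47, 81.82, 87.61, 82.27, 44.80, 83.03, 74.90]
truthfulqa = [56.69]
bigbench = [59.47, 62.87, 47.67, 20.06, 31.60, 23.43, 49.67, 39.40,
            50.20, 67.80, 41.07, 26.55, 72.38, 68.97, 33.50, 21.92,
            16.74, 49.67]

def mean(xs):
    return sum(xs) / len(xs)

# Suite averages, then the overall score as the mean of the four suites.
suite_means = [mean(s) for s in (agieval, gpt4all, truthfulqa, bigbench)]
print([round(m, 2) for m in suite_means])  # [42.41, 73.41, 56.69, 43.5]
print(round(mean(suite_means), 2))         # 54.0
```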

Elapsed time: 02:44:48
