
@mlabonne
Created January 10, 2024 13:28
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|-------|--------:|--------:|-----------:|---------:|--------:|
| OpenHermes-2.5-neural-chat-v3-3-Slerp | 43.5 | 74.88 | 63.22 | 47.5 | 57.27 |

AGIEval

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| agieval_aqua_rat | 0 | acc | 23.23 | ± 2.65 |
| | | acc_norm | 22.44 | ± 2.62 |
| agieval_logiqa_en | 0 | acc | 38.86 | ± 1.91 |
| | | acc_norm | 39.32 | ± 1.92 |
| agieval_lsat_ar | 0 | acc | 21.74 | ± 2.73 |
| | | acc_norm | 21.30 | ± 2.71 |
| agieval_lsat_lr | 0 | acc | 50.00 | ± 2.22 |
| | | acc_norm | 51.37 | ± 2.22 |
| agieval_lsat_rc | 0 | acc | 60.22 | ± 2.99 |
| | | acc_norm | 59.48 | ± 3.00 |
| agieval_sat_en | 0 | acc | 75.73 | ± 2.99 |
| | | acc_norm | 75.73 | ± 2.99 |
| agieval_sat_en_without_passage | 0 | acc | 45.63 | ± 3.48 |
| | | acc_norm | 45.15 | ± 3.48 |
| agieval_sat_math | 0 | acc | 36.82 | ± 3.26 |
| | | acc_norm | 33.18 | ± 3.18 |

Average: 43.5%

GPT4All

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| arc_challenge | 0 | acc | 59.47 | ± 1.43 |
| | | acc_norm | 61.60 | ± 1.42 |
| arc_easy | 0 | acc | 85.10 | ± 0.73 |
| | | acc_norm | 81.40 | ± 0.80 |
| boolq | 1 | acc | 87.92 | ± 0.57 |
| hellaswag | 0 | acc | 66.03 | ± 0.47 |
| | | acc_norm | 84.84 | ± 0.36 |
| openbookqa | 0 | acc | 36.60 | ± 2.16 |
| | | acc_norm | 47.40 | ± 2.24 |
| piqa | 0 | acc | 81.94 | ± 0.90 |
| | | acc_norm | 84.28 | ± 0.85 |
| winogrande | 0 | acc | 76.72 | ± 1.19 |

Average: 74.88%

TruthfulQA

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| truthfulqa_mc | 1 | mc1 | 46.27 | ± 1.75 |
| | | mc2 | 63.22 | ± 1.50 |

Average: 63.22%

Bigbench

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 60.00 | ± 3.56 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 64.50 | ± 2.49 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 36.82 | ± 3.01 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 23.40 | ± 2.24 |
| | | exact_str_match | 1.95 | ± 0.73 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 32.80 | ± 2.10 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 24.00 | ± 1.62 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 54.67 | ± 2.88 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 42.80 | ± 2.21 |
| bigbench_navigate | 0 | multiple_choice_grade | 52.80 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 68.85 | ± 1.04 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 51.12 | ± 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 39.08 | ± 1.55 |
| bigbench_snarks | 0 | multiple_choice_grade | 76.24 | ± 3.17 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 74.85 | ± 1.38 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 58.80 | ± 1.56 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 21.76 | ± 1.17 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.89 | ± 0.92 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 54.67 | ± 2.88 |

Average: 47.5%

Average score: 57.27%

Elapsed time: 02:11:06
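The reported averages can be reproduced from the per-task scores above. The sketch below assumes Nous-style averaging (one headline metric per task: `acc_norm` where reported, otherwise `acc`, `mc2` for TruthfulQA, `multiple_choice_grade` for Bigbench); the suite names and layout are this sketch's own, not output of the harness.

```python
# Hedged sketch: reproduce the suite averages and the overall score
# from the per-task values in the tables above. The choice of headline
# metric per task is an assumption (Nous-style averaging).

# AGIEval: acc_norm for each of the eight subtasks
agieval = [22.44, 39.32, 21.30, 51.37, 59.48, 75.73, 45.15, 33.18]

# GPT4All: acc_norm where reported; plain acc for boolq and winogrande
gpt4all = [61.60, 81.40, 87.92, 84.84, 47.40, 84.28, 76.72]

# TruthfulQA: the mc2 score alone
truthfulqa = [63.22]

# Bigbench: multiple_choice_grade for each of the 18 subtasks
bigbench = [60.00, 64.50, 36.82, 23.40, 32.80, 24.00, 54.67, 42.80,
            52.80, 68.85, 51.12, 39.08, 76.24, 74.85, 58.80, 21.76,
            17.89, 54.67]

def mean(xs):
    return sum(xs) / len(xs)

suite_scores = [mean(s) for s in (agieval, gpt4all, truthfulqa, bigbench)]
print([round(s, 2) for s in suite_scores])  # -> [43.5, 74.88, 63.22, 47.5]
print(round(mean(suite_scores), 2))         # -> 57.27
```

The overall score is the unweighted mean of the four suite averages, which matches the 57.27% reported above.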
