Benchmark results by @mlabonne — created January 23, 2024
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|------------------|--------:|--------:|-----------:|---------:|--------:|
| NeuralDarewin-7B | 45.6 | 74.29 | 63.15 | 48.35 | 57.85 |

AGIEval

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|:-------|
| agieval_aqua_rat | 0 | acc | 26.38 | ± 2.77 |
| | | acc_norm | 25.59 | ± 2.74 |
| agieval_logiqa_en | 0 | acc | 39.48 | ± 1.92 |
| | | acc_norm | 39.17 | ± 1.91 |
| agieval_lsat_ar | 0 | acc | 21.74 | ± 2.73 |
| | | acc_norm | 21.74 | ± 2.73 |
| agieval_lsat_lr | 0 | acc | 51.96 | ± 2.21 |
| | | acc_norm | 52.35 | ± 2.21 |
| agieval_lsat_rc | 0 | acc | 62.08 | ± 2.96 |
| | | acc_norm | 62.45 | ± 2.96 |
| agieval_sat_en | 0 | acc | 78.16 | ± 2.89 |
| | | acc_norm | 78.16 | ± 2.89 |
| agieval_sat_en_without_passage | 0 | acc | 49.03 | ± 3.49 |
| | | acc_norm | 48.06 | ± 3.49 |
| agieval_sat_math | 0 | acc | 37.73 | ± 3.28 |
| | | acc_norm | 37.27 | ± 3.27 |

Average: 45.6%
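The 45.6% category average matches the unweighted mean of the acc_norm values in the table above; a quick sanity-check sketch (scores copied from the table):

```python
# acc_norm scores from the AGIEval table, in task order
agieval_acc_norm = [25.59, 39.17, 21.74, 52.35, 62.45, 78.16, 48.06, 37.27]

# unweighted mean across the eight AGIEval sub-tasks
agieval_avg = sum(agieval_acc_norm) / len(agieval_acc_norm)
print(f"{agieval_avg:.2f}")  # 45.60
```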

GPT4All

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|:-------|
| arc_challenge | 0 | acc | 60.24 | ± 1.43 |
| | | acc_norm | 61.43 | ± 1.42 |
| arc_easy | 0 | acc | 83.29 | ± 0.77 |
| | | acc_norm | 78.62 | ± 0.84 |
| boolq | 1 | acc | 87.77 | ± 0.57 |
| hellaswag | 0 | acc | 66.86 | ± 0.47 |
| | | acc_norm | 84.52 | ± 0.36 |
| openbookqa | 0 | acc | 37.60 | ± 2.17 |
| | | acc_norm | 46.60 | ± 2.23 |
| piqa | 0 | acc | 82.43 | ± 0.89 |
| | | acc_norm | 83.51 | ± 0.87 |
| winogrande | 0 | acc | 77.58 | ± 1.17 |

Average: 74.29%
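Note that the GPT4All tasks mix metrics: the 74.29% average is consistent with taking acc_norm where it is reported and falling back to acc for the tasks that only report acc (boolq, winogrande). A sketch of that selection, with values copied from the table:

```python
# (acc, acc_norm) per task; None where the table reports no acc_norm
gpt4all_scores = {
    "arc_challenge": (60.24, 61.43),
    "arc_easy": (83.29, 78.62),
    "boolq": (87.77, None),
    "hellaswag": (66.86, 84.52),
    "openbookqa": (37.60, 46.60),
    "piqa": (82.43, 83.51),
    "winogrande": (77.58, None),
}

# prefer acc_norm when present, otherwise fall back to acc
picked = [norm if norm is not None else acc for acc, norm in gpt4all_scores.values()]
gpt4all_avg = sum(picked) / len(picked)
print(f"{gpt4all_avg:.2f}")  # 74.29
```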

TruthfulQA

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|:-------|
| truthfulqa_mc | 1 | mc1 | 46.27 | ± 1.75 |
| | | mc2 | 63.15 | ± 1.55 |

Average: 63.15%

Bigbench

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|:-------|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 60.00 | ± 3.56 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 62.06 | ± 2.53 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 37.98 | ± 3.03 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 22.28 | ± 2.20 |
| | | exact_str_match | 5.29 | ± 1.18 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 33.80 | ± 2.12 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 23.29 | ± 1.60 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 56.00 | ± 2.87 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 46.00 | ± 2.23 |
| bigbench_navigate | 0 | multiple_choice_grade | 51.30 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 70.55 | ± 1.02 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 52.01 | ± 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 41.48 | ± 1.56 |
| bigbench_snarks | 0 | multiple_choice_grade | 75.69 | ± 3.20 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 74.54 | ± 1.39 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 65.90 | ± 1.50 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 23.68 | ± 1.20 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.77 | ± 0.91 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 56.00 | ± 2.87 |

Average: 48.35%

Average score: 57.85%
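The overall score is the unweighted mean of the four category averages reported above:

```python
# category averages as reported in each section
category_averages = {
    "AGIEval": 45.6,
    "GPT4All": 74.29,
    "TruthfulQA": 63.15,
    "Bigbench": 48.35,
}

# unweighted mean over the four benchmark categories
overall = sum(category_averages.values()) / len(category_averages)
print(overall)  # ≈ 57.85, the reported "Average score"
```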

Elapsed time: 02:25:49
