@mlabonne
Created April 27, 2024 15:17
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---:|---:|---:|---:|---:|
| Einstein-v6.1-Llama3-8B | 36.33 | 73.08 | 55.07 | 41.11 | 51.4 |

### AGIEval

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|:-:|---:|
| agieval_aqua_rat | 0 | acc | 22.83 | ± | 2.64 |
| | | acc_norm | 22.44 | ± | 2.62 |
| agieval_logiqa_en | 0 | acc | 34.25 | ± | 1.86 |
| | | acc_norm | 35.64 | ± | 1.88 |
| agieval_lsat_ar | 0 | acc | 19.57 | ± | 2.62 |
| | | acc_norm | 19.57 | ± | 2.62 |
| agieval_lsat_lr | 0 | acc | 42.75 | ± | 2.19 |
| | | acc_norm | 40.98 | ± | 2.18 |
| agieval_lsat_rc | 0 | acc | 53.90 | ± | 3.04 |
| | | acc_norm | 47.21 | ± | 3.05 |
| agieval_sat_en | 0 | acc | 69.90 | ± | 3.20 |
| | | acc_norm | 63.11 | ± | 3.37 |
| agieval_sat_en_without_passage | 0 | acc | 41.26 | ± | 3.44 |
| | | acc_norm | 33.98 | ± | 3.31 |
| agieval_sat_math | 0 | acc | 33.18 | ± | 3.18 |
| | | acc_norm | 27.73 | ± | 3.02 |

Average: 36.33%
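
The suite averages in this gist are consistent with an unweighted mean over the per-task scores, taking acc_norm where it is reported and falling back to the task's only metric (acc, mc2, or multiple_choice_grade) otherwise. The snippet below is a minimal sketch of that aggregation for AGIEval, using the values from the table above; it is an assumption about how the summary was built, not the original evaluation code.

```python
# Sketch: reproduce the AGIEval suite average from the per-task scores above.
# Assumption: the suite average is the unweighted mean of acc_norm per task;
# the other suites fall back to acc, mc2, or multiple_choice_grade when
# acc_norm is not reported. This matches the numbers in this gist but is not
# the original aggregation code.
agieval_acc_norm = {
    "agieval_aqua_rat": 22.44,
    "agieval_logiqa_en": 35.64,
    "agieval_lsat_ar": 19.57,
    "agieval_lsat_lr": 40.98,
    "agieval_lsat_rc": 47.21,
    "agieval_sat_en": 63.11,
    "agieval_sat_en_without_passage": 33.98,
    "agieval_sat_math": 27.73,
}

average = sum(agieval_acc_norm.values()) / len(agieval_acc_norm)
print(f"AGIEval average: {average:.2f}%")  # -> 36.33%
```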

### GPT4All

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|:-:|---:|
| arc_challenge | 0 | acc | 56.48 | ± | 1.45 |
| | | acc_norm | 58.53 | ± | 1.44 |
| arc_easy | 0 | acc | 83.50 | ± | 0.76 |
| | | acc_norm | 82.91 | ± | 0.77 |
| boolq | 1 | acc | 84.86 | ± | 0.63 |
| hellaswag | 0 | acc | 61.80 | ± | 0.48 |
| | | acc_norm | 80.55 | ± | 0.39 |
| openbookqa | 0 | acc | 37.40 | ± | 2.17 |
| | | acc_norm | 46.00 | ± | 2.23 |
| piqa | 0 | acc | 81.39 | ± | 0.91 |
| | | acc_norm | 82.37 | ± | 0.89 |
| winogrande | 0 | acc | 76.32 | ± | 1.19 |

Average: 73.08%

### TruthfulQA

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|:-:|---:|
| truthfulqa_mc | 1 | mc1 | 39.53 | ± | 1.71 |
| | | mc2 | 55.07 | ± | 1.51 |

Average: 55.07%

### Bigbench

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|:-:|---:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 55.26 | ± | 3.62 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 66.40 | ± | 2.46 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 31.40 | ± | 2.89 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 23.40 | ± | 2.24 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 29.60 | ± | 2.04 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 20.43 | ± | 1.52 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 51.67 | ± | 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 36.60 | ± | 2.16 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.00 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 62.90 | ± | 1.08 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 45.54 | ± | 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 24.75 | ± | 1.37 |
| bigbench_snarks | 0 | multiple_choice_grade | 62.98 | ± | 3.60 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 50.91 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 37.80 | ± | 1.53 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.64 | ± | 1.18 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 16.06 | ± | 0.88 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 51.67 | ± | 2.89 |

Average: 41.11%

Average score: 51.4%
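
As a sanity check, the overall score works out to the unweighted mean of the four suite averages; the sketch below assumes equal weighting of the suites and reproduces the reported 51.4 after rounding.

```python
# Quick check: overall score as the unweighted mean of the suite averages
# reported above (equal weighting is an assumption that matches the 51.4).
suite_averages = {"AGIEval": 36.33, "GPT4All": 73.08, "TruthfulQA": 55.07, "Bigbench": 41.11}
overall = sum(suite_averages.values()) / len(suite_averages)
print(f"Average score: {overall:.2f}%")  # -> 51.40%
```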

Elapsed time: 03:47:35
