
@mlabonne
Created April 18, 2024 22:08
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---:|---:|---:|---:|---:|
| Meta-Llama-3-8B | 31.1 | 69.95 | 43.91 | 36.7 | 45.42 |

### AGIEval

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| agieval_aqua_rat | 0 | acc | 21.65 | ± 2.59 |
| | | acc_norm | 23.62 | ± 2.67 |
| agieval_logiqa_en | 0 | acc | 27.96 | ± 1.76 |
| | | acc_norm | 32.87 | ± 1.84 |
| agieval_lsat_ar | 0 | acc | 18.70 | ± 2.58 |
| | | acc_norm | 20.00 | ± 2.64 |
| agieval_lsat_lr | 0 | acc | 37.25 | ± 2.14 |
| | | acc_norm | 30.20 | ± 2.03 |
| agieval_lsat_rc | 0 | acc | 44.61 | ± 3.04 |
| | | acc_norm | 34.20 | ± 2.90 |
| agieval_sat_en | 0 | acc | 59.22 | ± 3.43 |
| | | acc_norm | 44.66 | ± 3.47 |
| agieval_sat_en_without_passage | 0 | acc | 38.35 | ± 3.40 |
| | | acc_norm | 29.13 | ± 3.17 |
| agieval_sat_math | 0 | acc | 41.36 | ± 3.33 |
| | | acc_norm | 34.09 | ± 3.20 |

Average: 31.1%
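The 31.1% figure is consistent with taking the plain mean of the `acc_norm` values in the table above. A minimal sketch of that aggregation, assuming this is the rule used (the harness reports both `acc` and `acc_norm`, and only the latter reproduces the reported average):

```python
# Reproduce the AGIEval suite average, assuming it is the unweighted
# mean of each task's acc_norm score from the table above.
acc_norm = {
    "agieval_aqua_rat": 23.62,
    "agieval_logiqa_en": 32.87,
    "agieval_lsat_ar": 20.00,
    "agieval_lsat_lr": 30.20,
    "agieval_lsat_rc": 34.20,
    "agieval_sat_en": 44.66,
    "agieval_sat_en_without_passage": 29.13,
    "agieval_sat_math": 34.09,
}
average = sum(acc_norm.values()) / len(acc_norm)
print(f"{average:.1f}")  # 31.1
```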

### GPT4All

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| arc_challenge | 0 | acc | 50.43 | ± 1.46 |
| | | acc_norm | 53.24 | ± 1.46 |
| arc_easy | 0 | acc | 80.09 | ± 0.82 |
| | | acc_norm | 77.69 | ± 0.85 |
| boolq | 1 | acc | 81.35 | ± 0.68 |
| hellaswag | 0 | acc | 60.17 | ± 0.49 |
| | | acc_norm | 79.13 | ± 0.41 |
| openbookqa | 0 | acc | 34.80 | ± 2.13 |
| | | acc_norm | 45.00 | ± 2.23 |
| piqa | 0 | acc | 79.65 | ± 0.94 |
| | | acc_norm | 80.74 | ± 0.92 |
| winogrande | 0 | acc | 72.53 | ± 1.25 |

Average: 69.95%

### TruthfulQA

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| truthfulqa_mc | 1 | mc1 | 26.93 | ± 1.55 |
| | | mc2 | 43.91 | ± 1.39 |

Average: 43.91%

### Bigbench

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 55.79 | ± 3.61 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 69.65 | ± 2.40 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 31.78 | ± 2.90 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 19.50 | ± 2.09 |
| | | exact_str_match | 0.00 | ± 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 26.40 | ± 1.97 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 20.14 | ± 1.52 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 42.67 | ± 2.86 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 30.20 | ± 2.06 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.50 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 56.45 | ± 1.11 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 28.35 | ± 2.13 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 25.05 | ± 1.37 |
| bigbench_snarks | 0 | multiple_choice_grade | 46.41 | ± 3.72 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 51.01 | ± 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 25.70 | ± 1.38 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.00 | ± 1.17 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 16.34 | ± 0.88 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 42.67 | ± 2.86 |

Average: 36.7%

Average score: 45.42%
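The overall score appears to be the unweighted mean of the four suite averages reported above (with TruthfulQA contributing its mc2 value). A small sketch under that assumption:

```python
# Combine the four suite averages into the overall score, assuming
# an unweighted mean across suites.
suite_averages = {
    "AGIEval": 31.1,
    "GPT4All": 69.95,
    "TruthfulQA": 43.91,  # mc2
    "Bigbench": 36.7,
}
overall = sum(suite_averages.values()) / len(suite_averages)
print(f"{overall:.3f}")  # 45.415, i.e. 45.42 when rounded to two decimals
```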

Elapsed time: 04:03:04
