@gblazex
Created January 10, 2024 03:40
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---:|---:|---:|---:|---:|
| MistralTrix-v1 | 44.98 | 76.62 | 71.44 | 47.17 | 60.05 |

AGIEval

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| agieval_aqua_rat | 0 | acc | 25.59 | ± 2.74 |
| | | acc_norm | 24.80 | ± 2.72 |
| agieval_logiqa_en | 0 | acc | 37.48 | ± 1.90 |
| | | acc_norm | 38.56 | ± 1.91 |
| agieval_lsat_ar | 0 | acc | 25.22 | ± 2.87 |
| | | acc_norm | 24.35 | ± 2.84 |
| agieval_lsat_lr | 0 | acc | 50.20 | ± 2.22 |
| | | acc_norm | 51.57 | ± 2.22 |
| agieval_lsat_rc | 0 | acc | 65.43 | ± 2.91 |
| | | acc_norm | 65.06 | ± 2.91 |
| agieval_sat_en | 0 | acc | 78.16 | ± 2.89 |
| | | acc_norm | 77.67 | ± 2.91 |
| agieval_sat_en_without_passage | 0 | acc | 46.12 | ± 3.48 |
| | | acc_norm | 45.15 | ± 3.48 |
| agieval_sat_math | 0 | acc | 35.00 | ± 3.22 |
| | | acc_norm | 32.73 | ± 3.17 |

Average: 44.98%
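
For anyone trying to reproduce this figure: the per-suite average appears to be the unweighted mean of each task's acc_norm score (with plain acc used where no acc_norm is reported). A minimal Python check against the rounded AGIEval values above; the variable names are illustrative only:

```python
# AGIEval acc_norm values, taken from the table above in row order (illustrative check only).
agieval_acc_norm = [24.80, 38.56, 24.35, 51.57, 65.06, 77.67, 45.15, 32.73]

average = sum(agieval_acc_norm) / len(agieval_acc_norm)
print(round(average, 2))  # 44.99; the reported 44.98 was presumably averaged from unrounded scores
```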

GPT4All

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| arc_challenge | 0 | acc | 67.66 | ± 1.37 |
| | | acc_norm | 68.26 | ± 1.36 |
| arc_easy | 0 | acc | 87.37 | ± 0.68 |
| | | acc_norm | 81.82 | ± 0.79 |
| boolq | 1 | acc | 87.49 | ± 0.58 |
| hellaswag | 0 | acc | 70.09 | ± 0.46 |
| | | acc_norm | 86.38 | ± 0.34 |
| openbookqa | 0 | acc | 38.60 | ± 2.18 |
| | | acc_norm | 49.20 | ± 2.24 |
| piqa | 0 | acc | 83.90 | ± 0.86 |
| | | acc_norm | 84.98 | ± 0.83 |
| winogrande | 0 | acc | 78.22 | ± 1.16 |

Average: 76.62%
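
The same averaging rule lands exactly on the GPT4All figure; note that boolq and winogrande contribute their acc scores, since neither reports an acc_norm. Again only an illustrative check:

```python
# acc_norm per GPT4All task, with plain acc for boolq and winogrande (no acc_norm reported).
gpt4all_scores = [68.26, 81.82, 87.49, 86.38, 49.20, 84.98, 78.22]

print(round(sum(gpt4all_scores) / len(gpt4all_scores), 2))  # 76.62
```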

TruthfulQA

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| truthfulqa_mc | 1 | mc1 | 57.41 | ± 1.73 |
| | | mc2 | 71.44 | ± 1.50 |

Average: 71.44%

Bigbench

| Task | Version | Metric | Value | Stderr |
|---|---:|---|---:|---:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 58.42 | ± 3.59 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 62.06 | ± 2.53 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 41.86 | ± 3.08 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 23.40 | ± 2.24 |
| | | exact_str_match | 0.00 | ± 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 32.40 | ± 2.10 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 23.43 | ± 1.60 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 57.00 | ± 2.86 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 53.00 | ± 2.23 |
| bigbench_navigate | 0 | multiple_choice_grade | 55.80 | ± 1.57 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 67.10 | ± 1.05 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 51.12 | ± 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 38.58 | ± 1.54 |
| bigbench_snarks | 0 | multiple_choice_grade | 67.40 | ± 3.49 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 73.43 | ± 1.41 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 47.30 | ± 1.58 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.64 | ± 1.18 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.14 | ± 0.90 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 57.00 | ± 2.86 |

Average: 47.17%

Average score: 60.05%

Elapsed time: 02:55:01
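
The headline score is simply the unweighted mean of the four suite averages, assuming each suite is weighted equally:

```python
# Suite averages as reported above (illustrative check of the headline score).
suite_averages = {"AGIEval": 44.98, "GPT4All": 76.62, "TruthfulQA": 71.44, "Bigbench": 47.17}

print(round(sum(suite_averages.values()) / len(suite_averages), 2))  # 60.05
```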
