
@DHNishi
Created January 10, 2024 09:01
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|-----------------|--------:|--------:|-----------:|---------:|--------:|
| Silicon-Maid-7B | 44.74 | 74.26 | 61.5 | 45.32 | 56.45 |

AGIEval

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|--------|
| agieval_aqua_rat | 0 | acc | 24.02 | ± 2.69 |
| | | acc_norm | 23.23 | ± 2.65 |
| agieval_logiqa_en | 0 | acc | 39.63 | ± 1.92 |
| | | acc_norm | 39.32 | ± 1.92 |
| agieval_lsat_ar | 0 | acc | 25.65 | ± 2.89 |
| | | acc_norm | 24.78 | ± 2.85 |
| agieval_lsat_lr | 0 | acc | 51.57 | ± 2.22 |
| | | acc_norm | 51.76 | ± 2.21 |
| agieval_lsat_rc | 0 | acc | 60.97 | ± 2.98 |
| | | acc_norm | 60.22 | ± 2.99 |
| agieval_sat_en | 0 | acc | 79.13 | ± 2.84 |
| | | acc_norm | 78.64 | ± 2.86 |
| agieval_sat_en_without_passage | 0 | acc | 43.20 | ± 3.46 |
| | | acc_norm | 41.75 | ± 3.44 |
| agieval_sat_math | 0 | acc | 38.64 | ± 3.29 |
| | | acc_norm | 38.18 | ± 3.28 |

Average: 44.74%
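The reported AGIEval score appears to be the unweighted mean of the acc_norm values across the eight subtasks above; this is a reconstruction from the numbers in the table, not something the harness output states explicitly:

```python
# acc_norm values for the eight AGIEval subtasks, copied from the table above.
agieval_acc_norm = {
    "agieval_aqua_rat": 23.23,
    "agieval_logiqa_en": 39.32,
    "agieval_lsat_ar": 24.78,
    "agieval_lsat_lr": 51.76,
    "agieval_lsat_rc": 60.22,
    "agieval_sat_en": 78.64,
    "agieval_sat_en_without_passage": 41.75,
    "agieval_sat_math": 38.18,
}

# Unweighted mean over the subtasks; lands on the reported 44.74.
agieval_average = sum(agieval_acc_norm.values()) / len(agieval_acc_norm)
print(agieval_average)
```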

GPT4All

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|--------|
| arc_challenge | 0 | acc | 59.81 | ± 1.43 |
| | | acc_norm | 61.60 | ± 1.42 |
| arc_easy | 0 | acc | 83.54 | ± 0.76 |
| | | acc_norm | 80.13 | ± 0.82 |
| boolq | 1 | acc | 87.95 | ± 0.57 |
| hellaswag | 0 | acc | 66.73 | ± 0.47 |
| | | acc_norm | 84.51 | ± 0.36 |
| openbookqa | 0 | acc | 36.20 | ± 2.15 |
| | | acc_norm | 47.20 | ± 2.23 |
| piqa | 0 | acc | 82.05 | ± 0.90 |
| | | acc_norm | 82.97 | ± 0.88 |
| winogrande | 0 | acc | 75.45 | ± 1.21 |

Average: 74.26%
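The GPT4All score is consistent with taking acc_norm where it is reported and falling back to plain acc for the tasks that only report acc (boolq and winogrande). A sketch of that reconstruction, again inferred from the numbers rather than documented:

```python
# (acc, acc_norm) pairs from the table above; None marks a missing acc_norm.
gpt4all_scores = {
    "arc_challenge": (59.81, 61.60),
    "arc_easy": (83.54, 80.13),
    "boolq": (87.95, None),
    "hellaswag": (66.73, 84.51),
    "openbookqa": (36.20, 47.20),
    "piqa": (82.05, 82.97),
    "winogrande": (75.45, None),
}

# Prefer acc_norm when present, otherwise fall back to acc.
chosen = [an if an is not None else a for a, an in gpt4all_scores.values()]

# Unweighted mean over the seven tasks; lands on the reported 74.26.
gpt4all_average = sum(chosen) / len(chosen)
print(gpt4all_average)
```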

TruthfulQA

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|--------|
| truthfulqa_mc | 1 | mc1 | 43.7 | ± 1.74 |
| | | mc2 | 61.5 | ± 1.56 |

Average: 61.5%

Bigbench

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|--------|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 60.53 | ± 3.56 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 66.94 | ± 2.45 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 36.82 | ± 3.01 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 20.33 | ± 2.13 |
| | | exact_str_match | 0.00 | ± 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 31.60 | ± 2.08 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 20.86 | ± 1.54 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 52.67 | ± 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 37.80 | ± 2.17 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.10 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 70.10 | ± 1.02 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 53.79 | ± 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 30.46 | ± 1.46 |
| bigbench_snarks | 0 | multiple_choice_grade | 76.24 | ± 3.17 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 70.99 | ± 1.45 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 43.70 | ± 1.57 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 23.20 | ± 1.19 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 16.91 | ± 0.90 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 52.67 | ± 2.89 |

Average: 45.32%

Average score: 56.45%

Elapsed time: 02:10:00
