macadeliccc committed on
Commit 0e496e2 · verified · 1 Parent(s): 46e8d4d

Update README.md

Files changed (1)
  1. README.md +86 -0
README.md CHANGED
@@ -36,9 +36,95 @@ What's the capital of France?<|im_end|>
  <|im_start|>assistant
  Paris.
  ```
+ ## Quantizations

  [GGUF](https://huggingface.co/macadeliccc/Mistral-7B-v0.2-OpenHermes-GGUF)

+ ### Evaluations
+
+ Thanks to Maxime Labonne for the evaluation:
+
+ | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
+ |-------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
+ |[Mistral-7B-v0.2-OpenHermes](https://huggingface.co/macadeliccc/Mistral-7B-v0.2-OpenHermes)| 35.57| 67.15| 42.06| 36.27| 45.26|
+
+ ### AGIEval
+ | Task |Version| Metric |Value| |Stderr|
+ |------------------------------|------:|--------|----:|---|-----:|
+ |agieval_aqua_rat | 0|acc |24.02|± | 2.69|
+ | | |acc_norm|21.65|± | 2.59|
+ |agieval_logiqa_en | 0|acc |28.11|± | 1.76|
+ | | |acc_norm|34.56|± | 1.87|
+ |agieval_lsat_ar | 0|acc |27.83|± | 2.96|
+ | | |acc_norm|23.48|± | 2.80|
+ |agieval_lsat_lr | 0|acc |33.73|± | 2.10|
+ | | |acc_norm|33.14|± | 2.09|
+ |agieval_lsat_rc | 0|acc |48.70|± | 3.05|
+ | | |acc_norm|39.78|± | 2.99|
+ |agieval_sat_en | 0|acc |67.48|± | 3.27|
+ | | |acc_norm|64.56|± | 3.34|
+ |agieval_sat_en_without_passage| 0|acc |38.83|± | 3.40|
+ | | |acc_norm|37.38|± | 3.38|
+ |agieval_sat_math | 0|acc |32.27|± | 3.16|
+ | | |acc_norm|30.00|± | 3.10|
+
+ Average: 35.57%
+
+ ### GPT4All
+ | Task |Version| Metric |Value| |Stderr|
+ |-------------|------:|--------|----:|---|-----:|
+ |arc_challenge| 0|acc |45.05|± | 1.45|
+ | | |acc_norm|48.46|± | 1.46|
+ |arc_easy | 0|acc |77.27|± | 0.86|
+ | | |acc_norm|73.78|± | 0.90|
+ |boolq | 1|acc |68.62|± | 0.81|
+ |hellaswag | 0|acc |59.63|± | 0.49|
+ | | |acc_norm|79.66|± | 0.40|
+ |openbookqa | 0|acc |31.40|± | 2.08|
+ | | |acc_norm|43.40|± | 2.22|
+ |piqa | 0|acc |80.25|± | 0.93|
+ | | |acc_norm|82.05|± | 0.90|
+ |winogrande | 0|acc |74.11|± | 1.23|
+
+ Average: 67.15%
+
+ ### TruthfulQA
+ | Task |Version|Metric|Value| |Stderr|
+ |-------------|------:|------|----:|---|-----:|
+ |truthfulqa_mc| 1|mc1 |27.54|± | 1.56|
+ | | |mc2 |42.06|± | 1.44|
+
+ Average: 42.06%
+
+ ### Bigbench
+ | Task |Version| Metric |Value| |Stderr|
+ |------------------------------------------------|------:|---------------------|----:|---|-----:|
+ |bigbench_causal_judgement | 0|multiple_choice_grade|56.32|± | 3.61|
+ |bigbench_date_understanding | 0|multiple_choice_grade|66.40|± | 2.46|
+ |bigbench_disambiguation_qa | 0|multiple_choice_grade|45.74|± | 3.11|
+ |bigbench_geometric_shapes | 0|multiple_choice_grade|10.58|± | 1.63|
+ | | |exact_str_match | 0.00|± | 0.00|
+ |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|25.00|± | 1.94|
+ |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|17.71|± | 1.44|
+ |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|37.33|± | 2.80|
+ |bigbench_movie_recommendation | 0|multiple_choice_grade|29.40|± | 2.04|
+ |bigbench_navigate | 0|multiple_choice_grade|50.00|± | 1.58|
+ |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|42.50|± | 1.11|
+ |bigbench_ruin_names | 0|multiple_choice_grade|39.06|± | 2.31|
+ |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|12.93|± | 1.06|
+ |bigbench_snarks | 0|multiple_choice_grade|69.06|± | 3.45|
+ |bigbench_sports_understanding | 0|multiple_choice_grade|49.80|± | 1.59|
+ |bigbench_temporal_sequences | 0|multiple_choice_grade|26.50|± | 1.40|
+ |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|21.20|± | 1.16|
+ |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|16.06|± | 0.88|
+ |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|37.33|± | 2.80|
+
+ Average: 36.27%
+
+ Average score: 45.26%
+
+ Elapsed time: 01:49:22
+
  - **Developed by:** macadeliccc
  - **License:** apache-2.0
  - **Finetuned from model:** alpindale/Mistral-7B-v0.2
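
The context lines at the top of the hunk show the tail of a ChatML-formatted prompt from the model card. As a minimal sketch of how that format might be driven from Python with the transformers library (the repo id is taken from the evaluation table above; the generation settings are illustrative assumptions, not from the card):

```python
# Minimal sketch, assuming the repo id from the evaluation table above.
# Sampling settings are illustrative assumptions, not from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/Mistral-7B-v0.2-OpenHermes"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the ChatML prompt by hand, mirroring the README example.
prompt = (
    "<|im_start|>user\n"
    "What's the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(reply)  # expected to be along the lines of "Paris."
```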
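
For the GGUF export linked under "## Quantizations", a hedged sketch using llama-cpp-python; the filename glob below is an assumption, so check the repo's file list for the quant level you actually want:

```python
# Sketch: running the linked GGUF quantization with llama-cpp-python.
# The filename glob is an assumption; pick an actual file from
# https://huggingface.co/macadeliccc/Mistral-7B-v0.2-OpenHermes-GGUF
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="macadeliccc/Mistral-7B-v0.2-OpenHermes-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level
    n_ctx=4096,
)

out = llm(
    "<|im_start|>user\nWhat's the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n",
    max_tokens=32,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```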
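
The evaluation tables above have the markdown layout produced by lm-evaluation-harness (the credited run used Maxime Labonne's evaluation setup). A hedged sketch of re-running the GPT4All group with the harness's Python API; the task names are read off the table and their exact naming varies between harness versions:

```python
# Hedged sketch: reproducing the GPT4All rows with lm-evaluation-harness.
# Task names are taken from the table above; naming and availability can
# differ across harness versions, so treat this as an approximation of
# the original run, not a byte-for-byte reproduction.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=macadeliccc/Mistral-7B-v0.2-OpenHermes",
    tasks=[
        "arc_challenge", "arc_easy", "boolq", "hellaswag",
        "openbookqa", "piqa", "winogrande",
    ],
    num_fewshot=0,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```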