Haleshot committed
Commit d8ca261 · verified · 1 Parent(s): 6c32edf

Add `Evaluation Results` Table


Added it from my Gist - https://gist.github.com/Haleshot/57635a02b68836ec2305c49e8f6c1dd1

Files changed (1)
  1. README.md +125 -1
README.md CHANGED
@@ -9,6 +9,45 @@ tags:
  - AI-MO/NuminaMath-7B-TIR
  - deepseek-ai/DeepSeek-Prover-V1.5-RL
  license: apache-2.0
+ model-index:
+ - name: Mathmate-7B-DELLA
+   results:
+   - task:
+       type: text-generation
+     dataset:
+       name: AGIEval
+       type: AGIEval
+     metrics:
+     - name: AGIEval
+       type: AGIEval
+       value: 21.95
+   - task:
+       type: text-generation
+     dataset:
+       name: GPT4All
+       type: GPT4All
+     metrics:
+     - name: GPT4All
+       type: GPT4All
+       value: 36.5
+   - task:
+       type: text-generation
+     dataset:
+       name: TruthfulQA
+       type: TruthfulQA
+     metrics:
+     - name: TruthfulQA
+       type: TruthfulQA
+       value: 48.08
+   - task:
+       type: text-generation
+     dataset:
+       name: Bigbench
+       type: Bigbench
+     metrics:
+     - name: Bigbench
+       type: Bigbench
+       value: 28.89
  ---

  # Mathmate-7B-DELLA
 
@@ -59,4 +98,89 @@ pipeline = transformers.pipeline(

  outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
  print(outputs[0]["generated_text"])
- ```
+ ```
+
+ ## 📊 Evaluation Results
+
+ Evaluation results obtained with LLM AutoEval:
+
+ | Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
+ |-------|---------|---------|------------|----------|---------|
+ | [Mathmate-7B-DELLA](https://huggingface.co/Haleshot/Mathmate-7B-DELLA) | 21.95 | 36.5 | 48.08 | 28.89 | 33.86 |
+
+ ### AGIEval
+ | Task | Version | Metric | Value | Stderr |
+ |------|---------|--------|-------|--------|
+ | agieval_aqua_rat | 0 | acc | 21.26 | 2.57 |
+ | | | acc_norm | 22.05 | 2.61 |
+ | agieval_logiqa_en | 0 | acc | 20.89 | 1.59 |
+ | | | acc_norm | 25.65 | 1.71 |
+ | agieval_lsat_ar | 0 | acc | 21.74 | 2.73 |
+ | | | acc_norm | 19.57 | 2.62 |
+ | agieval_lsat_lr | 0 | acc | 13.92 | 1.53 |
+ | | | acc_norm | 18.82 | 1.73 |
+ | agieval_lsat_rc | 0 | acc | 21.19 | 2.50 |
+ | | | acc_norm | 18.96 | 2.39 |
+ | agieval_sat_en | 0 | acc | 24.76 | 3.01 |
+ | | | acc_norm | 21.36 | 2.86 |
+ | agieval_sat_en_without_passage | 0 | acc | 27.18 | 3.11 |
+ | | | acc_norm | 23.30 | 2.95 |
+ | agieval_sat_math | 0 | acc | 25.45 | 2.94 |
+ | | | acc_norm | 25.91 | 2.96 |
+
+ Average: 21.95%
+
+ ### GPT4All
+ | Task | Version | Metric | Value | Stderr |
+ |------|---------|--------|-------|--------|
+ | arc_challenge | 0 | acc | 22.61 | 1.22 |
+ | | | acc_norm | 25.68 | 1.28 |
+ | arc_easy | 0 | acc | 25.25 | 0.89 |
+ | | | acc_norm | 25.08 | 0.89 |
+ | boolq | 1 | acc | 52.02 | 0.87 |
+ | hellaswag | 0 | acc | 25.77 | 0.44 |
+ | | | acc_norm | 26.09 | 0.44 |
+ | openbookqa | 0 | acc | 18.40 | 1.73 |
+ | | | acc_norm | 28.80 | 2.03 |
+ | piqa | 0 | acc | 51.31 | 1.17 |
+ | | | acc_norm | 50.11 | 1.17 |
+ | winogrande | 0 | acc | 47.75 | 1.40 |
+
+ Average: 36.5%
+
+ ### TruthfulQA
+ | Task | Version | Metric | Value | Stderr |
+ |------|---------|--------|-------|--------|
+ | truthfulqa_mc | 1 | mc1 | 22.77 | 1.47 |
+ | | | mc2 | 48.08 | 1.70 |
+
+ Average: 48.08%
+
+ ### Bigbench
+ | Task | Version | Metric | Value | Stderr |
+ |------|---------|--------|-------|--------|
+ | bigbench_causal_judgement | 0 | multiple_choice_grade | 49.47 | 3.64 |
+ | bigbench_date_understanding | 0 | multiple_choice_grade | 13.55 | 1.78 |
+ | bigbench_disambiguation_qa | 0 | multiple_choice_grade | 30.23 | 2.86 |
+ | bigbench_geometric_shapes | 0 | multiple_choice_grade | 10.03 | 1.59 |
+ | | | exact_str_match | 0.00 | 0.00 |
+ | bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 19.40 | 1.77 |
+ | bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 14.00 | 1.31 |
+ | bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 36.67 | 2.79 |
+ | bigbench_movie_recommendation | 0 | multiple_choice_grade | 23.60 | 1.90 |
+ | bigbench_navigate | 0 | multiple_choice_grade | 47.10 | 1.58 |
+ | bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 13.05 | 0.75 |
+ | bigbench_ruin_names | 0 | multiple_choice_grade | 53.79 | 2.36 |
+ | bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 15.63 | 1.15 |
+ | bigbench_snarks | 0 | multiple_choice_grade | 46.96 | 3.72 |
+ | bigbench_sports_understanding | 0 | multiple_choice_grade | 49.70 | 1.59 |
+ | bigbench_temporal_sequences | 0 | multiple_choice_grade | 25.80 | 1.38 |
+ | bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 19.76 | 1.13 |
+ | bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 14.69 | 0.85 |
+ | bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 36.67 | 2.79 |
+
+ Average: 28.89%
+
+ Average score: 33.86%
+
+ Elapsed time: 03:52:09
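
A note on how the headline numbers fit together (my reading of the tables above, not something stated in the commit): each suite value in the summary row appears to be the plain mean of that suite's task scores (acc_norm where reported, otherwise acc, mc2, or multiple_choice_grade), and the final "Average score" is the mean of the four suite values. A minimal Python check:

```python
# Sanity check (not part of the commit): reproduce the reported averages from the tables above.
agieval_acc_norm = [22.05, 25.65, 19.57, 18.82, 18.96, 21.36, 23.30, 25.91]
print(sum(agieval_acc_norm) / len(agieval_acc_norm))  # ~21.95, the AGIEval value in the summary row

gpt4all_scores = [25.68, 25.08, 52.02, 26.09, 28.80, 50.11, 47.75]  # acc_norm; plain acc for boolq/winogrande
print(sum(gpt4all_scores) / len(gpt4all_scores))      # ~36.50

suite_averages = [21.95, 36.5, 48.08, 28.89]          # AGIEval, GPT4All, TruthfulQA, Bigbench
print(sum(suite_averages) / len(suite_averages))      # ~33.86, the reported "Average score"
```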
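The `model-index` block added in the first hunk is the metadata Hugging Face uses to render the evaluation widget on the model page. A hedged sketch of reading those values back: `ModelCard.load` and the `EvalResult` attribute names come from `huggingface_hub`, not from this commit, and the snippet assumes the repo is reachable on the Hub.

```python
# Read the model-index metadata added by this commit via huggingface_hub.
from huggingface_hub import ModelCard

card = ModelCard.load("Haleshot/Mathmate-7B-DELLA")
for result in card.data.eval_results or []:
    # Should print the four suite scores declared above: AGIEval 21.95, GPT4All 36.5, ...
    print(result.dataset_name, result.metric_value)
```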
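Finally, the second hunk shows only the last two lines of the card's usage snippet (its header names the enclosing `pipeline = transformers.pipeline(` call). Below is a minimal, hedged sketch of what a complete call might look like: the model id comes from the summary-table link, while the prompt text, dtype, and device settings are illustrative assumptions rather than the card's exact code.

```python
import torch
import transformers

model_id = "Haleshot/Mathmate-7B-DELLA"  # from the link in the evaluation summary table

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,  # assumption: half precision so the 7B model fits on a single GPU
    device_map="auto",
)

prompt = "Solve for x: 3x + 5 = 20."  # placeholder prompt, not from the card

# The two lines below are the ones visible in the diff (README lines 99-100).
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```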