Update README.md
README.md CHANGED
@@ -65,15 +65,15 @@ generated_text = generate_text(prompt)
 print(generated_text)
 ```
 
-## Training Details
 
-
+
+<!--## Training Details -->
 ### Training Data
 This link provides the Evol-Instruct question-and-answer dataset
 https://raw.githubusercontent.com/M-e-e-n-a/Synthetic-Dataset-Creation/main/combined_dataset.json
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
-
+<!--### Training Procedure -->
 
 <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
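Note: the Training Data hunk above only links the raw Evol-Instruct JSON file. As a rough, hedged sketch of how that file could be pulled in for inspection or fine-tuning, the generic `json` builder in the `datasets` library accepts the raw URL directly; the split name and record fields are assumptions here, since the card does not document them.

```python
# Hedged sketch: load the Evol-Instruct Q&A JSON linked in the Training Data
# section. Nothing below is documented in the card; inspect the first record
# to see which fields the file actually contains.
from datasets import load_dataset

DATA_URL = (
    "https://raw.githubusercontent.com/M-e-e-n-a/Synthetic-Dataset-Creation/"
    "main/combined_dataset.json"
)

dataset = load_dataset("json", data_files=DATA_URL, split="train")

print(dataset)     # row count and column names
print(dataset[0])  # one sample record
```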
@@ -197,7 +197,7 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 [More Information Needed]--->
 ## Results
 
-
+## Evaluation Metrics
 
 <table>
 <thead>
@@ -211,60 +211,61 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 </thead>
 <tbody>
 <tr>
-<td>ROUGE-1</td>
-<td>0.3117</td>
-<td>0.3188</td>
-<td>0.2637</td>
-<td>0.3281</td>
+<td align="center">ROUGE-1</td>
+<td align="center">0.3117</td>
+<td align="center">0.3188</td>
+<td align="center">0.2637</td>
+<td align="center">0.3281</td>
 </tr>
 <tr>
-<td>ROUGE-2</td>
-<td>0.1867</td>
-<td>0.1176</td>
-<td>0.1573</td>
-<td>0.1270</td>
+<td align="center">ROUGE-2</td>
+<td align="center">0.1867</td>
+<td align="center">0.1176</td>
+<td align="center">0.1573</td>
+<td align="center">0.1270</td>
 </tr>
 <tr>
-<td>ROUGE-L</td>
-<td>0.1818</td>
-<td>0.1449</td>
-<td>0.2637</td>
-<td>0.2031</td>
+<td align="center">ROUGE-L</td>
+<td align="center">0.1818</td>
+<td align="center">0.1449</td>
+<td align="center">0.2637</td>
+<td align="center">0.2031</td>
 </tr>
 <tr>
-<td>ROUGE-LSUM</td>
-<td>0.1818</td>
-<td>0.1449</td>
-<td>0.2637</td>
-<td>0.2031</td>
+<td align="center">ROUGE-LSUM</td>
+<td align="center">0.1818</td>
+<td align="center">0.1449</td>
+<td align="center">0.2637</td>
+<td align="center">0.2031</td>
 </tr>
 <tr>
-<td>METEOR</td>
-<td>0.0693</td>
-<td>0.3088</td>
-<td>0.4377</td>
-<td>0.3662</td>
+<td align="center">METEOR</td>
+<td align="center">0.0693</td>
+<td align="center">0.3088</td>
+<td align="center">0.4377</td>
+<td align="center">0.3662</td>
 </tr>
 <tr>
-<td>BERTScore</td>
-<td>0.8262</td>
-<td>0.8538</td>
-<td>0.9070</td>
-<td>0.8782</td>
+<td align="center">BERTScore</td>
+<td align="center">0.8262</td>
+<td align="center">0.8538</td>
+<td align="center">0.9070</td>
+<td align="center">0.8782</td>
 </tr>
 <tr>
-<td>G-Eval</td>
-<td>0.35</td>
-<td>0.42</td>
-<td>0.78</td>
-<td>0.87</td>
+<td align="center">G-Eval</td>
+<td align="center">0.35</td>
+<td align="center">0.42</td>
+<td align="center">0.78</td>
+<td align="center">0.87</td>
 </tr>
 <tr>
-<td>QAG Score</td>
-<td>0.1046</td>
-<td>0.2061</td>
-<td>0.3762</td>
-<td>0.2609</td>
+<td align="center">QAG Score</td>
+<td align="center">0.1046</td>
+<td align="center">0.2061</td>
+<td align="center">0.3762</td>
+<td align="center">0.2609</td>
 </tr>
 </tbody>
 </table>
+
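Note: the new Evaluation Metrics table reports ROUGE-1/2/L/LSUM, METEOR, BERTScore, G-Eval, and QAG Score. The reference-based scores (ROUGE, METEOR, BERTScore) can be reproduced with the Hugging Face `evaluate` library along the lines of the minimal sketch below; the prediction and reference lists are placeholders rather than the card's actual evaluation data, and the LLM-judge metrics (G-Eval, QAG Score) are not covered here.

```python
# Minimal sketch, assuming the reference-based metrics in the table were
# computed over lists of generated answers and gold answers. The two lists
# below are placeholders, not the card's evaluation data.
import evaluate

predictions = ["The fine-tuned model answers the question."]
references = ["The fine-tuned model should answer the question."]

rouge = evaluate.load("rouge")          # reports rouge1, rouge2, rougeL, rougeLsum
meteor = evaluate.load("meteor")
bertscore = evaluate.load("bertscore")

print(rouge.compute(predictions=predictions, references=references))
print(meteor.compute(predictions=predictions, references=references))
print(bertscore.compute(predictions=predictions, references=references, lang="en"))
```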