Update README.md
README.md (CHANGED)
````diff
@@ -324,6 +324,19 @@ if __name__ == "__main__":
     main()
 ```
 
+## Update (March 31st, 2025)
+
+To improve inference efficiency and streamline data collation, we’ve decided to drop a small number of exceptionally long samples from the dataset.
+
+We’re using the `meta-llama/Llama-3.2-1B-instruct` tokenizer, and the filtering criteria are as follows:
+- Github_easy: Samples longer than 1024 tokens (5 out of 582 removed)
+- Github_medium: Samples longer than 2048 tokens (7 out of 593 removed)
+- Github_hard: Samples longer than 8192 tokens (4 out of 372 removed)
+- Other subsets are not touched
+
+Since the number of discarded samples is minimal, this change is expected to have at most a 1% impact on results.
+
+
 ## ⚠️ Important Update (March 10th, 2025)
 
 We have restructured the dataset to include train/val/test splits. If you downloaded the dataset before this date, you might encounter errors like `KeyError: 'Github_easy'`.
````
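The added section describes the length filtering but not the script that performed it. Below is a minimal sketch of how one might reproduce that filtering with the `datasets` and `transformers` libraries; the dataset id (`epfl-dlab/JSONSchemaBench`) and the text field name (`json_schema`) are assumptions for illustration and are not confirmed by this diff.

```python
"""Sketch of the token-length filtering described in the March 31st update.

Assumptions (not taken from the diff): the dataset id and the name of the
text field whose token length is measured.
"""
from datasets import load_dataset
from transformers import AutoTokenizer

# Tokenizer named in the update note (hub id capitalization may differ).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

# Per-subset token limits stated in the note; other subsets are left untouched.
MAX_TOKENS = {"Github_easy": 1024, "Github_medium": 2048, "Github_hard": 8192}

DATASET_ID = "epfl-dlab/JSONSchemaBench"  # assumed dataset id, adjust as needed
TEXT_FIELD = "json_schema"                # assumed field to measure, adjust as needed


def within_limit(example, limit):
    # Keep a sample only if its tokenized length is at or below the subset's limit.
    return len(tokenizer(example[TEXT_FIELD])["input_ids"]) <= limit


for subset, limit in MAX_TOKENS.items():
    # Each subset is a DatasetDict with the train/val/test splits added on March 10th.
    splits = load_dataset(DATASET_ID, subset)
    kept = splits.filter(lambda ex, lim=limit: within_limit(ex, lim))
    print(subset, {name: len(ds) for name, ds in kept.items()})
```

Running this over the three Github subsets should drop roughly the handful of samples quoted in the note (5, 7, and 4 respectively), assuming the same tokenizer and text field are used.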