meliksahturker committed
Update README.md
README.md CHANGED
@@ -24,8 +24,9 @@ language:
 ---
 
 # Dataset Card for Dataset Name
-
-This dataset originally created for training [VBART](https://arxiv.org/abs/2403.01308) and later used for training [
+vngrs-web-corpus is a mixed dataset made of the cleaned Turkish sections of [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4).
+This dataset was originally created for training [VBART](https://arxiv.org/abs/2403.01308) and was later used for training [TURNA](https://arxiv.org/abs/2401.14373). The cleaning procedures of this dataset are explained in Appendix A of the [VBART paper](https://arxiv.org/abs/2403.01308).
+It consists of 50.3M pages and 25.33B tokens when tokenized by the VBART tokenizer.
 
 ## Dataset Details
 
@@ -37,7 +38,7 @@ This dataset was originally created for training [VBART](https://arxiv.org/abs/2403.
 
 ## Uses
 
-
+vngrs-web-corpus is mainly intended for pretraining language models and word representations.
 
 ## Dataset Structure
 
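For readers who want to try the corpus described in this commit, here is a minimal loading sketch (not part of the diff). The Hub id `vngrs-ai/vngrs-web-corpus` and the `text` column name are assumptions for illustration; verify both against the dataset page.

```python
# Minimal sketch for reading the corpus with the `datasets` library.
# ASSUMPTIONS: the hub id "vngrs-ai/vngrs-web-corpus" and the "text"
# column are illustrative; check the dataset card for the real schema.
from itertools import islice

from datasets import load_dataset

# Stream instead of downloading: the card reports ~50.3M pages.
ds = load_dataset("vngrs-ai/vngrs-web-corpus", split="train", streaming=True)

# Peek at the first few pages.
for example in islice(ds, 3):
    print(example["text"][:200])
```

Streaming mode is the natural choice here: at 25.33B tokens, materializing the whole corpus on disk is rarely necessary for inspection or pretraining pipelines that consume it sequentially.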