Modalities:
Text
Formats:
parquet
Languages:
Turkish
Libraries:
Datasets
Dask
meliksahturker committed eeda0b9 (verified), 1 parent: 8032b36

Update README.md

Files changed (1): README.md (+4 −3)
README.md CHANGED

@@ -24,8 +24,9 @@ language:
 ---
 
 # Dataset Card for Dataset Name
-Vngrs-web-corpus is a mixed-dataset made of cleaned Turkish sections of [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4).
-This dataset originally created for training [VBART](https://arxiv.org/abs/2403.01308) and later used for training [Turna](https://arxiv.org/abs/2401.14373). Cleaning procedures of this dataset is explained at appendix A of the [VBart Paper](https://arxiv.org/abs/2401.14373)
+vngrs-web-corpus is a mixed dataset made of the cleaned Turkish sections of [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4).
+This dataset was originally created for training [VBART](https://arxiv.org/abs/2403.01308) and later used for training [TURNA](https://arxiv.org/abs/2401.14373). The cleaning procedures of this dataset are explained in Appendix A of the [VBART paper](https://arxiv.org/abs/2401.14373).
+It consists of 50.3M pages and 25.33B tokens when tokenized by the VBART tokenizer.
 
 ## Dataset Details
 
@@ -37,7 +38,7 @@ This dataset originally created for training [VBART](https://arxiv.org/abs/2403.
 
 ## Uses
 
-VNGRS-web-corpus is mainly inteded to pretrain language models and word represantations.
+vngrs-web-corpus is mainly intended to pretrain language models and word representations.
 
 ## Dataset Structure