Datasets: gsgoncalves committed "Update README.md"
Commit: 9f60b65 • Parent(s): fd0b7fe

README.md CHANGED
@@ -1,15 +1,26 @@
 ---
-num_examples: 84831541
-download_size: 28799016792
-dataset_size: 47173330687
+license: unknown
+task_categories:
+- fill-mask
+- text-generation
+language:
+- en
+pretty_name: RoBERTa Pretrain Dataset
 ---
-# Dataset Card for
+# Dataset Card for RoBERTa Pretrain
+
+### Dataset Summary
+
+This is the concatenation of the datasets used to pretrain RoBERTa.
+The dataset is not shuffled and contains raw text. It is packaged for convenience.
+
+It is essentially the same as:
+```python
+from datasets import load_dataset, concatenate_datasets
+
+bookcorpus = load_dataset("bookcorpus", split="train")
+openweb = load_dataset("openwebtext", split="train")
+cc_news = load_dataset("cc_news", split="train")
+# keep only the raw text column from CC-News
+cc_news = cc_news.remove_columns([col for col in cc_news.column_names if col != "text"])
+cc_stories = load_dataset("spacemanidol/cc-stories", split="train")
+dataset = concatenate_datasets([bookcorpus, openweb, cc_news, cc_stories])
+```