---
license: unknown
task_categories:
- fill-mask
- text-generation
language:
- en
pretty_name: RoBERTa Pretrain Dataset
size_categories:
- 10M<n<100M
---
# Dataset Card for RoBERTa Pretrain

### Dataset Summary

This is the concatenation of the datasets used to pretrain RoBERTa: BookCorpus, OpenWebText, CC-News, and CC-Stories.
The dataset is not shuffled and contains raw text; it is packaged for convenience.

It is essentially the same as:
```python
from datasets import load_dataset, concatenate_datasets

bookcorpus = load_dataset("bookcorpus", split="train")
openweb = load_dataset("openwebtext", split="train")
cc_news = load_dataset("cc_news", split="train")
# Keep only the raw text column so all four datasets share the same schema.
cc_news = cc_news.remove_columns([col for col in cc_news.column_names if col != "text"])
cc_stories = load_dataset("spacemanidol/cc-stories", split="train")
dataset = concatenate_datasets([bookcorpus, openweb, cc_news, cc_stories])
```