---
license: unknown
task_categories:
- fill-mask
- text-generation
language:
- en
pretty_name: RoBERTa Pretrain Dataset
size_categories:
- 10M<n<100M
---

# Dataset Card for RoBERTa Pretrain

### Dataset Summary

This is the concatenation of the datasets used to pretrain RoBERTa.

The dataset is not shuffled and contains raw text. It is packaged for convenience.

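Because the corpus ships unshuffled, you will usually want to shuffle it after loading. A minimal sketch of that pattern, assuming a hypothetical Hub repository id (`your-org/roberta-pretrain`, not part of this card):

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the actual path of this dataset on the Hub.
dataset = load_dataset("your-org/roberta-pretrain", split="train")

# The raw text is stored in document order, so shuffle before sampling or pretraining.
dataset = dataset.shuffle(seed=42)
print(dataset[0]["text"])
```
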
The dataset is essentially the same as:

```python
from datasets import load_dataset, concatenate_datasets

bookcorpus = load_dataset("bookcorpus", split="train")
openweb = load_dataset("openwebtext", split="train")
cc_news = load_dataset("cc_news", split="train")
# Keep only the raw text column so all four datasets share the same schema.
cc_news = cc_news.remove_columns([col for col in cc_news.column_names if col != "text"])
cc_stories = load_dataset("spacemanidol/cc-stories", split="train")

dataset = concatenate_datasets([bookcorpus, openweb, cc_news, cc_stories])
```
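
Since `fill-mask` is listed among the task categories, the raw text is typically tokenized and dynamically masked before RoBERTa-style pretraining. A minimal sketch of that preprocessing, building on the `dataset` object above and assuming the `roberta-base` tokenizer and Hugging Face `transformers` (neither is part of this card; sequence length and masking rate are the usual RoBERTa settings):

```python
from transformers import RobertaTokenizerFast, DataCollatorForLanguageModeling

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

def tokenize(batch):
    # Truncate to the 512-token maximum sequence length used for RoBERTa pretraining.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking: 15% of tokens are masked on the fly at batching time.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
```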