gsgoncalves committed
Commit 9f60b65
1 Parent(s): fd0b7fe

Update README.md

Files changed (1):
  1. README.md +23 -12

README.md CHANGED
@@ -1,15 +1,26 @@
  ---
- dataset_info:
-   features:
-   - name: text
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 47173330687
-     num_examples: 84831541
-   download_size: 28799016792
-   dataset_size: 47173330687
  ---
- # Dataset Card for "roberta_pretrain"

- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  ---
+ license: unknown
+ task_categories:
+ - fill-mask
+ - text-generation
+ language:
+ - en
+ pretty_name: RoBERTa Pretrain Dataset
  ---
+ # Dataset Card for RoBERTa Pretrain
+
+ ### Dataset Summary
+
+ This is the concatenation of the datasets used to pretrain RoBERTa.
+ The dataset is not shuffled and contains raw text. It is packaged for convenience.
+
+ It is essentially the same as:
+ ```python
+ from datasets import load_dataset, concatenate_datasets
+
+ bookcorpus = load_dataset("bookcorpus", split="train")
+ openweb = load_dataset("openwebtext", split="train")
+ cc_news = load_dataset("cc_news", split="train")
+ # Keep only the "text" column so cc_news matches the schema of the other corpora.
+ cc_news = cc_news.remove_columns([col for col in cc_news.column_names if col != "text"])
+ cc_stories = load_dataset("spacemanidol/cc-stories", split="train")
+ dataset = concatenate_datasets([bookcorpus, openweb, cc_news, cc_stories])
+ ```