---
# Dataset Card for "bookcorpus_deduplicated"

## Dataset Summary
This is a deduplicated version of the original [BookCorpus dataset](https://huggingface.co/datasets/bookcorpus).
BookCorpus (Zhu et al., 2015), which was used to train popular models such as BERT, contains a substantial number of exact-duplicate documents: [Bandy and Vincent (2021)](https://arxiv.org/abs/2105.05241) found that thousands of its books are duplicated, with only 7,185 unique books out of 11,038 in total.

Effect of deduplication:
- Number of lines: 38,832,894 vs. 74,004,228
- Dataset size: 2.91 GB vs. 4.63 GB

The dataset has been shuffled, so adjacent texts are no longer consecutive passages from the same book.
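
To sanity-check those numbers, one can load the deduplicated corpus and count its rows. A minimal sketch, assuming this card's repository id is `saibo/bookcorpus_deduplicated` (an assumption; adjust to wherever the dataset is actually hosted):

```python
from datasets import load_dataset

# Assumed repository id for this dataset card; not confirmed by the card itself.
dedup = load_dataset("saibo/bookcorpus_deduplicated", split="train")

# Expected to match the deduplicated line count above: 38,832,894.
print(len(dedup))

# Rows are shuffled, so consecutive entries come from different books.
print(dedup[0]["text"])
```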

## Why deduplicate?
Deduplicating training data has shown several advantages, including:
- requiring fewer training steps to reach the same or better accuracy
- training models that emit memorized text ten times less frequently
- reducing energy consumption and carbon emissions

cf. [Deduplicating Training Data Makes Language Models Better](https://arxiv.org/abs/2107.06499)

## Deduplication script
```python
import pandas as pd
from datasets import load_dataset

# Load the original BookCorpus training split as a flat list of text lines
dataset = load_dataset("bookcorpus")["train"]["text"]
df = pd.DataFrame({"text": dataset})

# Drop exact-match duplicates
df_filtered = df["text"].drop_duplicates()

# Write the deduplicated lines to disk, then reload them as a new dataset
df_filtered.to_csv("bookcorpus_filtered.csv", index=False, header=False)
new_dataset = load_dataset("text", data_files={"train": "bookcorpus_filtered.csv"})
```
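
The line counts quoted in the summary are just the sizes of these two frames; a quick check, assuming the script above has just been run:

```python
# Compare row counts before and after exact-match deduplication.
print(f"original:     {len(df):,} lines")           # expected: 74,004,228
print(f"deduplicated: {len(df_filtered):,} lines")  # expected: 38,832,894
```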

More sophisticated deduplication algorithms, such as [google-research/deduplicate-text-datasets](https://github.com/google-research/deduplicate-text-datasets), can be applied to catch near-duplicates and further shrink the corpus.
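
As a lightweight illustration of going beyond exact string matching, the sketch below deduplicates on a normalized form of each line (lowercased, whitespace collapsed), so trivially different copies collapse to one entry. This is a hypothetical example for this card, not the method used to build the dataset:

```python
import hashlib
import re

def normalize(text: str) -> str:
    # Lowercase and collapse runs of whitespace so near-identical
    # copies of a line map to the same key.
    return re.sub(r"\s+", " ", text.lower()).strip()

def dedup_normalized(lines):
    # Keep the first occurrence of each normalized line.
    seen = set()
    kept = []
    for line in lines:
        key = hashlib.md5(normalize(line).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(line)
    return kept

# "Hello  World" and "hello world" normalize identically, so one is dropped.
print(dedup_normalized(["Hello  World", "hello world", "goodbye"]))
```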

## References
```bib
@misc{bandy2021addressing,
  doi       = {10.48550/ARXIV.2105.05241},
  url       = {https://arxiv.org/abs/2105.05241},
  author    = {Bandy, Jack and Vincent, Nicholas},
  keywords  = {Computation and Language (cs.CL), Computers and Society (cs.CY), Machine Learning (cs.LG), FOS: Computer and information sciences},
  title     = {Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus},
  publisher = {arXiv},
  year      = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

```bib
@misc{lee2021deduplicating,
  doi       = {10.48550/ARXIV.2107.06499},
  url       = {https://arxiv.org/abs/2107.06499},
  author    = {Lee, Katherine and Ippolito, Daphne and Nystrom, Andrew and Zhang, Chiyuan and Eck, Douglas and Callison-Burch, Chris and Carlini, Nicholas},
  keywords  = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences},
  title     = {Deduplicating Training Data Makes Language Models Better},
  publisher = {arXiv},
  year      = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

```bib
@misc{treviso2022efficient,
  doi       = {10.48550/ARXIV.2209.00099},
  url       = {https://arxiv.org/abs/2209.00099},
  author    = {Treviso, Marcos and Ji, Tianchu and Lee, Ji-Ung and van Aken, Betty and Cao, Qingqing and Ciosici, Manuel R. and Hassid, Michael and Heafield, Kenneth and Hooker, Sara and Martins, Pedro H. and Martins, André F. T. and Milder, Peter and Raffel, Colin and Simpson, Edwin and Slonim, Noam and Balasubramanian, Niranjan and Derczynski, Leon and Schwartz, Roy},
  keywords  = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title     = {Efficient Methods for Natural Language Processing: A Survey},
  publisher = {arXiv},
  year      = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)