Saibo committed · Commit f6fcc2a · 1 Parent(s): be7453e

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -32,7 +32,7 @@ Deduplication of training data has shown various advantages, including:
 cf [Deduplicating Training Data Makes Language Models Better](https://arxiv.org/abs/2107.06499)
 
 
-## Deduplication script:
+## Deduplication script
 ```python
 import pandas as pd
 from datasets import load_dataset
@@ -46,7 +46,7 @@ df_filtered = df["text"].drop_duplicates()
 df_filtered.to_csv("bookcorpus_filtered.csv", index=False, header=False)
 new_dataset = load_dataset("text", data_files={"train": "bookcorpus_filtered.csv"})
 ```
-
+The script runs in under a few minutes.
 More sophisticated deduplication algorithms, such as https://github.com/google-research/deduplicate-text-datasets, can be applied to further improve the result.
 
 ## Reference
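
For context, below is a minimal, self-contained sketch of the deduplication script the diff refers to. The diff only shows the imports and the final lines, so the loading and conversion steps in the middle are assumptions (loading the original `bookcorpus` dataset and converting it to a pandas DataFrame), not the exact code from the README.

```python
import pandas as pd
from datasets import load_dataset

# Load the original BookCorpus dataset (assumed source; only the imports and
# the final lines of the script appear in the diff above).
dataset = load_dataset("bookcorpus", split="train")

# Convert to a pandas DataFrame and drop exact duplicate lines, matching the
# `df_filtered = df["text"].drop_duplicates()` shown in the second hunk header.
df = dataset.to_pandas()
df_filtered = df["text"].drop_duplicates()

# Write the deduplicated text out and reload it as a `datasets` dataset.
df_filtered.to_csv("bookcorpus_filtered.csv", index=False, header=False)
new_dataset = load_dataset("text", data_files={"train": "bookcorpus_filtered.csv"})
```

The README also points to https://github.com/google-research/deduplicate-text-datasets, which implements the suffix-array-based exact-substring deduplication from the cited paper. As a lighter-weight illustration of the paper's other idea, near-duplicate removal with MinHash, a sketch using the `datasketch` library (not that repository's method) could look like:

```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature over the set of lowercased word tokens."""
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf8"))
    return m

# Keep a document only if no previously kept document is a near-duplicate
# (estimated Jaccard similarity above the LSH threshold).
lsh = MinHashLSH(threshold=0.9, num_perm=128)
kept = []
for i, text in enumerate(df_filtered):  # df_filtered from the sketch above
    sig = minhash(text)
    if not lsh.query(sig):
        lsh.insert(str(i), sig)
        kept.append(text)
```

For corpus-scale exact-substring matching, the suffix-array tooling in the linked repository scales better than a pure-Python pass like this.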