rskuzma committed
Commit 417f7ee
1 Parent(s): e83b57a

Update README with links, citation

Files changed (1)
  1. README.md +37 -44
README.md CHANGED
@@ -3,14 +3,13 @@ task_categories:
  - text-generation
  language:
  - en
- pretty_name: SlimPajama 627B
- license: apache-2.0
+ pretty_name: SlimPajama-627B
  ---
  ## Getting Started

- SlimPajama-627B consists of 59166 jsonl files. It is a cleaned and deduplicated version of [Together Computer's RedPajama](https://github.com/togethercomputer/redpajama-data).
+ The dataset consists of 59166 jsonl files. It is a cleaned and deduplicated version of [Together Computer's RedPajama](https://github.com/togethercomputer/redpajama-data). Check out our [blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama) explaining our methods.

- You can download the dataset using [Hugging Face datasets](https://huggingface.co/docs/datasets/load_hub):
+ You can download the dataset using Hugging Face datasets:
  ```python
  from datasets import load_dataset
  ds = load_dataset("cerebras/SlimPajama-627B")
@@ -18,39 +17,32 @@ ds = load_dataset("cerebras/SlimPajama-627B")

  ## Background

- We release SlimPajama – the largest deduplicated, multi-corpora, open-source, dataset for training large language models. SlimPajama was created by cleaning and deduplicating the RedPajama dataset from Together Computer via MinHashLSH. By filtering out low quality data and duplicates, we were able to remove 49.6% of bytes, slimming down the dataset from 1210B to 627B tokens! We believe SlimPajama offers the highest quality and most compute efficient data to train on for runs less than 627B tokens. When upsampled, we expect SlimPajama to perform equal or better than RedPajama-1T when training at trillion token scale. This release was made possible with the support of our customer OpenTensor. We believe SlimPajama is currently the most attractive open-source dataset because it offers the highest data quality through strict deduplication and the inclusion of curated data sources. The dataset can easily be upsampled to increase the number of tokens and precisely control the amount of duplication present.
-
- Applying [MinHashLSH](http://infolab.stanford.edu/~ullman/mmds/book0n.pdf) deduplication to Trillion token datasets like RedPajama was not possible with off-the-shelf open-source code. We made several optimizations to existing solutions to produce infrastructure that can perform MinHashLSH deduplication on Trillion token datasets in a distributed, multi-threaded and memory efficient fashion. Today we are open-sourcing this infrastructure to enable the community to develop higher quality, deduplicated datasets in the future.
-
- ### Our observations of the original data set
-
- 1. RedPajama contains a portion of partially downloaded files.
- 2. Some (~2%) of the examples contain empty text. They were downloaded correctly, but do not have useful content that a model can be trained on.
- 3. There are many (~50%) duplicates in the data. The RedPajama team deduplicated some sources (Books, GitHub, Commoncrawl), but did not deduplicate all sources.
+ Today we are releasing SlimPajama – the largest extensively deduplicated, multi-corpora, open-source dataset for training large language models. SlimPajama was created by cleaning and deduplicating the 1.2T token RedPajama dataset from Together. By filtering out low quality data and duplicates, we were able to remove 49.6% of bytes, slimming down the dataset from 1210B to 627B tokens. We believe SlimPajama offers the highest quality and most compute efficient data to train on for runs up to 627B tokens. When upsampled, we expect SlimPajama to perform equal to or better than RedPajama-1T when training at trillion token scale.

+ In addition to the data, we are also releasing the tools we built to create SlimPajama. Applying [MinHashLSH](http://infolab.stanford.edu/~ullman/mmds/book0n.pdf) deduplication to trillion token datasets like RedPajama was not possible with off-the-shelf open-source code. We made several improvements to existing solutions to produce an infrastructure that can perform MinHashLSH deduplication on trillion token datasets in a distributed, multi-threaded, and memory efficient fashion. Today we are open-sourcing this infrastructure to enable the community to easily create higher quality, extensively deduplicated datasets in the future.

  ### Our contributions

- 1. SlimPajama 627B – the largest deduplicated, multi-corpora, open dataset for LLM training. We release it under the Apache 2.0 license.
- 2. Releasing validation and test sets, ~500M tokens each, which the training data has been decontaminated against.
- 3. Library of methods to replicate or pre-process from scratch other datasets. To the best of our knowledge these are the first open source tools to enable cleaning and MinHashLSH deduplication of text data at trillion token scale.
+ 1. SlimPajama 627B – the largest extensively deduplicated, multi-corpora, open dataset for LLM training. We release it under the Apache 2.0 license.
+ 2. Releasing validation and test sets, 500M tokens each, which have been decontaminated against the training data.
+ 3. Library of methods to replicate or pre-process from scratch other datasets. To the best of our knowledge these are the first open-source tools to enable cleaning and MinHashLSH deduplication of text data at trillion token scale.

- The full set of scripts to recreate the dataset from the original RedPajama dataset is available on the Cerebras github. The blog post detailing our cleaning and deduplication process can be found in the SlimPajama blog post.
+ The full set of scripts to recreate the dataset from the original RedPajama dataset will be available on the Cerebras github. A deeper explanation of our cleaning and deduplication process can be found in the [SlimPajama blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama).

  ## Dataset Summary

+ The [latest research](https://arxiv.org/abs/2306.01116) has shown that data quality is as important as data quantity. While training on more than one data epoch can be beneficial, this should be a choice rather than a side-effect of duplicates in the dataset. We decided to extensively deduplicate RedPajama to produce a dataset with higher information density. This means when using SlimPajama, you can achieve higher accuracy with the same compute budget when compared to other datasets.
+
  #### Comparison of dataset features
- | Dataset         | Tokens | Open Source | Curated Data Sources | Deduplicated |
- | --------------- | ------ | ----------- | -------------------- | ------------ |
- | SlimPajama      | 627B   | **Yes**     | **Yes**              | **Yes**      |
- | RedPajama       | 1.21T  | **Yes**     | **Yes**              | No           |
- | RefinedWeb-600B | 600B   | **Yes**     | No                   | **Yes**      |
- | RefinedWeb-5T   | 5T     | No          | No                   | **Yes**      |
- | LLaMA           | 1.4T   | No          | **Yes**              | **Yes**      |
- | MPT             | 1T     | No          | **Yes**              | No           |
- | MassiveText     | 1.4T   | No          | **Yes**              | **Yes**      |
+ | Data source     | Tokens   | Open Source | Curated Data Sources | Deduplication Level |
+ | --------------- | -------- | ----------- | -------------------- | ------------------- |
+ | SlimPajama      | **627B** | Yes         | Yes                  | **Extensive**       |
+ | RedPajama       | 1.21T    | Yes         | Yes                  | Partial             |
+ | RefinedWeb-600B | 600B     | Yes         | No                   | **Extensive**       |
+ | RefinedWeb-5T   | **5T**   | No          | No                   | **Extensive**       |
+ | LLaMA           | 1.4T     | No          | Yes                  | Partial             |
+ | MPT             | 1T       | No          | Yes                  | Unknown             |
+ | MassiveText     | 1.4T     | No          | Yes                  | **Extensive**       |

  #### Document low-length filter rates
@@ -66,18 +58,18 @@ The full set of scripts to recreate the dataset from the original RedPajama data
  | StackExchange | 0.32% |
  | Total         | 1.86% |

- #### Byte deduplication rates
+ #### Data source byte deduplication rates

- | Data source   | Dedupe byte prune rate |
- | ------------- | ---------------------- |
- | Commoncrawl   | 63.76%                 |
- | C4            | 6.85%                  |
- | GitHub        | 46.16%                 |
- | Books         | 2.01%                  |
- | ArXiv         | 0.06%                  |
- | Wikipedia     | 2.24%                  |
- | StackExchange | 0.20%                  |
- | Total         | 49.60%                 |
+ | Data source   | Byte deduplication rate |
+ | ------------- | ----------------------- |
+ | Commoncrawl   | 63.76%                  |
+ | C4            | 6.85%                   |
+ | GitHub        | 46.16%                  |
+ | Books         | 2.01%                   |
+ | ArXiv         | 0.06%                   |
+ | Wikipedia     | 2.24%                   |
+ | StackExchange | 0.20%                   |
+ | Total         | 49.60%                  |

  #### Data source proportions for SlimPajama and RedPajama

@@ -110,7 +102,7 @@ The dataset consists of jsonl files, with structure as follows:

  ### Dataset Creation

- SlimPajama was created by cleaning and deduplicating the [RedPajama dataset from Together Computer](https://github.com/togethercomputer/redpajama-data) via MinHashLSH. RedPajama is an open-source reproduction of the [LLaMa](https://arxiv.org/abs/2302.13971) data collection methodology.
+ SlimPajama was created by cleaning and deduplicating the [RedPajama dataset from Together Computer](https://github.com/togethercomputer/redpajama-data) via MinHashLSH. RedPajama is an open-source reproduction of the [LLaMA](https://arxiv.org/abs/2302.13971) data collection methodology.


  ### Source Data
@@ -121,12 +113,13 @@ The data sources composing RedPajama are explained in [its model card](https://h
  To cite SlimPajama, please use:

  ```
- @software{cerebras2023slimpajama,
-   author = {Cerebras Systems},
-   title = {SlimPajama: A 627B token cleaned and deduplicated version of RedPajama},
+ @misc{cerebras2023slimpajama,
+   author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan},
+   title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}},
    month = June,
    year = 2023,
-   url = {TODO: Blog URL}
+   howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}},
+   url = {https://huggingface.co/datasets/cerebras/SlimPajama-627B},
  }
  ```
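
The Getting Started snippet in the updated README above loads the full corpus, which means downloading all 59166 jsonl shards. A minimal sketch of a lighter-weight alternative, assuming the standard Hugging Face `datasets` streaming mode; the `train` split name and the `text` field are assumptions about the dataset layout, not taken from the card:

```python
# Minimal sketch (not from the README): stream SlimPajama instead of
# downloading every shard up front. The "train" split name and the "text"
# field are assumed from the jsonl structure described in the card.
from datasets import load_dataset

ds = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example["text"][:200])  # first 200 characters of each document
    if i == 2:                    # stop after a few records
        break
```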
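The Background section of the README describes MinHashLSH deduplication at trillion token scale; the distributed, multi-threaded pipeline itself is what the card says will be released on the Cerebras GitHub. Purely as an illustration of the underlying technique, and not of that pipeline, here is a toy single-machine sketch using the third-party `datasketch` library (an assumption; the card does not name any particular library):

```python
# Toy illustration of MinHashLSH near-duplicate detection on three tiny documents.
# This is not the SlimPajama pipeline; it only shows the core idea at small scale.
from datasketch import MinHash, MinHashLSH

def signature(text, num_perm=128):
    """Build a MinHash signature from whitespace tokens."""
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

docs = {
    "doc_a": "the quick brown fox jumps over the lazy dog",
    "doc_b": "the quick brown fox jumped over the lazy dog",  # near-duplicate of doc_a
    "doc_c": "an entirely different sentence about language model training data",
}

# Index all signatures; the threshold is the estimated Jaccard similarity above
# which two documents land in the same candidate bucket (0.5 for this toy example).
lsh = MinHashLSH(threshold=0.5, num_perm=128)
sigs = {name: signature(text) for name, text in docs.items()}
for name, sig in sigs.items():
    lsh.insert(name, sig)

for name, sig in sigs.items():
    # Each query returns the document itself plus any near-duplicate candidates.
    print(name, "candidates:", lsh.query(sig))
```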