BramVanroy committed
Commit 3b80002 • Parent(s): 8947c26
Update README.md
README.md CHANGED
@@ -11,12 +11,9 @@ pretty_name: Filtered CulturaX + Wikipedia for Dutch

# Filtered CulturaX + Wikipedia for Dutch

-Warning: there's currently a mismatch between configs. the smallest ones (up to 100M) are counted as white-space tokens, the other ones are counted as tokenized by gemma. In the future, all configs will be based on white-space tokens.
-
-
This is a combined and filtered version of [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) and [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia), only including Dutch. It is intended for the training of LLMs.

-Different configs are available based on the number of tokens (see a section below with an overview). This can be useful if you want to know exactly how many tokens you have. Great for using as a streaming dataset, too.
+Different configs are available based on the number of tokens (see a section below with an overview). This can be useful if you want to know exactly how many tokens you have. Great for using as a streaming dataset, too. Tokens are counted as white-space tokens, so depending on your tokenizer, you'll likely end up with more tokens than indicated here.

Every config also has a test set (for validation) of 1% the total size of the dataset, minimally 1 max. 64k samples (~26M tokens).

@@ -161,122 +158,6 @@ BAD_PHRASES_DOC_LEVEL = {

## Config details

-`10k`
-- ratio_wikipedia: 100.00%
-- total_num_tokens: 10,078
-- train_num_tokens: 9,957
-- test_num_tokens: 121
-- total_num_samples: 38
-- train_num_samples: 37
-- test_num_samples: 1
-
-`100k`
-- ratio_wikipedia: 100.00%
-- total_num_tokens: 100,099
-- train_num_tokens: 99,537
-- test_num_tokens: 562
-- total_num_samples: 303
-- train_num_samples: 300
-- test_num_samples: 3
-
-`1M`
-- ratio_wikipedia: 100.00%
-- total_num_tokens: 1,000,104
-- train_num_tokens: 987,432
-- test_num_tokens: 12,672
-- total_num_samples: 2,722
-- train_num_samples: 2,695
-- test_num_samples: 27
-
-`10M`
-- ratio_wikipedia: 100.00%
-- total_num_tokens: 10,000,692
-- train_num_tokens: 9,905,387
-- test_num_tokens: 95,305
-- total_num_samples: 25,641
-- train_num_samples: 25,385
-- test_num_samples: 256
-
-`100M`
-- ratio_wikipedia: 100.00%
-- total_num_tokens: 100,000,049
-- train_num_tokens: 99,022,731
-- test_num_tokens: 977,318
-- total_num_samples: 237,578
-- train_num_samples: 235,203
-- test_num_samples: 2,375
-
-`1B`
-- ratio_wikipedia: 82.38%
-- total_num_tokens: 1,000,000,003
-- train_num_tokens: 990,064,856
-- test_num_tokens: 9,935,147
-- total_num_samples: 2,869,233
-- train_num_samples: 2,840,541
-- test_num_samples: 28,692
-
-`5B`
-- ratio_wikipedia: 35.62%
-- total_num_tokens: 5,000,000,224
-- train_num_tokens: 4,974,586,006
-- test_num_tokens: 25,414,218
-- total_num_samples: 12,603,939
-- train_num_samples: 12,539,939
-- test_num_samples: 64,000
-
-`10B`
-- ratio_wikipedia: 26.86%
-- total_num_tokens: 10,000,000,658
-- train_num_tokens: 9,973,803,589
-- test_num_tokens: 26,197,069
-- total_num_samples: 24,628,921
-- train_num_samples: 24,564,921
-- test_num_samples: 64,000
-
-`15B`
-- ratio_wikipedia: 23.85%
-- total_num_tokens: 15,000,001,092
-- train_num_tokens: 14,973,654,717
-- test_num_tokens: 26,346,375
-- total_num_samples: 36,653,903
-- train_num_samples: 36,589,903
-- test_num_samples: 64,000
-
-`20B`
-- ratio_wikipedia: 22.32%
-- total_num_tokens: 20,000,000,303
-- train_num_tokens: 19,973,764,973
-- test_num_tokens: 26,235,330
-- total_num_samples: 48,678,883
-- train_num_samples: 48,614,883
-- test_num_samples: 64,000
-
-`25B`
-- ratio_wikipedia: 21.40%
-- total_num_tokens: 25,000,000,737
-- train_num_tokens: 24,973,747,815
-- test_num_tokens: 26,252,922
-- total_num_samples: 60,703,865
-- train_num_samples: 60,639,865
-- test_num_samples: 64,000
-
-`30B`
-- ratio_wikipedia: 20.79%
-- total_num_tokens: 30,000,000,034
-- train_num_tokens: 29,973,830,841
-- test_num_tokens: 26,169,193
-- total_num_samples: 72,728,846
-- train_num_samples: 72,664,846
-- test_num_samples: 64,000
-
-`35B`
-- ratio_wikipedia: 20.35%
-- total_num_tokens: 35,000,000,468
-- train_num_tokens: 34,973,480,399
-- test_num_tokens: 26,520,069
-- total_num_samples: 84,753,828
-- train_num_samples: 84,689,828
-- test_num_samples: 64,000

## License information

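The updated description says configs are selected by white-space token count and that the dataset works well in streaming mode. As a minimal sketch of how one of these configs could be loaded with the `datasets` library (the repository id and the `"text"` column name below are placeholder assumptions, not confirmed by this diff):

```python
from datasets import load_dataset

# Placeholder repository id and column name (assumptions for illustration);
# substitute the actual dataset id and config name from the dataset page.
ds = load_dataset(
    "BramVanroy/filtered-culturax-wikipedia-nl",  # hypothetical repo id
    name="10B",           # config chosen by its (white-space) token budget
    split="train",
    streaming=True,       # iterate over shards without a full download
)

# Stream a couple of samples to inspect them.
for i, sample in enumerate(ds):
    print(sample["text"][:200])  # assumed text column
    if i == 1:
        break
```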
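The README also states that every config has a test split of 1% of the dataset, with a minimum of 1 and a maximum of 64k samples, and that token counts are white-space based. A small sketch of how those figures could be reproduced (the exact rounding is an assumption, not the author's preprocessing code):

```python
def whitespace_token_count(text: str) -> int:
    # White-space tokenization: split on runs of whitespace and count the pieces.
    return len(text.split())

def test_split_size(total_num_samples: int) -> int:
    # 1% of the samples, clamped to at least 1 and at most 64,000.
    return min(max(1, int(0.01 * total_num_samples)), 64_000)

# Consistent with the config details listed in the removed section:
print(test_split_size(2_722))       # 1M config -> 27
print(test_split_size(12_603_939))  # 5B config -> 64000 (1% would be 126,039, capped)
```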