---
language:
- nl
size_categories:
- 10B<n<100B
task_categories:
- text-generation
- text2text-generation
pretty_name: Filtered CulturaX + Wikipedia for Dutch
---

# Filtered CulturaX + Wikipedia for Dutch

This is a combined and filtered version of [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) and [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia), containing only Dutch data. It is intended for the training of LLMs.

Different configs are available based on the number of tokens (see a section below with an overview). This can be useful if you want to know exactly how many tokens you have, and it also makes the dataset convenient to use in streaming mode. Tokens are counted as whitespace-separated tokens, so depending on your tokenizer, you will likely end up with more tokens than indicated here.
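
For illustration, here is a minimal sketch of loading one config in streaming mode and counting whitespace tokens the same way the config sizes were measured. The repo id, config name, and `text` column name are placeholders, not taken from this card.

```python
from datasets import load_dataset

# Placeholder repo id and config name: substitute the actual ones
# from the config overview below.
ds = load_dataset(
    "your-org/filtered-culturax-wikipedia-nl",  # hypothetical repo id
    name="10M",                                 # hypothetical config name
    split="train",
    streaming=True,
)

# Count whitespace tokens, i.e. len(text.split()) per document
# (the text column is assumed to be named "text").
num_tokens = sum(len(sample["text"].split()) for sample in ds)
print(num_tokens)
```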

Every config also has a test set (for validation) of 1% of the total size of the dataset, with a minimum of 1 and a maximum of 64k samples (~26M tokens).
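
The sizing rule above amounts to a simple clipping operation; a minimal sketch, using the 1%, 1, and 64k numbers from this card:

```python
def test_set_size(num_samples: int) -> int:
    """1% of the config size, clipped to at least 1 and at most 64k samples."""
    return max(1, min(64_000, num_samples // 100))
```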

Wikipedia and CulturaX were shuffled before merging, and the test set creation was shuffled as well. Priority is given to Wikipedia to prioritize knowledge content, so the smaller configs consist exclusively of Wikipedia, and the larger configs are augmented with CulturaX. Every config builds on the previous one, which means that every config contains all the data of the smaller ones and more. However, the train/test splits are not the same across configs, so the test set of one config may overlap with the training set of another. This is usually not a problem, but be aware that you should not train on one config's training set and evaluate on another config's test set.


## Filtering

While CulturaX has already undergone a lot of filtering, some additional filtering was done to improve the quality of the corpus. These filters are described below.

The baseline ratios (punctuation, uppercase, digits) were calculated on the SONAR-500 corpus (excluding WRPEA, WRPED, WRUEA, WRUED, and WRUEB).

**CulturaX**:
- removed documents that contain the text "rechten voorbehouden" or "rights reserved"
- removed documents whose URL contains "wikipedia.org" (because we include a cleaned version of Wikipedia ourselves)
- removed documents that contain a "bad word" (see the section below)
- removed documents that contain any non-Latin characters. The idea is that "knowledge"-based information (e.g., the original spelling of a name) is allowed when the data comes from Wikipedia, but not from any other web crawl, to avoid unsolicited noise (see the sketch after this list)
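
The exact implementation of the non-Latin filter is not given in this card; below is a minimal sketch using only the standard library, which flags a document as soon as it contains a letter from a non-Latin script.

```python
import unicodedata

def contains_non_latin(text: str) -> bool:
    """Return True if any alphabetic character is from a non-Latin script."""
    for char in text:
        if not char.isalpha():
            continue
        try:
            if "LATIN" not in unicodedata.name(char):
                return True
        except ValueError:  # character without a Unicode name
            return True
    return False
```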

**CulturaX + Wikipedia** (a sketch of these checks follows the list):
- removed documents where the ratio of punctuation marks to non-whitespace characters is higher than 0.2
- removed documents where the ratio of uppercase characters to non-whitespace characters is higher than 0.22
- removed documents where the ratio of digits to non-whitespace characters is higher than 0.16
- removed documents where the average token length is < 2 or > 20
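
A minimal sketch of these checks, assuming ASCII punctuation (`string.punctuation`) and whitespace tokenization; the punctuation set actually used may differ:

```python
import string

PUNCTUATION = set(string.punctuation)  # assumption: ASCII punctuation only

def passes_ratio_filters(text: str) -> bool:
    """Apply the ratio and average-token-length filters listed above."""
    non_ws = [char for char in text if not char.isspace()]
    if not non_ws:
        return False
    n = len(non_ws)
    punct_ratio = sum(char in PUNCTUATION for char in non_ws) / n
    upper_ratio = sum(char.isupper() for char in non_ws) / n
    digit_ratio = sum(char.isdigit() for char in non_ws) / n

    tokens = text.split()
    avg_token_len = sum(len(tok) for tok in tokens) / max(len(tokens), 1)

    return (
        punct_ratio <= 0.2
        and upper_ratio <= 0.22
        and digit_ratio <= 0.16
        and 2 <= avg_token_len <= 20
    )
```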

## Bad words

```python
BAD_PHRASES_DOC_LEVEL = {
    # https://en.wikipedia.org/wiki/Dutch_profanity
    "achterlijk",
    "debiel",
    "downie",
    "idioot",
    "kankerlijer",
    "klere",
    "kolere",
    "minkukel",
    "pestkop",
    "pleuris",
    "pleuritis",
    "teringlijer",
    "tyfuslijer",
    "gadver",
    "getver",
    "godver",
    "godskolere",
    "godverork",
    "graftak",
    "kopvod",
    "verdomme",
    "anaalgeneraal",
    "bitch",
    "dikzak",
    "flikker",
    "fok",
    "fuck",
    "hoer",
    "klootzak",
    "klote",
    "kreng",
    "kringspiermusketier",
    "kut",
    "lamzak",
    "lul",
    "manwijf",
    "matennaai",
    "neuken",
    "neuker",
    "ouwehoer",
    "reet",
    "reetkever",
    "reetridder",
    "rotzak",
    "schijt",
    "shit",
    "slet",
    "slijmbal",
    "slons",
    "sodemieter",
    "stoephoer",
    "swaffel",
    "teef",
    "trut",
    "tut",
    "zak",
    "uilskuiken",
    "zeik",
    "bamivreter",
    "bosneger",
    "neger",
    "fransoos",
    "geitenneuker",
    "kaaskop",
    "kakker",
    "koelie",
    "lijp",
    "medelander",
    "mocro",
    "mof",
    "nikker",
    "poepchinees",
    "roetmop",
    "spaghettivreter",
    "loempiavouwer",
    "spanjool",
    "spleetoog",
    "tatta",
    "tokkie",
    "zandneger",
    "zwartzak",
    "halvezool",
    "kenau",
    "klootviool",
    "knuppel",
    "koekert",
    "koekwaus",
    "oelewapper",
    "smeerlap",
    "sukkel",
    "sul",
    "wappie",
    "wijf",
    "zooi",
    # xxx (a.o. https://gitlab.com/yhavinga/c4nlpreproc/-/blob/master/clean/badwords_ennl.py?ref_type=heads)
    "xxx",
    "anal",
    "blowjob",
    "buttplug",
    "cock",
    "cunt",
    "geil",
    "sex",  # Standaardnederlands = seks, maybe we catch some porn or socialmedia sites with this misspelling
    "porn",
    # extra
    "nigger",
    "nigga",
    "hoerig",
    "klojo",
}
```
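
How these phrases are matched against a document is not specified here; a minimal sketch assuming case-insensitive, whole-word matching at the document level:

```python
import re

# Assumption: case-insensitive whole-word matching; the matching rules
# actually used to build the dataset may differ.
BAD_PHRASES_RE = re.compile(
    r"\b(?:" + "|".join(map(re.escape, sorted(BAD_PHRASES_DOC_LEVEL))) + r")\b",
    flags=re.IGNORECASE,
)

def contains_bad_phrase(text: str) -> bool:
    """Return True if the document contains any phrase from the set above."""
    return BAD_PHRASES_RE.search(text) is not None
```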

## Config details


## License information

- CulturaX: https://huggingface.co/datasets/uonlp/CulturaX#license-information
- Wikipedia: https://huggingface.co/datasets/wikimedia/wikipedia#licensing-information