---
language:
  - nl
size_categories:
  - 10B<n<100B
task_categories:
  - text-generation
  - text2text-generation
pretty_name: Filtered CulturaX + Wikipedia for Dutch
dataset_info:
  - config_name: 100M
    features:
      - name: text
        dtype: string
      - name: url
        dtype: string
      - name: source
        dtype: string
    splits:
      - name: train
        num_bytes: 738455828.5851797
        num_examples: 1018200
      - name: test
        num_bytes: 7458534.414820259
        num_examples: 10284
    download_size: 411183119
    dataset_size: 745914363
  - config_name: 100k
    features:
      - name: text
        dtype: string
      - name: url
        dtype: string
      - name: source
        dtype: string
    splits:
      - name: train
        num_bytes: 745955.3074739829
        num_examples: 1047
      - name: test
        num_bytes: 7124.692526017029
        num_examples: 10
    download_size: 366788
    dataset_size: 753080
  - config_name: 10B
    features:
      - name: text
        dtype: string
      - name: url
        dtype: string
      - name: source
        dtype: string
    splits:
      - name: train
        num_bytes: 66539945646.34457
        num_examples: 40176566
      - name: test
        num_bytes: 105996030.65543362
        num_examples: 64000
    download_size: 42132184504
    dataset_size: 66645941677
  - config_name: 10M
    features:
      - name: text
        dtype: string
      - name: url
        dtype: string
      - name: source
        dtype: string
    splits:
      - name: train
        num_bytes: 76734151.72157606
        num_examples: 139851
      - name: test
        num_bytes: 774743.2784239326
        num_examples: 1412
    download_size: 37995388
    dataset_size: 77508895
  - config_name: 10k
    features:
      - name: text
        dtype: string
      - name: url
        dtype: string
      - name: source
        dtype: string
    splits:
      - name: train
        num_bytes: 72048.30379746835
        num_examples: 78
      - name: test
        num_bytes: 5896
        num_examples: 1
    download_size: 47197
    dataset_size: 77944.30379746835
  - config_name: 1B
    features:
      - name: text
        dtype: string
      - name: url
        dtype: string
      - name: source
        dtype: string
    splits:
      - name: train
        num_bytes: 6797502496.392602
        num_examples: 5102360
      - name: test
        num_bytes: 68660322.60739774
        num_examples: 51538
    download_size: 4260450464
    dataset_size: 6866162819
  - config_name: 1M
    features:
      - name: text
        dtype: string
      - name: url
        dtype: string
      - name: source
        dtype: string
    splits:
      - name: train
        num_bytes: 7442665.619329753
        num_examples: 10694
      - name: test
        num_bytes: 75164.38067024625
        num_examples: 108
    download_size: 3845466
    dataset_size: 7517830
  - config_name: 5B
    features:
      - name: text
        dtype: string
      - name: url
        dtype: string
      - name: source
        dtype: string
    splits:
      - name: train
        num_bytes: 33351938314.309906
        num_examples: 20769009
      - name: test
        num_bytes: 102774477.69009268
        num_examples: 64000
    download_size: 21119808690
    dataset_size: 33454712792
configs:
  - config_name: 100M
    data_files:
      - split: train
        path: 100M/train-*
      - split: test
        path: 100M/test-*
  - config_name: 100k
    data_files:
      - split: train
        path: 100k/train-*
      - split: test
        path: 100k/test-*
  - config_name: 10B
    data_files:
      - split: train
        path: 10B/train-*
      - split: test
        path: 10B/test-*
  - config_name: 10M
    data_files:
      - split: train
        path: 10M/train-*
      - split: test
        path: 10M/test-*
  - config_name: 10k
    data_files:
      - split: train
        path: 10k/train-*
      - split: test
        path: 10k/test-*
  - config_name: 1B
    data_files:
      - split: train
        path: 1B/train-*
      - split: test
        path: 1B/test-*
  - config_name: 1M
    data_files:
      - split: train
        path: 1M/train-*
      - split: test
        path: 1M/test-*
  - config_name: 5B
    data_files:
      - split: train
        path: 5B/train-*
      - split: test
        path: 5B/test-*
---

Filtered CulturaX + Wikipedia for Dutch

This is a combined and filtered version of CulturaX and Wikipedia, including only Dutch data. It is intended for training LLMs.

Different configs are available based on the number of tokens (see the overview in a section below). This can be useful if you want to know exactly how many tokens you are training on, and it also works well as a streaming dataset. Tokens are counted as whitespace-separated tokens, so depending on your tokenizer, you will likely end up with more tokens than indicated here.
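
For instance, here is a minimal sketch of streaming one config with the Hugging Face datasets library. The repository id below is an assumption; substitute this dataset's actual id.

from datasets import load_dataset

# Stream the 10M-token config without downloading everything up front.
# NOTE: the repo id is a placeholder assumption, not confirmed by this card.
ds = load_dataset("BramVanroy/wikipedia_culturax_dutch", "10M", split="train", streaming=True)

for example in ds:
    # Whitespace token count, the same notion of "token" used for the config sizes.
    n_tokens = len(example["text"].split())
    print(n_tokens, example["source"], example["url"])
    break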

Every config also has a test set (for validation) of 1% of the total dataset size, with a minimum of 1 and a maximum of 64,000 samples (~26M tokens).
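
As a rough illustration of that sizing rule (the exact rounding used is an assumption):

def test_split_size(num_examples: int) -> int:
    # 1% of the data, clamped to at least 1 and at most 64,000 samples.
    return max(1, min(64_000, round(0.01 * num_examples)))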

Wikipedia and CulturaX were shuffled before merging, and the test set creation was shuffled as well. Wikipedia is given priority to favor knowledge-heavy content, so the smaller configs consist exclusively of Wikipedia, while the larger configs are augmented with CulturaX. Every config builds on the previous one: each config contains all the data of the smaller configs, plus more. However, their train/test splits are not the same, so the test set of one config may overlap with the training set of another. This is usually not a problem, but be aware that you should not train on one config's training set and evaluate on another config's test set.

Filtering

While CulturaX has already been filtered extensively, some additional filtering was done to improve the quality of the corpus. These filters are described below.

The baseline ratios (punctuation, uppercase, digits) were calculated on the SONAR-500 corpus (excluding WRPEA, WRPED, WRUEA, WRUED, and WRUEB).

CulturaX:

  • removed documents that contain the text "rechten voorbehouden" or "rights reserved"
  • removed documents whose URL contains "wikipedia.org" (because we include a cleaned version of Wikipedia ourselves)
  • removed documents that contain a "bad word" (see the section below)
  • removed documents that contain any non-Latin characters; the idea is that "knowledge"-based information (e.g. the original spelling of a name) is allowed when the data comes from Wikipedia, but not from any other web crawl, to avoid unsolicited noise (see the sketch after this list)
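
A minimal sketch of these CulturaX-specific checks, assuming lowercase substring matching for the fixed phrases and a character-level scan for non-Latin letters (the actual implementation may differ):

import unicodedata

RIGHTS_PHRASES = ("rechten voorbehouden", "rights reserved")

def has_non_latin_letter(text: str) -> bool:
    # Flag any alphabetic character whose Unicode name is not a LATIN codepoint.
    return any(ch.isalpha() and "LATIN" not in unicodedata.name(ch, "") for ch in text)

def keep_culturax_doc(text: str, url: str) -> bool:
    lowered = text.lower()
    if any(phrase in lowered for phrase in RIGHTS_PHRASES):
        return False
    if "wikipedia.org" in url:
        return False
    # The bad-word check is sketched after the BAD_PHRASES_DOC_LEVEL set below.
    if has_non_latin_letter(text):
        return False
    return True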

CulturaX + Wikipedia:

  • removed documents where the ratio of punctuation marks to non-whitespace characters is higher than 0.2
  • removed documents where the ratio of uppercase characters to non-whitespace characters is higher than 0.22
  • removed documents where the ratio of digits to non-whitespace characters is higher than 0.16
  • removed documents where the average token length is < 2 or > 20 (a sketch of these checks follows this list)
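
A sketch of how these checks might be implemented, assuming Unicode punctuation categories and whitespace tokenization (the exact character classes used are an assumption):

import unicodedata

def passes_ratio_filters(text: str) -> bool:
    non_ws = [ch for ch in text if not ch.isspace()]
    n = len(non_ws)
    if n == 0:
        return False
    punct = sum(unicodedata.category(ch).startswith("P") for ch in non_ws) / n
    upper = sum(ch.isupper() for ch in non_ws) / n
    digit = sum(ch.isdigit() for ch in non_ws) / n
    tokens = text.split()
    avg_len = sum(len(t) for t in tokens) / max(len(tokens), 1)
    return punct <= 0.2 and upper <= 0.22 and digit <= 0.16 and 2 <= avg_len <= 20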

Bad words

BAD_PHRASES_DOC_LEVEL = {
    # https://en.wikipedia.org/wiki/Dutch_profanity
    "achterlijk",
    "debiel",
    "downie",
    "idioot",
    "kankerlijer",
    "klere",
    "kolere",
    "minkukel",
    "pestkop",
    "pleuris",
    "pleuritis",
    "teringlijer",
    "tyfuslijer",
    "gadver",
    "getver",
    "godver",
    "godskolere",
    "godverork",
    "graftak",
    "kopvod",
    "verdomme",
    "anaalgeneraal",
    "bitch",
    "dikzak",
    "flikker",
    "fok",
    "fuck",
    "hoer",
    "klootzak",
    "klote",
    "kreng",
    "kringspiermusketier",
    "kut",
    "lamzak",
    "lul",
    "manwijf",
    "matennaai",
    "neuken",
    "neuker",
    "ouwehoer",
    "reet",
    "reetkever",
    "reetridder",
    "rotzak",
    "schijt",
    "shit",
    "slet",
    "slijmbal",
    "slons",
    "sodemieter",
    "stoephoer",
    "swaffel",
    "teef",
    "trut",
    "tut",
    "zak",
    "uilskuiken",
    "zeik",
    "bamivreter",
    "bosneger",
    "neger",
    "fransoos",
    "geitenneuker",
    "kaaskop",
    "kakker",
    "koelie",
    "lijp",
    "medelander",
    "mocro",
    "mof",
    "nikker",
    "poepchinees",
    "roetmop",
    "spaghettivreter",
    "loempiavouwer",
    "spanjool",
    "spleetoog",
    "tatta",
    "tokkie",
    "zandneger",
    "zwartzak",
    "halvezool",
    "kenau",
    "klootviool",
    "knuppel",
    "koekert",
    "koekwaus",
    "oelewapper",
    "smeerlap",
    "sukkel",
    "sul",
    "wappie",
    "wijf",
    "zooi",
    # xxx (a.o. https://gitlab.com/yhavinga/c4nlpreproc/-/blob/master/clean/badwords_ennl.py?ref_type=heads)
    "xxx",
    "anal",
    "blowjob",
    "buttplug",
    "cock",
    "cunt",
    "geil",
    "sex",  # Standaardnederlands = seks, maybe we catch some porn or socialmedia sites with this misspelling
    "porn",
    # extra
    "nigger",
    "nigga",
    "hoerig",
    "klojo",
}
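
How this set is applied is not shown here; below is a hedged sketch assuming case-insensitive whole-word matching against the set (the real matching logic may be stricter or looser):

import re

BAD_WORDS_RE = re.compile(
    r"\b(?:" + "|".join(re.escape(w) for w in sorted(BAD_PHRASES_DOC_LEVEL)) + r")\b",
    flags=re.IGNORECASE,
)

def contains_bad_word(text: str) -> bool:
    # True if any listed word occurs as a standalone word in the document.
    return BAD_WORDS_RE.search(text) is not None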

Config details

Train/test sample counts per config, as listed in the dataset metadata above:

  • 10k: 78 train / 1 test samples
  • 100k: 1,047 train / 10 test samples
  • 1M: 10,694 train / 108 test samples
  • 10M: 139,851 train / 1,412 test samples
  • 100M: 1,018,200 train / 10,284 test samples
  • 1B: 5,102,360 train / 51,538 test samples
  • 5B: 20,769,009 train / 64,000 test samples
  • 10B: 40,176,566 train / 64,000 test samples

License information

For CulturaX: https://huggingface.co/datasets/uonlp/CulturaX#license-information
For Wikipedia: https://huggingface.co/datasets/wikimedia/wikipedia#licensing-information