jealk committed 7b3bf46 (verified · 1 parent: 78442f4)

Updated readme; updated Dan Saattrup's existing readme to fit the modified repo.

Files changed: README.md (+106 −53)
---
license: cc-by-sa-4.0
dataset_info:
- config_name: default
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 203133024
    num_examples: 109486
  - name: validation
    num_bytes: 11424453
    num_examples: 6173
  - name: test
    num_bytes: 11808744
    num_examples: 6219
  download_size: 143418920
  dataset_size: 226366221
- config_name: sentences
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 202232488.28022403
    num_examples: 1572268
  - name: validation
    num_bytes: 11383118.592627235
    num_examples: 88647
  - name: test
    num_bytes: 11756845.828945814
    num_examples: 90769
  download_size: 149698561
  dataset_size: 225372452.70179707
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
- config_name: sentences
  data_files:
  - split: train
    path: sentences/train-*
  - split: validation
    path: sentences/validation-*
  - split: test
    path: sentences/test-*
---

# Dataset Card for "wiki40b-da-clean"


### Dataset Summary

This dataset is a slightly modified and filtered version of the [Wiki40b-da dataset](https://huggingface.co/datasets/alexandrainst/wiki40b-da/), which is in turn a fork of [the wiki40b dataset on the Hugging Face Hub](https://huggingface.co/datasets/wiki40b).

The dataset contains two subsets; the original columns "wikidata_id" and "version_id" are removed from both:
- "**text**": the filtered text of the Wikipedia paragraphs, with formatting removed (the markers `_START_ARTICLE_` and `_START_PARAGRAPH_` as well as `\n` are stripped).
- "**sentences**": the sentences from the "text" subset, filtered to only keep sentences of more than 5 and fewer than 100 words (the text is split after every punctuation mark (!, ?, .) that is followed by a space and a capital letter).

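The cleaning and splitting steps described above can be sketched roughly as follows. This is a hypothetical reimplementation of the stated heuristics, not the actual preprocessing script; the function names and the inclusion of the Danish capitals Æ, Ø and Å in the capital-letter class are assumptions.

```python
import re

def clean_text(raw: str) -> str:
    """Strip the wiki40b structure markers and collapse whitespace (incl. newlines)."""
    for marker in ("_START_ARTICLE_", "_START_PARAGRAPH_"):
        raw = raw.replace(marker, " ")
    return " ".join(raw.split())

def split_sentences(text: str, min_words: int = 5, max_words: int = 100) -> list[str]:
    """Split after ., ! or ? followed by a space and a capital letter, then keep
    only sentences with more than min_words and fewer than max_words words."""
    parts = re.split(r"(?<=[.!?]) (?=[A-ZÆØÅ])", text)
    return [s for s in parts if min_words < len(s.split()) < max_words]
```

Note that a lookbehind/lookahead split keeps the punctuation attached to the preceding sentence, which matches the sentence examples shown below.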
The dataset is curated so that the "text" config can be used for masked next token prediction (MNTP) and the "sentences" config for SimCSE when training encoder and decoder models.

The training, validation and test splits are the original ones.


### Languages

The dataset is available in Danish (`da`).

## Dataset

**text** (default)

An example from the "text" subset looks as follows.
```
{
    'text': "Tekstiler havde mange forskellige formål i oldtidens Ægypten, og blev brugt af (...)",
}
```

**sentences**

An example from the "sentences" subset looks as follows.
```
{
    'text': "Det tog tre måneder, før hørren kunne høstes.",
}
```

## Additional Information

### Dataset Curators

[Jesper Alkestrup](https://github.com/jalkestrup) from [The Tech Collective](https://thetechcollective.eu/) filtered and uploaded the dataset to the Hugging Face Hub.

Thanks to [Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [Alexandra Institute](https://alexandra.dk/) for uploading the original [Wiki40b-da dataset](https://huggingface.co/datasets/alexandrainst/wiki40b-da/).

### Licensing Information

The dataset is licensed under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).