rstodden committed
Commit d7683fd · 1 Parent(s): 1e8be53

Update README.md

Files changed (1):
  1. README.md +135 -40

README.md CHANGED
@@ -6,10 +6,12 @@ language:
  pretty_name: DEplain-web
  size_categories:
  - 1K<n<10K
  ---

  # Dataset Card for DEplain-web
-

  ## Table of Contents
  - [Dataset Description](#dataset-description)
@@ -35,119 +37,212 @@ size_categories:
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

- ## Dataset Description

  - **Repository:** [DEplain-web GitHub repository](https://github.com/rstodden/DEPlain)
  - **Paper:** Regina Stodden, Omar Momen, and Laura Kallmeyer. 2023. ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification"](https://arxiv.org/abs/2305.18939). In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics.
  - **Point of Contact:** [Regina Stodden]([email protected])

- ### Dataset Summary

  [DEplain-web](https://github.com/rstodden/DEPlain) [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) is a dataset for the evaluation of sentence and document simplification in German. All texts in this dataset were scraped from the web, and all documents are licensed under an open license. The simple-complex sentence pairs are manually aligned.
  This dataset only contains a manually aligned test set. For additional training and development data, please scrape more data from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification) and align the sentences of the documents automatically using, for example, [MASSalign](https://github.com/ghpaetzold/massalign) by [Paetzold et al. (2017)](https://www.aclweb.org/anthology/I17-3001/).
- ### Supported Tasks and Leaderboards

  The dataset supports the evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).

- ### Languages

  The texts in this dataset are written in German (de-de). The texts are in German plain language variants, e.g., plain language (Einfache Sprache) or easy-to-read language (Leichte Sprache).

- ### Domains
  The texts are from 6 different domains: fictional texts (literature and fairy tales), bible texts, health-related texts, texts for language learners, texts for accessibility, and public administration texts.

- ## Dataset Structure

- ### Data Access

  - The dataset is licensed with different open licenses, depending on the subcorpus.

- ### Data Instances
  - `document-simplification` configuration: an instance consists of an original document and one reference simplification.
  - `sentence-simplification` configuration: an instance consists of an original sentence and one manually aligned reference simplification.
  - `sentence-wise alignment` configuration: an instance consists of original and simplified documents and manually aligned sentence pairs. In contrast to the sentence-simplification configuration, this configuration also contains sentence pairs in which the original and the simplified sentences are exactly the same.
- ### Data Fields
-
- - `original`: an original text from the source datasets, written for people with German skills equal to CEFR level B1
- - `simplification`: a simplified text from the source datasets, written for people with German skills equal to CEFR level A2
- - more metadata is added to the dataset

- ### Data Splits

  DEplain-web contains a training set, a development set, and a test set.
  The dataset was split based on the license of the data. All manually aligned sentence pairs with an open license are part of the test set. The document-level test set also only contains the documents which are manually aligned. For the document-level train and dev sets, the documents which are not manually aligned or not publicly available are used. For the sentence level, the alignment pairs can be produced by automatic alignment (see [Stodden et al., 2023](https://arxiv.org/abs/2305.18939)).
- | | Train | Dev | Test | Total |
- | ----- | ------ | ------ | ---- | ----- |
- | Document Pairs | 481 | 122 | 147 | 756 |
- | Sentence Pairs | 1281 | 313 | 1846 | 3440 |

- Here, more information on simplification operations will follow soon.
 
- ## Dataset Creation

- ### Curation Rationale

  Current German text simplification datasets are limited in their size or are only automatically evaluated.
  We provide a manually aligned corpus to boost text simplification research in German.
- ### Source Data

- #### Initial Data Collection and Normalization
  The parallel documents were scraped from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification).
  The texts of the documents were manually simplified by professional translators.
  The data was split into sentences using a German model of spaCy.
  Two German native speakers manually aligned the sentence pairs using the text simplification annotation tool [TS-ANNO](https://github.com/rstodden/TS_annotation_tool) by [Stodden & Kallmeyer (2022)](https://aclanthology.org/2022.acl-demo.14/).

- #### Who are the source language producers?
  The texts of the documents were manually simplified by professional translators. For an extensive list of the scraped URLs, see Table 10 in [Stodden et al. (2023)](https://arxiv.org/abs/2305.18939).

- ### Annotations

- #### Annotation process

  The instructions given to the annotators are available [here](https://github.com/rstodden/TS_annotation_tool/tree/master/annotation_schema).

- #### Who are the annotators?

  The annotators are two German native speakers who are trained in linguistics. Both were compensated with at least the minimum wage of their country of residence.
  They are not part of any target group of text simplification.
- ### Personal and Sensitive Information

  No sensitive data.

- ## Considerations for Using the Data

- ### Social Impact of Dataset

  Many people do not understand texts due to their complexity. With automatic text simplification methods, these texts can be simplified for them. Our new training data can be beneficial for training a TS model.

- ### Discussion of Biases

  No bias is known.

- ### Other Known Limitations

  The dataset is provided under different open licenses, depending on the license of each website the data was scraped from. Please check the dataset license for additional information.

- ## Additional Information

- ### Dataset Curators

  DEplain-web was developed by researchers at the Heinrich-Heine-University Düsseldorf, Germany. This research is part of the PhD program "Online Participation", supported by the North Rhine-Westphalian (German) funding scheme "Forschungskolleg".
- ### Licensing Information

- The corpus includes the following licenses: CC-BY-SA-3, CC-BY-4, CC-BY-NC-ND-4, MIT.

- ### Citation Information

- [More Information Needed]

- This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r) and [Yacine Jernite](https://github.com/yjernite).
 
@@ -6,10 +6,12 @@ language:
  pretty_name: DEplain-web
  size_categories:
  - 1K<n<10K
+ task_ids:
+ - text-simplification
  ---

  # Dataset Card for DEplain-web
+ In the following, we provide a dataset card for DEplain-web (following Hugging Face's dataset card template).

  ## Table of Contents
  - [Dataset Description](#dataset-description)
 
@@ -35,119 +37,212 @@ size_categories:
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

+ ### Dataset Description

  - **Repository:** [DEplain-web GitHub repository](https://github.com/rstodden/DEPlain)
  - **Paper:** Regina Stodden, Omar Momen, and Laura Kallmeyer. 2023. ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification"](https://arxiv.org/abs/2305.18939). In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics.
  - **Point of Contact:** [Regina Stodden]([email protected])

+ #### Dataset Summary

  [DEplain-web](https://github.com/rstodden/DEPlain) [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) is a dataset for the evaluation of sentence and document simplification in German. All texts in this dataset were scraped from the web, and all documents are licensed under an open license. The simple-complex sentence pairs are manually aligned.
  This dataset only contains a manually aligned test set. For additional training and development data, please scrape more data from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification) and align the sentences of the documents automatically using, for example, [MASSalign](https://github.com/ghpaetzold/massalign) by [Paetzold et al. (2017)](https://www.aclweb.org/anthology/I17-3001/).
+ #### Supported Tasks and Leaderboards

  The dataset supports the evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
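SARI compares a system output against both the source and reference simplifications, scoring added, kept, and deleted words separately. As a self-contained illustration only (my own sketch: unigrams, a single reference; not the official SARI implementation, use the linked metric for real evaluation):

```python
# Simplified illustration of SARI's add/keep/delete decomposition,
# using unigram sets and a single reference. The real metric
# (Xu et al., 2016) averages n-gram scores for n = 1..4 over
# multiple references.

def _f1(found: set, wanted: set) -> float:
    """Harmonic mean of precision and recall of `found` against `wanted`."""
    if not found and not wanted:
        return 1.0  # correctly did nothing
    inter = len(found & wanted)
    p = inter / len(found) if found else 0.0
    r = inter / len(wanted) if wanted else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def sari_unigram(source: str, prediction: str, reference: str) -> float:
    s, p, r = (set(t.split()) for t in (source, prediction, reference))
    add_score = _f1(p - s, r - s)   # words newly added by the system
    keep_score = _f1(p & s, r & s)  # words kept from the source
    deleted_pred, deleted_ref = s - p, s - r
    # SARI scores deletion by precision only.
    del_score = (len(deleted_pred & deleted_ref) / len(deleted_pred)
                 if deleted_pred else 1.0)
    return (add_score + keep_score + del_score) / 3

score = sari_unigram("Der schnelle braune Fuchs",
                     "Der schnelle Fuchs",
                     "Der schnelle Fuchs")
print(round(score, 2))  # 1.0 when the prediction matches the reference
```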
+ #### Languages

  The texts in this dataset are written in German (de-de). The texts are in German plain language variants, e.g., plain language (Einfache Sprache) or easy-to-read language (Leichte Sprache).

+ #### Domains
  The texts are from 6 different domains: fictional texts (literature and fairy tales), bible texts, health-related texts, texts for language learners, texts for accessibility, and public administration texts.

+ ### Dataset Structure

+ #### Data Access

  - The dataset is licensed with different open licenses, depending on the subcorpus.

+ #### Data Instances
  - `document-simplification` configuration: an instance consists of an original document and one reference simplification.
  - `sentence-simplification` configuration: an instance consists of an original sentence and one manually aligned reference simplification.
  - `sentence-wise alignment` configuration: an instance consists of original and simplified documents and manually aligned sentence pairs. In contrast to the sentence-simplification configuration, this configuration also contains sentence pairs in which the original and the simplified sentences are exactly the same.
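To make the difference between the last two configurations concrete, a sketch of deriving sentence-simplification instances from sentence-wise alignment instances by dropping identical pairs (the two rows below are invented toy examples, not actual corpus content):

```python
# Toy stand-ins for sentence-wise alignment rows; real rows also carry
# the metadata listed under "Data Fields".
alignment_rows = [
    {"original": "Das Haus ist groß.",
     "simplification": "Das Haus ist groß."},           # identical pair
    {"original": "Er erörterte die Problematik.",
     "simplification": "Er sprach über das Problem."},  # simplified pair
]

# The sentence-simplification configuration excludes identical pairs.
simplification_rows = [row for row in alignment_rows
                       if row["original"] != row["simplification"]]
print(len(simplification_rows))  # 1
```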
+ #### Data Fields
+
+ | data field | data field description |
+ |---|---|
+ | `original` | an original text from the source dataset |
+ | `simplification` | a simplified text from the source dataset |
+ | `pair_id` | document pair id |
+ | `complex_document_id` (on doc-level) | id of the complex document (-1) |
+ | `simple_document_id` (on doc-level) | id of the simple document (-0) |
+ | `original_id` (on sent-level) | id of the sentence(s) of the original text |
+ | `simplification_id` (on sent-level) | id of the sentence(s) of the simplified text |
+ | `domain` | text domain of the document pair |
+ | `corpus` | subcorpus name |
+ | `simple_url` | origin URL of the simplified document |
+ | `complex_url` | origin URL of the original document |
+ | `simple_level` or `language_level_simple` | required CEFR language level to understand the simplified document |
+ | `complex_level` or `language_level_original` | required CEFR language level to understand the original document |
+ | `simple_location_html` | location on hard disk where the HTML file of the simple document is stored |
+ | `complex_location_html` | location on hard disk where the HTML file of the original document is stored |
+ | `simple_location_txt` | location on hard disk where the content extracted from the HTML file of the simple document is stored |
+ | `complex_location_txt` | location on hard disk where the content extracted from the HTML file of the original document is stored |
+ | `alignment_location` | location on hard disk where the alignment is stored |
+ | `simple_author` | author (or copyright owner) of the simplified document |
+ | `complex_author` | author (or copyright owner) of the original document |
+ | `simple_title` | title of the simplified document |
+ | `complex_title` | title of the original document |
+ | `license` | license of the data |
+ | `last_access` or `access_date` | date of data origin, or the date when the HTML files were downloaded |
+ | `rater` | id of the rater who annotated the sentence pair |
+ | `alignment` | type of alignment, e.g., 1:1, 1:n, n:1, or n:m |
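The `alignment` label can be derived from how many original and simplified sentences a pair covers; a small sketch (this helper is illustrative, not part of the dataset's tooling):

```python
# Sketch: labelling an aligned pair as 1:1, 1:n, n:1, or n:m from the
# number of original and simplified sentences it covers, mirroring the
# `alignment` data field.

def alignment_type(n_original: int, n_simple: int) -> str:
    left = "1" if n_original == 1 else "n"
    right = "1" if n_simple == 1 else ("m" if left == "n" else "n")
    return f"{left}:{right}"

print(alignment_type(1, 1))  # 1:1
print(alignment_type(1, 3))  # 1:n  (e.g., a sentence split)
print(alignment_type(2, 1))  # n:1  (e.g., a sentence fusion)
print(alignment_type(2, 3))  # n:m
```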
+ #### Data Splits
  DEplain-web contains a training set, a development set, and a test set.
  The dataset was split based on the license of the data. All manually aligned sentence pairs with an open license are part of the test set. The document-level test set also only contains the documents which are manually aligned. For the document-level train and dev sets, the documents which are not manually aligned or not publicly available are used. For the sentence level, the alignment pairs can be produced by automatic alignment (see [Stodden et al., 2023](https://arxiv.org/abs/2305.18939)).

+ Document-level:
+
+ | | Train | Dev | Test | Total |
+ |-------------------------|-------|-----|------|-------|
+ | DEplain-web-manual-open | - | - | 147 | 147 |
+ | DEplain-web-auto-open | 199 | 50 | - | 249 |
+ | DEplain-web-auto-closed | 288 | 72 | - | 360 |
+ | in total | 487 | 122 | 147 | 756 |
+
+ Sentence-level:
+
+ | | Train | Dev | Test | Total |
+ |-------------------------|-------|-----|------|-------|
+ | DEplain-web-manual-open | - | - | 1846 | 1846 |
+ | DEplain-web-auto-open | 514 | 138 | - | 652 |
+ | DEplain-web-auto-closed | 767 | 175 | - | 942 |
+ | in total | 1281 | 313 | 1846 | 3440 |
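As a sanity check, the sentence-level split sizes are internally consistent; a small sketch recomputing the row and column totals:

```python
# Sentence-level split sizes of DEplain-web, as listed in the table above.
splits = {
    "DEplain-web-manual-open": {"train": 0, "dev": 0, "test": 1846},
    "DEplain-web-auto-open":   {"train": 514, "dev": 138, "test": 0},
    "DEplain-web-auto-closed": {"train": 767, "dev": 175, "test": 0},
}

# Per-subcorpus totals, per-split totals, and the grand total.
row_totals = {name: sum(v.values()) for name, v in splits.items()}
col_totals = {split: sum(v[split] for v in splits.values())
              for split in ("train", "dev", "test")}
grand_total = sum(row_totals.values())

print(row_totals)   # manual-open: 1846, auto-open: 652, auto-closed: 942
print(col_totals)   # {'train': 1281, 'dev': 313, 'test': 1846}
print(grand_total)  # 3440
```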
+ | **subcorpus** | **simple** | **complex** | **domain** | **description** | **\# doc.** |
+ |---|---|---|---|---|---|
+ | **EinfacheBücher** | Plain German | Standard German / Old German | fiction | Books in plain German | 15 |
+ | **EinfacheBücherPassanten** | Plain German | Standard German / Old German | fiction | Books in plain German | 4 |
+ | **ApothekenUmschau** | Plain German | Standard German | health | Health magazine in which diseases are explained in plain German | 71 |
+ | **BZFE** | Plain German | Standard German | health | Information of the German Federal Agency for Food on good nutrition | 18 |
+ | **Alumniportal** | Plain German | Plain German | language learner | Texts related to Germany and German traditions, written for language learners | 137 |
+ | **Lebenshilfe** | Easy-to-read German | Standard German | accessibility | | 49 |
+ | **Bibel** | Easy-to-read German | Standard German | bible | Bible texts in easy-to-read German | 221 |
+ | **NDR-Märchen** | Easy-to-read German | Standard German / Old German | fiction | Fairytales in easy-to-read German | 10 |
+ | **EinfachTeilhaben** | Easy-to-read German | Standard German | accessibility | | 67 |
+ | **StadtHamburg** | Easy-to-read German | Standard German | public authority | Information of and regarding the German city Hamburg | 79 |
+ | **StadtKöln** | Easy-to-read German | Standard German | public authority | Information of and regarding the German city Cologne | 85 |
+
+ : Documents per Domain in DEplain-web.
+ | domain | avg. | std. | interpretation | # sents | # docs |
+ |---|---|---|---|---|---|
+ | bible | 0.7011 | 0.31 | moderate | 6903 | 3 |
+ | fiction | 0.6131 | 0.39 | moderate | 23289 | 3 |
+ | health | 0.5147 | 0.28 | weak | 13736 | 6 |
+ | language learner | 0.9149 | 0.17 | almost perfect | 18493 | 65 |
+ | all | 0.8505 | 0.23 | strong | 87645 | 87 |
+
+ : Inter-Annotator Agreement per Domain in DEplain-web-manual.
+
+ | operation | count | percentage |
+ |---|---|---|
+ | rephrase | 863 | 11.73 |
+ | deletion | 3050 | 41.47 |
+ | addition | 1572 | 21.37 |
+ | identical | 887 | 12.06 |
+ | fusion | 110 | 1.50 |
+ | merge | 77 | 1.05 |
+ | split | 796 | 10.82 |
+ | in total | 7355 | 100.00 |
+
+ : Information regarding Simplification Operations in DEplain-web-manual.
+ ### Dataset Creation
+
+ #### Curation Rationale

  Current German text simplification datasets are limited in their size or are only automatically evaluated.
  We provide a manually aligned corpus to boost text simplification research in German.

+ #### Source Data

+ ##### Initial Data Collection and Normalization
  The parallel documents were scraped from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification).
  The texts of the documents were manually simplified by professional translators.
  The data was split into sentences using a German model of spaCy.
  Two German native speakers manually aligned the sentence pairs using the text simplification annotation tool [TS-ANNO](https://github.com/rstodden/TS_annotation_tool) by [Stodden & Kallmeyer (2022)](https://aclanthology.org/2022.acl-demo.14/).

+ ##### Who are the source language producers?
  The texts of the documents were manually simplified by professional translators. For an extensive list of the scraped URLs, see Table 10 in [Stodden et al. (2023)](https://arxiv.org/abs/2305.18939).
+ #### Annotations

+ ##### Annotation process

  The instructions given to the annotators are available [here](https://github.com/rstodden/TS_annotation_tool/tree/master/annotation_schema).

+ ##### Who are the annotators?

  The annotators are two German native speakers who are trained in linguistics. Both were compensated with at least the minimum wage of their country of residence.
  They are not part of any target group of text simplification.

+ #### Personal and Sensitive Information

  No sensitive data.
+ ### Considerations for Using the Data

+ #### Social Impact of Dataset

  Many people do not understand texts due to their complexity. With automatic text simplification methods, these texts can be simplified for them. Our new training data can be beneficial for training a TS model.

+ #### Discussion of Biases

  No bias is known.

+ #### Other Known Limitations

  The dataset is provided under different open licenses, depending on the license of each website the data was scraped from. Please check the dataset license for additional information.

+ ### Additional Information

+ #### Dataset Curators

  DEplain-web was developed by researchers at the Heinrich-Heine-University Düsseldorf, Germany. This research is part of the PhD program "Online Participation", supported by the North Rhine-Westphalian (German) funding scheme "Forschungskolleg".
+ #### Licensing Information
+
+ The corpus includes the following licenses: CC-BY-SA-3, CC-BY-4, and CC-BY-NC-ND-4. The corpus also includes a "save_use_share" license; for these documents, the data provider permitted us to share the data for research purposes.

+ #### Citation Information

+ ```
+ @inproceedings{stodden-etal-2023-deplain,
+     title = "{DE}plain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification",
+     author = "Stodden, Regina and
+       Momen, Omar and
+       Kallmeyer, Laura",
+     booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+     month = jul,
+     year = "2023",
+     address = "Toronto, Canada",
+     publisher = "Association for Computational Linguistics",
+     note = "preprint: https://arxiv.org/abs/2305.18939",
+ }
+ ```

+ This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r) and [Yacine Jernite](https://github.com/yjernite).