---
license: cc-by-nc-sa-4.0
language:
- en
- de
pretty_name: argmicro
size_categories:
- n<1K
---
# Dataset Card for "ArgMicro"

### Dataset Summary

The arg-microtexts corpus features 112 short argumentative texts. All texts were originally written in German and have been professionally translated to English.
Based on Freeman’s theory of the macro-structure of arguments ([1991](https://api.pageplace.de/preview/DT0400.9783110875843_A19822678/preview-9783110875843_A19822678.pdf); [2011](https://link.springer.com/book/10.1007/978-94-007-0357-5)) and Toulmin’s ([2003](https://www.cambridge.org/core/books/uses-of-argument/26CF801BC12004587B66778297D5567C)) diagramming techniques, ArgMicro consists of `pro` (proponent) and `opp` (opponent) components and six types of relations: `seg` (segment), `add` (addition), `exa` (example), `reb` (rebut), `sup` (support), and `und` (undercut). It also introduces segment-based spans, which may contain non-argumentative parts, so that the whole text is covered.

### Supported Tasks and Leaderboards

- **Tasks:** Structure Prediction, Relation Identification, Central Claim Identification, Role Classification, Function Classification
- **Leaderboards:** \[More Information Needed\]

### Languages

German, with English translation (by a professional translator).

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 2.89 MB

```
{
  "id": "micro_b001",
  "topic_id": "waste_separation",
  "stance": 1,
  "text": "Yes, it's annoying and cumbersome to separate your rubbish properly all the time. Three different bin bags stink away in the kitchen and have to be sorted into different wheelie bins. But still Germany produces way too much rubbish and too many resources are lost when what actually should be separated and recycled is burnt. We Berliners should take the chance and become pioneers in waste separation!",
  "edus": {
    "id": ["e1", "e2", "e3", "e4", "e5"],
    "start": [0, 82, 184, 232, 326],
    "end": [81, 183, 231, 325, 402]
  },
  "adus": {
    "id": ["a1", "a2", "a3", "a4", "a5"],
    "type": [0, 0, 1, 1, 1]
  },
  "edges": {
    "id": ["c1", "c10", "c2", "c3", "c4", "c6", "c7", "c8", "c9"],
    "src": ["a1", "e5", "a2", "a3", "a4", "e1", "e2", "e3", "e4"],
    "trg": ["a5", "a5", "a1", "c1", "c3", "a1", "a2", "a3", "a4"],
    "type": [4, 0, 1, 5, 3, 0, 0, 0, 0]
  }
}
```
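The `start`/`end` values are character offsets into `text` (inclusive start, exclusive end), so each EDU's surface string can be recovered by slicing. A minimal sketch using the example instance above:

```python
# Recover EDU surface strings from character offsets
# (inclusive start, exclusive end) of the example instance.
instance = {
    "text": (
        "Yes, it's annoying and cumbersome to separate your rubbish "
        "properly all the time. Three different bin bags stink away in "
        "the kitchen and have to be sorted into different wheelie bins. "
        "But still Germany produces way too much rubbish and too many "
        "resources are lost when what actually should be separated and "
        "recycled is burnt. We Berliners should take the chance and "
        "become pioneers in waste separation!"
    ),
    "edus": {
        "id": ["e1", "e2", "e3", "e4", "e5"],
        "start": [0, 82, 184, 232, 326],
        "end": [81, 183, 231, 325, 402],
    },
}

def edu_texts(inst):
    """Map each EDU id to the text span it covers."""
    edus = inst["edus"]
    return {
        edu_id: inst["text"][s:e]
        for edu_id, s, e in zip(edus["id"], edus["start"], edus["end"])
    }

spans = edu_texts(instance)
print(spans["e1"])  # -> Yes, it's annoying and cumbersome to separate your rubbish properly all the time.
```

Note that EDU boundaries do not always coincide with sentence boundaries: here `e3` and `e4` split one sentence into two units.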

### Data Fields

- `id`: the instance `id` of the document, a `string` feature
- `topic_id`: the topic of the document, a `string` feature (see the [list of topics](https://huggingface.co/datasets/DFKI-SLT/argmicro/blob/main/topics_triggers.md))
- `stance`: the index of the stance on the topic, an `int` feature (see the [stance labels](https://huggingface.co/datasets/DFKI-SLT/argmicro/blob/main/argmicro.py#L35))
- `text`: the text content of the document, a `string` feature
- `edus`: elementary discourse units, i.e. segmented spans of text (see the authors' further [explanation](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd#L17-L20))
  - `id`: the instance `id`s of the EDUs, a list of `string` features
  - `start`: the inclusive start indices of the spans, a list of `int` features
  - `end`: the exclusive end indices of the spans, a list of `int` features
- `adus`: argumentative discourse units, i.e. argumentatively relevant claims built on EDUs (see the authors' further [explanation](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd#L22-L28))
  - `id`: the instance `id`s of the ADUs, a list of `string` features
  - `type`: the indices indicating the ADU type, a list of `int` features (see the [type list](https://huggingface.co/datasets/DFKI-SLT/argmicro/blob/main/argmicro.py#L36))
- `edges`: the relations between `adus`, or between `adus` and other `edges` (see the authors' further [explanation](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd#L39-L47))
  - `id`: the instance `id`s of the edges, a list of `string` features
  - `src`: the `id`s of the `adus` that act as the source element of each relation, a list of `string` features
  - `trg`: the `id`s of the `adus` or `edges` that act as the target element of each relation, a list of `string` features
  - `type`: the indices indicating the edge type, a list of `int` features (see the [type list](https://huggingface.co/datasets/DFKI-SLT/argmicro/blob/main/argmicro.py#L37))
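In these argumentation graphs, every non-central ADU is the source of some outgoing relation, so one simple heuristic for the Central Claim Identification task listed above is to look for the ADU that never occurs in `src`. A minimal sketch using the edges of the example instance:

```python
# Find the central claim: the ADU that is never the source of any edge.
# Ids are taken from the example instance above.
adu_ids = ["a1", "a2", "a3", "a4", "a5"]
edges = {
    "src": ["a1", "e5", "a2", "a3", "a4", "e1", "e2", "e3", "e4"],
    "trg": ["a5", "a5", "a1", "c1", "c3", "a1", "a2", "a3", "a4"],
}

def central_claims(adu_ids, edges):
    """Return the ADUs that never act as the source of a relation."""
    sources = set(edges["src"])
    return [adu for adu in adu_ids if adu not in sources]

print(central_claims(adu_ids, edges))  # -> ['a5']
```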

### Data Splits

|                                        | train |
| -------------------------------------- | ----: |
| No. of instances                       |   112 |
| No. of sentences/instance (on average) |   5.1 |

### Data Labels

#### Stance

| Stance      | Count | Percentage |
| ----------- | ----: | ---------: |
| `pro`       |    46 |     41.1 % |
| `con`       |    42 |     37.5 % |
| `unclear`   |     1 |      0.9 % |
| `UNDEFINED` |    23 |     20.5 % |

- `pro`: yes, in favour of the proposed issue
- `con`: no, against the proposed issue
- `unclear`: the position of the author is unclear
- `UNDEFINED`: no stance label assigned

See the [stance types](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd#L74-L83).

#### ADUs

| ADUs  | Count | Percentage |
| ----- | ----: | ---------: |
| `pro` |   451 |     78.3 % |
| `opp` |   125 |     21.7 % |

- `pro`: proponent, who presents and defends the central claim
- `opp`: opponent, who critically questions the proponent in a regimented fashion (Peldszus, 2015, p. 5)

#### Relations

| Relations      | Count | Percentage |
| -------------- | ----: | ---------: |
| support: `sup` |   281 |     55.2 % |
| support: `exa` |     9 |      1.8 % |
| attack: `und`  |    65 |     12.8 % |
| attack: `reb`  |   110 |     21.6 % |
| other: `joint` |    44 |      8.6 % |

- `sup`: support (ADU->ADU)
- `exa`: support by example (ADU->ADU)
- `add`: additional source, for combined/convergent arguments with multiple premises, i.e. linked, convergent or serial support (ADU->ADU)
- `reb`: rebutting attack (ADU->ADU): "targeting another node and thereby challenging its acceptability"
- `und`: undercutting attack (ADU->Edge): "targeting an edge and thereby challenging the acceptability of the inference from the source to the target node" ([P&S, 2016](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd); [EN annotation guidelines](https://www.ling.uni-potsdam.de/~stede/Papers/ArgGuidelinesEnglish.pdf))
- `joint`: combines text segments when one of them does not express a complete proposition on its own, or when the author has divided a clause/sentence into parts using punctuation

See other corpus statistics in Peldszus (2015), Section 5.
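Because `und` edges point at other edges, and `add` sources attach to existing relations, the `trg` field mixes node ids (`a*`, `e*`) and edge ids (`c*`). A minimal sketch that separates the two cases, using the edges of the example instance:

```python
# Split relations by target kind: undercuts and additional sources
# attach to other relations (edge ids), the rest attach to nodes.
edges = {
    "id":  ["c1", "c10", "c2", "c3", "c4", "c6", "c7", "c8", "c9"],
    "trg": ["a5", "a5", "a1", "c1", "c3", "a1", "a2", "a3", "a4"],
}
edge_ids = set(edges["id"])

node_targeted = [i for i, t in zip(edges["id"], edges["trg"]) if t not in edge_ids]
edge_targeted = [i for i, t in zip(edges["id"], edges["trg"]) if t in edge_ids]

print(edge_targeted)  # -> ['c3', 'c4']
```

Here `c3` is an undercut of the relation `c1`, and `c4` contributes an additional premise to `c3`.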

## Dataset Creation

This section is composed of information and excerpts provided in Peldszus ([2015](https://peldszus.github.io/files/eca2015-preprint.pdf)).

### Curation Rationale

"Argumentation can, for theoretical purposes, be studied on the basis of carefully constructed examples that illustrate specific phenomena...\[We\] address this need by making a resource publicly available that is designed to fill a particular gap." (pp. 2-3)

### Source Data

23 texts were written by the authors as a “proof of concept” for the idea. These texts have also been used as examples in teaching and for testing argumentation analysis with students.

90 texts were collected in a controlled text generation experiment, in which normal competent language users wrote short texts of controlled linguistic and rhetorical complexity.

#### Initial Data Collection and Normalization

"Our contribution is a collection of 112 “microtexts” that have been written in response to trigger questions, mostly in the form of “Should one do X”. The texts are short but at the same time “complete” in that they provide a standpoint and a justification, by necessity in a fairly dense form." (p. 2)

"The probands were asked to first gather a list with the pros and cons of the trigger question, then take stance for one side and argue for it on the basis of their reflection in a short argumentative text. Each text was to fulfill three requirements: It should be about five segments long; all segments should be argumentatively relevant, either formulating the main claim of the text, supporting the main claim or another segment, or attacking the main claim or another segment. Also, the probands were asked that at least one possible objection to the claim should be considered in the text. Finally, the text should be written in such a way that it would be understandable without having its trigger question as a headline." (p. 3)

"\[A\]ll texts have been corrected for spelling and grammar errors...Their segmentation was corrected when necessary...some modifications in the remaining segments to maintain text coherence, which we made as minimal as possible." (p. 4)

"We thus constrained the translation to preserve the segmentation of the text on the one hand (effectively ruling out phrasal translations of clause-type segments) and to preserve its linearization on the other hand (disallowing changes to the order of appearance of arguments)." (p. 5)

#### Who are the source language producers?

The texts with ids b001-b064 and k001-k031 were collected in a controlled text generation experiment from 23 subjects discussing various controversial issues from a fixed list. All probands were native speakers of German, of varying age, education and profession.

The texts with ids d01-d23 have been written by Andreas Peldszus, the author.

### Annotations

#### Annotation process

All texts are annotated with argumentation structures, following the scheme proposed in Peldszus & Stede ([2013](https://www.ling.uni-potsdam.de/~peldszus/ijcini2013-preprint.pdf)). For inter-annotator agreement scores, see Peldszus (2014). The (German) annotation guidelines are published in Peldszus, Warzecha, Stede (2016). See the annotation guidelines ([de](https://www.ling.uni-potsdam.de/~stede/Papers/ArgGuidelinesGerman.pdf), [en](https://www.ling.uni-potsdam.de/~stede/Papers/ArgGuidelinesEnglish.pdf)) and the [annotation scheme](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd).

"\[T\]he markup of argumentation structures in the full corpus was done by one expert annotator. All annotations have been checked, controversial instances have been discussed in a reconciliation phase by two or more expert annotators...The annotation of the corpus was originally done manually on paper. In follow-up annotations, we used GraPAT ([Sonntag & Stede, 2014](http://www.lrec-conf.org/proceedings/lrec2014/pdf/824_Paper.pdf))." (p. 7)

#### Who are the annotators?

\[More Information Needed\]

### Personal and Sensitive Information

\[More Information Needed\]

## Considerations for Using the Data

### Social Impact of Dataset

"Automatic argumentation recognition has many possible applications, including improving document summarization (Teufel and Moens, 2002), retrieval capabilities of legal databases (Palau and Moens, 2011), opinion mining for commercial purposes, or also as a tool for assessing public opinion on political questions.

"...\[W\]e suggest there is yet one resource missing that could facilitate the development of automatic argumentation recognition systems: Short texts with explicit argumentation, little argumentatively irrelevant material, less rhetorical gimmicks (or even deception), in clean written language." (Peldszus, [2014](https://aclanthology.org/W14-2112.pdf), p. 88)

### Discussion of Biases

\[More Information Needed\]

### Other Known Limitations

\[More Information Needed\]

## Additional Information

### Dataset Curators

\[More Information Needed\]

### Licensing Information

The arg-microtexts corpus is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (see the [license agreement](https://creativecommons.org/licenses/by-nc-sa/4.0/)).

### Citation Information

```
@inproceedings{peldszus2015annotated,
  title={An annotated corpus of argumentative microtexts},
  author={Peldszus, Andreas and Stede, Manfred},
  booktitle={Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon},
  volume={2},
  pages={801--815},
  year={2015}
}
```

```
@inproceedings{peldszus2014towards,
  title={Towards segment-based recognition of argumentation structure in short texts},
  author={Peldszus, Andreas},
  booktitle={Proceedings of the First Workshop on Argumentation Mining},
  pages={88--97},
  year={2014}
}
```

### Contributions

Thanks to [@idalr](https://github.com/idalr) for adding this dataset.