# Dataset Card for "CDCP"

### Dataset Summary

CDCP (a.k.a. *Cornell eRulemaking Corpus*; [Park and Cardie, 2018](https://aclanthology.org/L18-1257.pdf)) consists of 731 user comments, in English, from an eRulemaking platform. Five types of components (`Fact`, `Testimony`, `Reference`, `Value`, and `Policy`) and two types of supporting relations (`Reason` and `Evidence`) are annotated on the basis of the study by Park et al. (2015). The resulting dataset contains 4931 elementary unit and 1221 support relation annotations (pp. 1623-1624).
### Supported Tasks and Leaderboards

- **Tasks:** Argument Mining, Link Prediction, Component Classification, Relation Classification
- **Leaderboards:** https://paperswithcode.com/dataset/cdcp

### Languages

The language in the dataset is English (AmE).

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 5.37 MB

```
{
  'id': "00195",
  'text': "State and local court rules sometimes make default judgments much more likely. For example, when a person who allegedly owes a debt is told to come to court on a work day, they may be forced to choose between a default judgment and their job. I urge the CFPB to find practices that involve scheduling hearings at inconvenient times unfair, deceptive, and abusive, or inconsistent with 1692i",
  'proposition': {
    "start": [0, 78, 242],
    "end": [78, 242, 391],
    "label": [4, 4, 1],
    "url": ["", "", ""],
  },
  'relations': {"head": [0, 2], "tail": [1, 0], "label": [1, 1]},
}
```
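A minimal sketch of loading and inspecting the dataset with the 🤗 `datasets` library; the Hub ID `DFKI-SLT/cdcp` is taken from the loading-script links below, and the field access assumes the schema shown above:

```
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub. Recent versions of `datasets`
# may require trust_remote_code=True, since the dataset uses a loading script.
dataset = load_dataset("DFKI-SLT/cdcp")

# Inspect the first training instance (schema as in the example above).
example = dataset["train"][0]
print(example["id"])
print(example["text"][:80])
print(example["proposition"]["label"])
```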
### Data Fields

- `id`: the instance id of the text, a `string` feature
- `text`: the text (with URLs marked as `__URL__`), a `string` feature
- `proposition`: the annotated spans with labels and URLs (if applicable), a `dictionary` feature
  - `start`: the indices indicating the inclusive starts of the spans, a `list` of `int` feature
  - `end`: the indices indicating the exclusive ends of the spans, a `list` of `int` feature (see the slicing sketch after this list)
  - `label`: the indices indicating the span types, a `list` of `int` feature (see [label list](https://huggingface.co/datasets/DFKI-SLT/cdcp/blob/main/cdcp.py#L40))
  - `url`: the URL attached to each proposition (an empty string if there is none), a `list` of `string` feature
- `relations`: the relations between labeled spans, a `dictionary` feature
  - `head`: the indices indicating the first span in each relation, a `list` of `int` feature
  - `tail`: the indices indicating the second span in each relation, a `list` of `int` feature
  - `label`: the indices indicating the relation types, a `list` of `int` feature (see [label list](https://huggingface.co/datasets/DFKI-SLT/cdcp/blob/main/cdcp.py#L41))
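Since `start` is inclusive and `end` is exclusive, each proposition can be recovered by slicing `text` directly. A minimal sketch, reusing `dataset` from the loading example above:

```
# Reconstruct the proposition spans of one instance by slicing the text.
example = dataset["train"][0]
props = example["proposition"]
for start, end, label in zip(props["start"], props["end"], props["label"]):
    # `start` is inclusive and `end` is exclusive, so plain slicing works.
    print(label, example["text"][start:end])
```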
### Data Splits

|                                                                                                  | train | test |
| ------------------------------------------------------------------------------------------------ | ---------------------------------------: | -------------------------------------: |
| No. of instances                                                                                 | 581 | 150 |
| No. of span labels<br/>- `Fact`<br/>- `Testimony`<br/>- `Reference`<br/>- `Value`<br/>- `Policy` | <br/>654<br/>873<br/>31<br/>1686<br/>662 | <br/>132<br/>244<br/>1<br/>496<br/>153 |
| No. of relation labels<br/>- `reason`<br/>- `evidence`                                           | <br/>1055<br/>47 | <br/>298<br/>26 |
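The counts above can be re-derived by tallying the label lists per split; a sketch under the same schema assumptions as above:

```
from collections import Counter

# Tally span and relation labels for each split.
for split in ("train", "test"):
    span_counts, relation_counts = Counter(), Counter()
    for ex in dataset[split]:
        span_counts.update(ex["proposition"]["label"])
        relation_counts.update(ex["relations"]["label"])
    print(split, len(dataset[split]), dict(span_counts), dict(relation_counts))
```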
## Dataset Creation

### Curation Rationale

"eRulemaking is a means for government agencies to directly reach citizens to solicit their opinions and experiences regarding newly proposed rules. The effort, however, is partly hampered by citizens’ comments that lack reasoning and evidence, which are largely ignored since government agencies are unable to evaluate the validity and strength." (p. 1623)

"It will be a valuable resource for building argument mining systems that can not only extract arguments from unstructured text, but also identify ways in which a given argument can be improved with respect to its evaluability." (p. 1624)

### Source Data

eRulemaking comments (see [eRulemaking](https://www.gsa.gov/about-us/organization/federal-acquisition-service/technology-transformation-services/erulemaking))

#### Initial Data Collection and Normalization

"Annotated 731 user comments on Consumer Debt Collection Practices (CDCP) rule by the Consumer Financial Protection Bureau (CFPB) posted on www.regulationroom.org." (p. 1624)

#### Who are the source language producers?

Members of the general public, presumably American citizens.

"According to a voluntary user survey that asked the commenters to self-identify themselves, about 64% of the comments came from consumers, 22% from debt collectors, and the remainder from others, such as consumer advocates and counsellor organizations." (p. 1624)
### Annotations

#### Annotation process

"The annotators annotated the elementary units and support relations defined in the argumentation model proposed by [Park et al. (2015)](https://dl.acm.org/doi/10.1145/2746090.2746118)."

"Each user comment was annotated by two annotators, who independently determined the types of elementary units and support relations among them using the GATE annotation tool (Cunningham et al., 2011). A third annotator manually resolved the conflicts to produce the final dataset."

"Inter-annotator agreement between 2 annotators is measured with Krippendorff’s α with respect to elementary unit type (α=64.8%) and support relations (α=44.1%); IDs of supported elementary units are treated as labels for the supporting elementary units."

(p. 1626)
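For reference, agreement scores of this kind can be computed with the third-party `krippendorff` package; a toy sketch with illustrative labels only, not the authors' actual annotation data:

```
import numpy as np
import krippendorff  # pip install krippendorff

# Rows = annotators, columns = elementary units; values are proposition-type
# ids, with np.nan marking units an annotator did not label (toy data).
reliability_data = np.array([
    [0.0, 1.0, 4.0, 4.0, np.nan],
    [0.0, 1.0, 4.0, 3.0, 2.0],
])
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```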
#### Who are the annotators?

\[More Information Needed\]

### Personal and Sensitive Information

\[More Information Needed\]

## Considerations for Using the Data

### Social Impact of Dataset

"Immediate applications include automatically ranking arguments based on their evaluability for a (crude) identification of read-worthy comments and providing real-time feedback to writers, specifying which types of support for which propositions can be added to construct better-formed arguments." (p. 1624)
### Discussion of Biases

About 45% of the elementary units are of the `Value` type. A significant portion, roughly 75%, of the support relation annotations hold between adjacent elementary units. While commenters certainly tend to provide reasons immediately after the proposition to be supported, it is also easier for annotators to identify support relations between nearby units. Thus, support relations in the wild may not be as skewed toward adjacent elementary units as they are here. (pp. 1626-1627)
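The adjacency skew can be checked directly from the `relations` indices; a sketch under the same schema assumptions as above:

```
# Fraction of support relations connecting adjacent elementary units.
adjacent = total = 0
for ex in dataset["train"]:
    for head, tail in zip(ex["relations"]["head"], ex["relations"]["tail"]):
        total += 1
        adjacent += int(abs(head - tail) == 1)
print(f"adjacent relations: {adjacent / total:.1%}")
```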
### Other Known Limitations

\[More Information Needed\]

## Additional Information

### Dataset Curators

\[More Information Needed\]

### Licensing Information

\[More Information Needed\]

### Citation Information
```
@inproceedings{park2018corpus,
  title={A Corpus of eRulemaking User Comments for Measuring Evaluability of Arguments},
  author={Park, Joonsuk and Cardie, Claire},
  booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year={2018}
}
```

### Contributions

Thanks to [@idalr](https://github.com/idalr) for adding this dataset.