Dataset Card for "cdcp"
Dataset Summary
CDCP (a.k.a. Cornell eRulemaking Corpus; Park and Cardie, 2018) consists of 731 user comments, written in English, from an eRulemaking platform. Five types of components (`fact`, `testimony`, `reference`, `value`, and `policy`) and two types of supporting relations (`reason` and `evidence`) are annotated on the basis of the study by Park et al. (2015). The resulting dataset contains 4931 elementary unit and 1221 support relation annotations (pp. 1623-1624). The spans are segmented into elementary units, with a proposition consisting of a sentence or a clause, as well as a few non-argumentative units (Morio et al., 2022, p. 642).
IMPORTANT: In the original data, there is one entry (id=00411) that has the same base text as another (id=00410) but only a subset of its annotations. The dataset script does not load the broken entry (id=00411), so the train split contains one entry fewer than reported in Park and Cardie (2018).
Supported Tasks and Leaderboards
- Tasks: Argument Mining, Link Prediction, Component Classification, Relation Classification
- Leaderboards: https://paperswithcode.com/dataset/cdcp
Languages
The language in the dataset is English (AmE).
Dataset Structure
Data Instances
- Size of downloaded dataset files: 5.37 MB
{
'id': "00195",
'text': "State and local court rules sometimes make default judgments much more likely. For example, when a person who allegedly owes a debt is told to come to court on a work day, they may be forced to choose between a default judgment and their job. I urge the CFPB to find practices that involve scheduling hearings at inconvenient times unfair, deceptive, and abusive, or inconsistent with 1692i",
'proposition': {
"start": [0, 78, 242],
"end": [78, 242, 391],
"label": [4, 4, 1],
"url": ["", "", ""],
},
'relations': {"head": [0, 2], "tail": [1, 0], "label": [1, 1]},
}
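A minimal loading sketch for an instance like the one above. The repository id below is a placeholder (use the actual Hub path of this dataset), and the loading script requires trusting remote code in recent versions of `datasets`:

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hub path of this dataset.
# The loading script requires trust_remote_code=True in recent `datasets` versions.
ds = load_dataset("cdcp", trust_remote_code=True)

print(ds)  # expected splits: train (580 instances) and test (150 instances)

example = ds["train"][0]
print(example["id"])
print(example["text"][:80])
```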
Data Fields
- `id`: the instance id of the text, a `string` feature
- `text`: the text (with URLs marked as `__URL__`), a `string` feature
- `proposition`: the annotated spans with labels and URLs (if applicable), a `dictionary` feature
  - `start`: the indices indicating the inclusive start of the spans, a `list` of `int` feature
  - `end`: the indices indicating the exclusive end of the spans, a `list` of `int` feature
  - `label`: the indices indicating the span type, a `list` of `int` feature (see label list)
  - `urls`: the URLs linked to the corresponding propositions, a `list` of `str` feature
- `relation`: the relations between labeled spans with relation labels, a `dictionary` feature
  - `head`: the indices indicating the first element in a relation, a `list` of `int` feature
  - `tail`: the indices indicating the second element in a relation, a `list` of `int` feature
  - `label`: the indices indicating the relation type, a `list` of `int` feature (see label list)
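Because `start` is inclusive and `end` is exclusive, each labeled span can be recovered by plain Python slicing. A minimal sketch, using the field names shown in the example instance above (adjust them if your copy of the dataset names them differently):

```python
# `ds` as loaded in the sketch above
example = ds["train"][0]

text = example["text"]
props = example["proposition"]

# `start` is inclusive and `end` is exclusive, so slicing yields each span directly
spans = [text[s:e] for s, e in zip(props["start"], props["end"])]

for label_id, span in zip(props["label"], spans):
    print(label_id, repr(span))

# If the label column is a ClassLabel, the id-to-name mapping can usually be
# recovered from the dataset features; the exact access path depends on the
# feature definition, e.g. something like:
# label_names = ds["train"].features["proposition"].feature["label"].names
```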
Data Splits
| | train | test |
|---|---|---|
| No. of instances | 580 | 150 |
Label Description and Statistics
In this section, we report our own statistics for the corpus. Note that there are discrepancies between our figures, the authors' report (see Park & Cardie, 2018, p. 1627, Table 2), and the statistics of Morio et al. (2022), who also used this corpus.
Components
| Components | train | test | total | percentage |
|---|---|---|---|---|
| fact | 653 | 132 | 785 | 15.9% |
| testimony | 873 | 244 | 1117 | 22.7% |
| reference | 31 | 1 | 32 | 0.6% |
| value | 1686 | 496 | 2182 | 44.3% |
| policy | 658 | 153 | 811 | 16.5% |
- `value`: "judgments without making specific claims about what should be done"
- `fact`: "expressing or dealing with facts or conditions as perceived without distortion by personal feelings, prejudices, or interpretations"
- `testimony`: "an objective proposition about the author’s personal state or experience"; "often practically impossible to provide objective evidence in online commenting setting"
- `policy`: "a specific course of action to be taken"; "typically contains modal verbs like “should” and “ought to.”"
- `reference`: "a source of objective evidence"
(Park & Cardie, 2018, p. 1625)
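A hedged sketch for reproducing the distribution above from the loaded dataset (field names taken from the example instance; the mapping from integer ids to the five component names depends on the dataset's `ClassLabel` definition):

```python
from collections import Counter

def label_distribution(ds, column):
    """Tally label ids of the nested `column` over both splits and report percentages."""
    counts = Counter()
    for split in ("train", "test"):
        for instance in ds[split]:
            counts.update(instance[column]["label"])
    total = sum(counts.values())
    return {lid: (n, round(100 * n / total, 1)) for lid, n in sorted(counts.items())}

# Component distribution (field name as in the example instance above)
print(label_distribution(ds, "proposition"))
```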
Relations
| Relations | train | test | total | percentage |
|---|---|---|---|---|
| reason | 1055 | 298 | 1353 | 94.9% |
| evidence | 47 | 26 | 73 | 5.1% |
- `reason`: "X (source) is `reason` for a proposition Y (target; `policy`, `value`, `fact`, `testimony`) if X provides rationale for Y"
- `evidence`: "X (`testimony`, `fact`, `reference`) is `evidence` for a proposition Y if X proves whether proposition Y is true or not"
(Park & Cardie, 2018, pp. 1625-1626)
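The relation distribution can be obtained with the same helper sketched above (again assuming the field names from the example instance):

```python
# Relation distribution, reusing label_distribution() defined above
print(label_distribution(ds, "relations"))
```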
Examples
Dataset Creation
Curation Rationale
"eRulemaking is a means for government agencies to directly reach citizens to solicit their opinions and experiences regarding newly proposed rules. The effort, however, is partly hampered by citizens’ comments that lack reasoning and evidence, which are largely ignored since government agencies are unable to evaluate the validity and strength." (p. 1623)
"It will be a valuable resource for building argument mining systems that can not only extract arguments from unstructured text, but also identify ways in which a given argument can be improved with respect to its evaluability." (p. 1624)
Source Data
eRulemaking comments (see eRulemaking)
Initial Data Collection and Normalization
"Annotated 731 user comments on Consumer Debt Collection Practices (CDCP) rule by the Consumer Financial Protection Bureau (CFPB) posted on www.regulationroom.org." (p. 1624)
Who are the source language producers?
Members of the general public, presumably American citizens.
"According to a voluntary user survey that asked the commenters to self-identify themselves, about 64% of the comments came from consumers, 22% from debt collectors, and the remainder from others, such as consumer advocates and counsellor organizations." (p. 1624)
Annotations
Annotation process
"The annotators annotated the elementary units and support relations defined in the argumentation model proposed by Park et al. (2015)."
"Each user comment was annotated by two annotators, who independently determined the types of elementary units and support relations among them using the GATE annotation tool (Cunningham et al., 2011). A third annotator manually resolved the conflicts to produce the final dataset."
"Inter-annotator agreement between 2 annotators is measured with Krippendorf’s α with respect to elementary unit type (α=64.8%) and support relations (α=44.1%); IDs of supported elementary units are treated as labels for the supporting elementary units."
(p. 1626)
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
"Immediate applications include automatically ranking arguments based on their evaluability for a (crude) identification of read-worthy comments and providing real-time feedback to writers, specifying which types of support for which propositions can be added to construct better-formed arguments." (p. 1624)
Discussion of Biases
About 45% of the elementary units are of the `value` type. A significant portion, roughly 75%, of support relation annotations are between adjacent elementary units. While commenters certainly tend to provide reasons immediately after the proposition to be supported, it is also easier for annotators to identify support relations in proximity. Thus, support relations in the wild may not be as skewed toward those between adjacent elementary units. (pp. 1626-1627)
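The share of relations between adjacent units can be checked directly. A rough sketch, assuming `head` and `tail` index into the proposition list of the same instance and using the field names from the example instance:

```python
adjacent, total = 0, 0
for split in ("train", "test"):
    for instance in ds[split]:
        rels = instance["relations"]
        for head, tail in zip(rels["head"], rels["tail"]):
            total += 1
            adjacent += abs(head - tail) == 1  # True counts as 1

print(f"{adjacent}/{total} relations ({100 * adjacent / total:.1f}%) are between adjacent units")
```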
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
[More Information Needed]
Citation Information
@inproceedings{park2018corpus,
title={A Corpus of eRulemaking User Comments for Measuring Evaluability of Arguments},
author={Park, Joonsuk and Cardie, Claire},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
Contributions
Thanks to @idalr for adding this dataset.