ArneBinder committed
Commit 8e4d82e • 1 Parent(s): 45cf7a6
add image with example (#1)
README.md
CHANGED
---
language:
- en
size_categories:
- n<1K
---

# Dataset Card for "cdcp"

### Dataset Summary

CDCP (a.k.a. *Cornell eRulemaking Corpus*; [Park and Cardie, 2018](https://aclanthology.org/L18-1257.pdf)) consists of 731 user comments from an eRulemaking platform in the English language. Five types of components (`fact`, `testimony`, `reference`, `value`, and `policy`) and two types of supporting relations (`reason` and `evidence`) are annotated on the basis of the study by Park et al. (2015). The resulting dataset contains 4931 elementary unit and 1221 support relation annotations (pp. 1623-1624). The text is segmented into elementary units, each proposition consisting of a sentence or a clause, along with a few non-argumentative units (Morio et al., 2022, p. 642).
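
For orientation, here is a minimal loading sketch using the Hugging Face `datasets` library. The repository id below is a placeholder, not confirmed by this card; substitute the id of the repository that hosts this dataset.

```python
# Minimal loading sketch (assumption: the data is hosted as a Hugging Face
# dataset; "pie/cdcp" is a placeholder repository id, not confirmed here).
from datasets import load_dataset

dataset = load_dataset("pie/cdcp")  # placeholder id; may require trust_remote_code=True

# The card reports 581 train and 150 test instances (see Data Splits below).
for split_name, split in dataset.items():
    print(split_name, len(split))
```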

### Supported Tasks and Leaderboards

### Data Splits

|                  | train | test |
| ---------------- | ----: | ---: |
| No. of instances |   581 |  150 |

### Label Description and Statistics

In this section, we report our own statistics for the corpus. Note that there are discrepancies between our counts, those reported by the authors (see Park & Cardie, 2018, p. 1627, Table 2), and those of Morio et al. (2022), who also used this corpus.

#### Components

| Component   | train | test | total | percentage |
| ----------- | ----: | ---: | ----: | ---------: |
| `fact`      |   654 |  132 |   786 |      15.9% |
| `testimony` |   873 |  244 |  1117 |      22.6% |
| `reference` |    31 |    1 |    32 |       0.6% |
| `value`     |  1686 |  496 |  2182 |      44.2% |
| `policy`    |   662 |  153 |   815 |      16.5% |

- `value`: "judgments without making specific claims about what should be done"
- `fact`: "expressing or dealing with facts or conditions as perceived without distortion by personal feelings, prejudices, or interpretations"
- `testimony`: "an objective proposition about the author’s personal state or experience"; "often practically impossible to provide objective evidence in online commenting setting"
- `policy`: "a specific course of action to be taken"; "typically contains modal verbs like “should” and “ought to.”"
- `reference`: "a source of objective evidence"

(Park & Cardie, 2018, p. 1625)

#### Relations

| Relation   | train | test | total | percentage |
| :--------- | ----: | ---: | ----: | ---------: |
| `reason`   |  1055 |  298 |  1353 |      94.9% |
| `evidence` |    47 |   26 |    73 |       5.1% |

- `reason`: "X (source) is `reason` for a proposition Y (target; `policy`, `value`, `fact`, `testimony`) if X provides rationale for Y"
- `evidence`: "X (`testimony`, `fact`, `reference`) is `evidence` for a proposition Y if X proves whether proposition Y is true or not"

(Park & Cardie, 2018, pp. 1625-1626)
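
The counts in the two tables above could be reproduced along the following lines. This is a sketch under assumptions: it presumes the placeholder repository id from the loading sketch earlier and a schema in which each document carries `propositions` and `relations` dictionaries that each hold a `label` list; neither field name is confirmed by this card.

```python
# Sketch for reproducing the component and relation counts above.
# Schema assumptions (not confirmed by this card): each document has
# `propositions` and `relations` dicts, each holding a `label` list.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("pie/cdcp")  # placeholder id, as in the loading sketch

component_counts, relation_counts = Counter(), Counter()
for split in dataset.values():
    for doc in split:
        component_counts.update(doc["propositions"]["label"])  # assumed field
        relation_counts.update(doc["relations"]["label"])      # assumed field

print(component_counts)  # expected: value > testimony > policy > fact > reference
print(relation_counts)   # expected: reason >> evidence
```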

#### Examples

![Examples](img/cdcp-sam.png)

## Dataset Creation

### Discussion of Biases

About 45% of the elementary units are of the `value` type. A significant portion, roughly 75%, of the support relation annotations hold between adjacent elementary units. While commenters certainly tend to provide reasons immediately after the proposition to be supported, it is also easier for annotators to identify support relations in close proximity. Thus, support relations in the wild may not be as skewed toward adjacent elementary units (Park & Cardie, 2018, pp. 1626-1627).
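
The adjacency share above can be checked mechanically. The sketch below again builds on the assumed schema and placeholder repository id from the earlier sketches, and additionally assumes that each relation stores integer `head`/`tail` indices into the document's proposition list; that indexing scheme is an assumption, not a documented schema.

```python
# Sketch for estimating the share of support relations that connect
# adjacent elementary units (the ~75% figure above). Assumes relations
# store integer head/tail indices into the proposition list (unconfirmed).
from datasets import load_dataset

dataset = load_dataset("pie/cdcp")  # placeholder id, as in the loading sketch

adjacent = total = 0
for split in dataset.values():
    for doc in split:
        for head, tail in zip(doc["relations"]["head"], doc["relations"]["tail"]):
            total += 1
            adjacent += abs(head - tail) == 1
print(f"share of adjacent relations: {adjacent / total:.1%}")
```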

### Other Known Limitations

### Contributions

Thanks to [@idalr](https://github.com/idalr) for adding this dataset.