kera7 committed on
Commit 7ffc2b8 · verified · 1 Parent(s): 3b1db94

Added dataset card template, basic descriptions

Files changed (1): README.md (+119 −1)
README.md CHANGED
@@ -9,4 +9,122 @@ language:
  pretty_name: Wikipedia Deletion Discussions with stance and policy labels
  size_categories:
  - 100K<n<1M
- ---
+ ---
+ # Dataset Card for Wiki-Stance
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ This is the Wiki-Stance dataset, introduced in the EMNLP 2023 paper "[Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions](https://aclanthology.org/2023.emnlp-main.361/)".
+
+ A preprint of the paper is available on [arXiv](https://arxiv.org/abs/2310.05779).
+
+ ### Dataset Sources
+
+ - **Repository:** https://github.com/copenlu/wiki-stance
+ - **Paper:** https://aclanthology.org/2023.emnlp-main.361/
+
+ ### Column name descriptions:
+
+ - *title* - Title of the Wikipedia page under consideration for deletion
+ - *username* - Wikipedia username of the author of the comment
+ - *timestamp* - Timestamp of the comment
+ - *decision* - Stance label for the comment in the original language
+ - *comment* - Text of the deletion discussion comment by a Wikipedia editor
+ - *topic* - Topic for the stance task (usually "Deletion of [Title]")
+ - *en_label* - English translation of the decision
+ - *policy* - Wikipedia policy code relevant to the comment
+ - *policy_title* - Title of the Wikipedia policy relevant to the comment
+ - *policy_index* - Index of the Wikipedia policy (specific to our dataset)
+
+
+ ## Uses
+
+ <!-- Address questions around how the dataset is intended to be used. -->
+
+ ### Direct Use
+
+ <!-- This section describes suitable use cases for the dataset. -->
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ <!-- Motivation for the creation of this dataset. -->
+
+ [More Information Needed]
+
+ ### Source Data
+
+ <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
+
+ #### Data Collection and Processing
+
+ <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
+
+ [More Information Needed]
+
+ #### Who are the source data producers?
+
+ <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
+
+ [More Information Needed]
+
+ ### Annotations [optional]
+
+ <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
+
+ #### Annotation process
+
+ <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ <!-- This section describes the people or systems who created the annotations. -->
+
+ [More Information Needed]
+
+ #### Personal and Sensitive Information
+
+ <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ ## Citation
+
+ If you find our dataset helpful, please cite it in your work as follows:
+
+ ```bibtex
+ @inproceedings{kaffee-etal-2023-article,
+     title = "Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual {W}ikipedia Editor Discussions",
+     author = "Kaffee, Lucie-Aim{\'e}e and
+       Arora, Arnav and
+       Augenstein, Isabelle",
+     editor = "Bouamor, Houda and
+       Pino, Juan and
+       Bali, Kalika",
+     booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
+     month = dec,
+     year = "2023",
+     address = "Singapore",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2023.emnlp-main.361",
+     doi = "10.18653/v1/2023.emnlp-main.361",
+     pages = "5891--5909",
+     abstract = "The moderation of content on online platforms is usually non-transparent. On Wikipedia, however, this discussion is carried out publicly and editors are encouraged to use the content moderation policies as explanations for making moderation decisions. Currently, only a few comments explicitly mention those policies {--} 20{\%} of the English ones, but as few as 2{\%} of the German and Turkish comments. To aid in this process of understanding how content is moderated, we construct a novel multilingual dataset of Wikipedia editor discussions along with their reasoning in three languages. The dataset contains the stances of the editors (keep, delete, merge, comment), along with the stated reason, and a content moderation policy, for each edit decision. We demonstrate that stance and corresponding reason (policy) can be predicted jointly with a high degree of accuracy, adding transparency to the decision-making process. We release both our joint prediction models and the multilingual content moderation dataset for further research on automated transparent content moderation.",
+ }
+ ```