README.md
## Dataset Description

- **Point of Contact:** Blanca Calvo ([aina@bsc.es](mailto:aina@bsc.es))

### Dataset Summary

CaSET is a Catalan dataset of tweets annotated with Emotions, Static Stance, and Dynamic Stance. It contains 11k unique sentences on five polemical topics, grouped into 6k pairs of sentences, where each pair consists of an original message and an answer to that message.

### Supported Tasks and Leaderboards

This dataset can be used to train models for emotion detection, static stance detection, and dynamic stance detection.

### Languages

The dataset is in Catalan (`ca-CA`).

## Dataset Structure

Each instance in the dataset is a pair of original-answer messages, annotated with the relation between the two messages (the dynamic stance) and the topic of the messages. For each message, the dataset provides the ID needed to retrieve it with the Twitter API, the emotions identified in the message, and the stance of the message towards the topic (the static stance). The text fields have to be retrieved using the Twitter API.
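Because only tweet IDs are distributed, the text fields must be hydrated from the Twitter API before use. A minimal sketch of building such a request, assuming the Twitter API v2 batched tweet-lookup endpoint and bearer-token authentication (the endpoint shape, parameters, and the 100-id limit are assumptions to verify against the current API documentation, not part of the dataset):

```python
# Sketch: build a batched tweet-lookup request to hydrate the text fields.
# The endpoint, auth scheme, and per-request limit are assumptions based on
# the Twitter API v2 docs, not part of the dataset itself.
from urllib.parse import urlencode

API_BASE = "https://api.twitter.com/2/tweets"  # assumed v2 lookup endpoint

def build_lookup_request(tweet_ids, bearer_token):
    """Return (url, headers) for one batched lookup (assumed max 100 ids)."""
    if len(tweet_ids) > 100:
        raise ValueError("v2 tweet lookup accepts at most 100 ids per call")
    query = urlencode({"ids": ",".join(tweet_ids), "tweet.fields": "text"})
    return f"{API_BASE}?{query}", {"Authorization": f"Bearer {bearer_token}"}

# IDs taken from the example data instance in this card
url, headers = build_lookup_request(
    ["1413960970066710533", "1413968453690658816"], "YOUR_BEARER_TOKEN"
)
```

The response can then be matched back to `id_original` and `id_answer` to fill in `original_text` and `answer_text`.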

### Data Instances

```
{"id_original": "1413960970066710533",
 "id_answer": "1413968453690658816",
 "original_text": "",
 "answer_text": "",
 "topic": "vaccines",
 "dynamic_stance": "Disagree",
 "original_stance": "FAVOUR",
 "answer_stance": "AGAINST",
 "original_emotion": ["distrust", "joy", "disgust"],
 "answer_emotion": ["distrust"]}
```
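A self-contained sketch of checking one instance against the label sets documented in the Annotations section (field names follow the example above; this is illustrative, not an official loader):

```python
# Sketch: validate one CaSET pair against the documented label sets.
# The sets below are copied from the Annotations section of this card;
# emotion labels are lowercased to match the example instance.
EMOTIONS = {"anger", "anticipation", "disgust", "fear", "joy",
            "sadness", "surprise", "distrust"}
STATIC_STANCES = {"FAVOUR", "AGAINST", "NEUTRAL", "NA"}
DYNAMIC_STANCES = {"Agree", "Disagree", "Elaborate", "Query",
                   "Neutral", "Unrelated", "NA"}

def validate_pair(instance):
    """Check that an instance's labels belong to the documented label sets."""
    assert instance["dynamic_stance"] in DYNAMIC_STANCES
    for key in ("original_stance", "answer_stance"):
        assert instance[key] in STATIC_STANCES
    for key in ("original_emotion", "answer_emotion"):
        assert set(instance[key]) <= EMOTIONS  # multi-label: subset check
    return True

example = {
    "id_original": "1413960970066710533",
    "id_answer": "1413968453690658816",
    "topic": "vaccines",
    "dynamic_stance": "Disagree",
    "original_stance": "FAVOUR",
    "answer_stance": "AGAINST",
    "original_emotion": ["distrust", "joy", "disgust"],
    "answer_emotion": ["distrust"],
}
validate_pair(example)
```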

We created this corpus to contribute to the development of language models in Catalan.

### Source Data

The data was collected by the Barcelona Supercomputing Center using the Twitter API.

#### Initial Data Collection and Normalization

The data was collected based on a list of keywords related to the five topics included in the dataset: vaccines, rent regulation, surrogate pregnancy, airport expansion, and the rigging of a TV show.

#### Who are the source language producers?

The source language producers are users of Twitter.

### Annotations

Emotions are annotated in a multi-label fashion. The labels can be: Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise, Distrust.

Static stance is annotated per message. The labels can be: FAVOUR, AGAINST, NEUTRAL, NA.

Dynamic stance is annotated per pair. The labels can be: Agree, Disagree, Elaborate, Query, Neutral, Unrelated, NA.

#### Annotation process

For emotions, there were three annotators; the gold labels are an aggregation of the labels assigned by all three. The inter-annotator agreement (IAA), calculated with Fleiss' kappa per label, was 45.38 on average.

For static stance, there were two annotators; in cases of disagreement, a third annotator chose the gold label. The overall Fleiss' kappa between the two annotators is 82.71.

For dynamic stance, there were four annotators. If at least three of the annotators disagreed, a fifth annotator chose the gold label. The overall Fleiss' kappa between the four annotators was 56.51, and the average Fleiss' kappa of the annotators against the gold labels is 85.17.
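The agreement figures above are Fleiss' kappa values reported on a 0-100 scale. An illustrative implementation on toy data (not the authors' actual computation):

```python
# Illustrative Fleiss' kappa on toy data (not the authors' actual pipeline).
def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning item i to category j."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Per-item observed agreement P_i
    p_items = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_items) / n_items
    # Chance agreement P_e from the marginal category proportions
    n_cats = len(counts[0])
    p_cats = [sum(row[j] for row in counts) / (n_items * n_raters)
              for j in range(n_cats)]
    p_e = sum(p * p for p in p_cats)
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement: 3 raters, 4 items, 2 categories -> kappa = 1.0
print(fleiss_kappa([[3, 0], [3, 0], [0, 3], [0, 3]]))  # prints 1.0
```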
#### Who are the annotators?

All the annotators are native speakers of Catalan.

### Personal and Sensitive Information

We are aware that, since the data comes from social media, it will contain biases.

### Other Known Limitations

The dataset has to be downloaded using the Twitter API, so some instances might be lost.

## Additional Information