Tasks: Text Classification
Sub-tasks: multi-label-classification
Modalities: Text
Formats: parquet
Languages: Polish
Size: 10K - 100K
License: cc-by-4.0
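
Since the card lists parquet as the storage format, the data can be read directly with the Hugging Face `datasets` library. A minimal sketch; the repository id below is a placeholder, as the real id is not shown in this excerpt:

```python
# Sketch of loading this parquet-backed dataset with the Hugging Face
# `datasets` library. "clarin-pl/emotion-reviews" is a hypothetical
# repository id, not the dataset's real identifier.
from datasets import load_dataset

dataset = load_dataset("clarin-pl/emotion-reviews")  # hypothetical id

print(dataset)              # available splits and their sizes
print(dataset["train"][0])  # first row of the training split
```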
Update README.md #1
opened by KoconJan

README.md CHANGED
```diff
@@ -53,26 +53,27 @@ language:
 tags:
 - emotion
 - sentence-classification
+- emotion recognition
 task_ids:
 - multi-label-classification
-license:
+license: cc-by-4.0
 ---
 
 ## Dataset
-The dataset is made up of consumer reviews written in Polish. Those reviews belong to four domains: hotels, medicine, products, and
+The dataset is made up of consumer reviews written in Polish. The reviews belong to four domains: hotels, medicine, products, and university. The collection also contains non-opinion informative texts from the same domains (these are mostly neutral). Each sentence, as well as each review as a whole, is annotated with emotions from Plutchik's wheel of emotions (joy, trust, anticipation, surprise, fear, sadness, disgust, anger) and with the perceived sentiment (positive, negative, neutral); ambivalent sentiment is labeled with both the positive and the negative label. The dataset was annotated by six people who did not see each other's decisions. The annotations were aggregated by selecting labels assigned by at least 2 out of 6 annotators, so controversial texts and sentences can carry opposing emotions. While each sentence has its own annotation, the annotations were created in the context of the whole review.
 
-For more information about this dataset see
+For more information about this dataset, see references [1](#ref-1) and [2](#ref-2).
 
 ### Training set
-Training data consists of 776 reviews containing 6393 sentences
+The training data consists of 776 reviews containing 6393 sentences, randomly selected from the whole dataset. The split was done at the level of whole reviews, so no review is split between sets.
 
 ### Test sets
-Two test sets
+The two test sets contain 167 reviews each, with 1234 and 1264 sentence annotations respectively.
 
 ### Dataset format
 The datasets are stored in three directories (training and two test sets). All datasets have the same format.
 
-Input rows contain ordered sentences of reviews. Each review ends with a sentence made out of only the symbol #. This sentence annotation corresponds to the annotation of the whole review and is not a sentence annotation. This sentence is not a part of the original review and should not be treated as such
+Input rows contain the ordered sentences of the reviews. Each review ends with a row whose sentence consists only of the symbol #. The annotation of this row is the annotation of the whole review, not a sentence annotation. This row is not part of the original review and should not be treated as such; it only marks the end of the current review and carries the review-level annotation. The row after it holds the first sentence of the next review.
 
 Example:
 
```
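
The "at least 2 of 6 annotators" aggregation rule in the new description is easy to make concrete. A hedged sketch, assuming annotator decisions are available as per-annotator label sets; the published dataset ships only the aggregated labels:

```python
from collections import Counter

# Sketch of the "at least 2 of 6 annotators" aggregation rule described in
# the card. Per-annotator label sets are an assumption for illustration.
def aggregate(annotations, min_votes=2):
    votes = Counter(label for labels in annotations for label in labels)
    return {label for label, count in votes.items() if count >= min_votes}

# Opposing emotions both survive when each gets two votes, which is why
# controversial sentences can carry contradictory labels.
print(aggregate([{"joy"}, {"joy"}, {"anger"}, {"anger"}, {"sadness"}, set()]))
# {'joy', 'anger'}
```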
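The `#` sentinel convention described in the new format section lends itself to a simple parsing loop. A minimal sketch, assuming a tab-separated `sentence<TAB>labels` row layout; the actual column layout is not shown here, since the card's own example is truncated in this excerpt:

```python
# Sketch of reading the layout described above: rows hold the ordered
# sentences of a review, and a row whose sentence is just "#" carries the
# review-level annotation and closes the review. The tab-separated
# "sentence<TAB>labels" layout is an assumption.
def read_reviews(path):
    reviews = []
    sentences = []  # (sentence, labels) pairs of the current review
    with open(path, encoding="utf-8") as f:
        for line in f:
            sentence, _, labels = line.rstrip("\n").partition("\t")
            if sentence == "#":
                # Sentinel row: annotates the whole review, not a sentence,
                # and is not part of the review text itself.
                reviews.append({"sentences": sentences,
                                "review_labels": labels})
                sentences = []  # the next row starts a new review
            else:
                sentences.append((sentence, labels))
    return reviews
```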