Commit f39bc45 · 1 parent: bf68174
Update README file.
README.md CHANGED
@@ -14,7 +14,7 @@ size_categories:
 ## Summary
 
 This dataset contains user comments from an Austrian online newspaper.
-The comments have been annotated by
+The comments have been annotated by 4 or more out of 11 annotators
 as to how strongly sexism/misogyny is present in the comment.
 
 For each comment, the code of the annotator and the label assigned is given for all
@@ -22,7 +22,10 @@ annotators which have annotated that comment. Labels represent the severity of
 sexism/misogyny present in the comment from 0 (none),
 1 (mild), 2 (present), 3 (strong) to 4 (severe).
 
-The dataset
+The dataset contains 7984 comments. We provide the data using the same split as was used for
+the [GermEval2024 GerMS-Detect shared task](https://ofai.github.io/GermEval2024-GerMS/) with
+a training set of 5998 comments and a test set of 1986 comments. No dev set is provided, as the
+choice of dev set may be best left to the machine learning researcher/engineer.
 
 A unique property of this corpus is that it contains only a small portion of sexist/misogynist remarks
 which use strong language, curse words or otherwise blatantly offensive terms, a large number
@@ -40,37 +43,26 @@ All comments are in a single JSONL file, one comment per line with the following
 * `round`: comments were annotated in rounds of 100; this gives the round identifier as a string containing a two-digit round number, e.g. "00" or "13"
 * `source`: the code which identifies how comments which are likely negative and positive examples were selected for the annotation round
 
-### Annotator codes - the following table shows the possible annotator codes and the number of comments annotated by each of them
-
-| Annotator code | Annotations |
-| -- | --: |
-| A1m | 1298 |
-| A2f | 7995 |
-| A3m | 1699 |
-| A4m | 1898 |
-| A5f | 2097 |
-| A7f | 1698 |
-| A8f | 2498 |
-| A9f | 3897 |
-
-The suffix of the annotator code identifies the self-declared gender (f=female, m=male) of the annotator.
+### Annotator codes - the following table shows the possible annotator codes and the number of comments annotated by each of them for the Train, Test and
+combined data:
 
-
+| Annotator code | Train | Test | All |
+| -- | --: | --: | --: |
+| A001 | 970 | 328 | 1298 |
+| A002 | 5998 | 1986 | 7984 |
+| A003 | 1242 | 456 | 1698 |
+| A004 | 1394 | 504 | 1898 |
+| A005 | 1552 | 542 | 2094 |
+| A007 | 1246 | 451 | 1697 |
+| A008 | 1849 | 649 | 2498 |
+| A009 | 2923 | 971 | 3894 |
+| A010 | 5998 | 1986 | 7984 |
+| A011 | 927 | 114 | 1041 |
+| A012 | 5998 | 1661 | 7659 |
 
-
+Annotator IDs are anonymized and deliberately do not give demographic information. Among the 11 annotators there were
+4 male and 7 female annotators. There were 7 annotators who are content moderators and 4 annotators who are not.
 
-| Comment source | number of comments |
-| --- | ---: |
-| forum1-sexist | 1400
-| meld02/meld02neg | 1000
-| meld02/neg01 | 999
-| meld04/meld04neg | 899
-| forum2 | 800
-| forum1 | 799
-| meld02CLpos/meld02CLneg | 700
-| meld01/meld01neg | 698
-| meld03/meld03neg | 500
-| meld01/neg01 | 200
 
 
 ## Language
@@ -92,4 +84,16 @@ See the detailed [datasheet](./datasheet.md)
 
 ## Papers
 
-
+Brigitte Krenn, Johann Petrak, Marina Kubina, and Christian Burger. 2024.
+_GERMS-AT: A Sexism/Misogyny Dataset of Forum Comments from an Austrian Online Newspaper._
+In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7728–7739.
+
+```
+@inproceedings{krenn2024,
+  title={{GERMS-AT}: A Sexism/Misogyny Dataset of Forum Comments from an {A}ustrian Online Newspaper},
+  author={Krenn, Brigitte and Petrak, Johann and Kubina, Marina and Burger, Christian},
+  booktitle={Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
+  pages={7728--7739},
+  year={2024}
+}
+```
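The updated README describes the data as one JSON record per comment per line, with per-annotator labels from 0 (none) to 4 (severe) plus `round` and `source` fields. A minimal reading sketch might look like the following; note that only `round` and `source` are named in the README, so the `annotations` list and its `user`/`label` keys are hypothetical placeholders for however the per-annotator labels are actually keyed:

```python
import json

# Two hypothetical JSONL records mimicking the README's description:
# `round` and `source` come from the README, the `annotations` layout is assumed.
sample_jsonl = """\
{"text": "...", "round": "00", "source": "forum1", "annotations": [{"user": "A001", "label": 2}, {"user": "A002", "label": 3}]}
{"text": "...", "round": "13", "source": "meld02", "annotations": [{"user": "A002", "label": 0}, {"user": "A010", "label": 0}]}
"""

def max_severity(record):
    """Strongest sexism/misogyny label (0=none .. 4=severe) assigned by any annotator."""
    return max(a["label"] for a in record["annotations"])

records = [json.loads(line) for line in sample_jsonl.splitlines()]
severities = [max_severity(r) for r in records]
print(severities)  # -> [3, 0]
```

Taking the maximum over annotators is just one possible aggregation; since the dataset deliberately keeps every individual annotation, a majority vote or a mean severity could be computed the same way.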