---
language:
- es
pretty_name: contextualized_hate_speech
task_categories:
- text-classification
tags:
- hate_speech
size_categories:
- 10K<n<100K
---
# Contextualized Hate Speech: A dataset of comments in news outlets on Twitter
## Dataset Description
- **Repository**: [https://github.com/finiteautomata/contextualized-hatespeech-classification](https://github.com/finiteautomata/contextualized-hatespeech-classification)
- **Paper**: ["Assessing the impact of contextual information in hate speech detection"](https://arxiv.org/abs/2210.00465), Juan Manuel Pérez, Franco Luque, Demian Zayat, Martín Kondratzky, Agustín Moro, Pablo Serrati, Joaquín Zajac, Paula Miguel, Natalia Debandi, Agustín Gravano, Viviana Cotik
- **Point of Contact**: jmperez (at) dc uba ar
### Dataset Summary
![Graphical representation of the dataset](Dataset%20graph.png)
This dataset is a collection of tweets that were posted in response to news articles from five specific Argentinean news outlets: Clarín, Infobae, La Nación, Perfil and Crónica, during the COVID-19 pandemic. The comments were analyzed for hate speech across eight different characteristics: against women, racist content, class hatred, against LGBTQ+ individuals, against physical appearance, against people with disabilities, against criminals, and for political reasons. All the data is in Spanish.
Each comment is labeled with the following variables:
| Label | Description |
| :--------- | :---------------------------------------------------------------------- |
| HATEFUL | Contains hate speech (HS)? |
| CALLS | If it is hateful, is this message calling to (possibly violent) action? |
| WOMEN | Is this against women? |
| LGBTI | Is this against LGBTI people? |
| RACISM | Is this a racist message? |
| CLASS | Is this a classist message? |
| POLITICS | Is this HS due to political ideology? |
| DISABLED | Is this HS against disabled people? |
| APPEARANCE | Is this HS against people due to their appearance? (e.g. fatshaming) |
| CRIMINAL | Is this HS against criminals or people in conflict with law? |
In addition to the hate speech categories, the `CALLS` label indicates whether a hateful comment is a call to (possibly violent) action.
The `HATEFUL` and `CALLS` labels are binarized by simple majority; the characteristic (category) labels are set to `1` if at least one annotator marked them as such.
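As a minimal sketch of how the dataset can be loaded and inspected with the `datasets` library (the repository id and split name below are assumptions inferred from the links on this card, not confirmed by it):

```python
from datasets import load_dataset

# Repository id assumed from the raw-version link below; adjust if needed.
ds = load_dataset("piuba-bigdata/contextualized_hate_speech")

print(ds)              # available splits and columns
print(ds["train"][0])  # first labeled comment ("train" split name assumed)
```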
A raw, non-aggregated version of the dataset can be found at [piuba-bigdata/contextualized_hate_speech_raw](https://huggingface.co/datasets/piuba-bigdata/contextualized_hate_speech_raw).
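The aggregation rule described above (majority vote for `HATEFUL` and `CALLS`, any-annotator vote for the category labels) could be reproduced from the raw annotations roughly as in the sketch below. This is illustrative only; the column names (`comment_id`, per-annotation label columns) are assumptions and may not match the raw dataset's actual schema.

```python
import pandas as pd

CATEGORY_COLS = ["WOMEN", "LGBTI", "RACISM", "CLASS",
                 "POLITICS", "DISABLED", "APPEARANCE", "CRIMINAL"]

def aggregate(annotations: pd.DataFrame) -> pd.DataFrame:
    """Aggregate one row per annotation into one row per comment.

    Assumes `annotations` has a `comment_id` column plus 0/1 label columns
    for each annotator's judgment (hypothetical schema).
    """
    grouped = annotations.groupby("comment_id")
    out = pd.DataFrame(index=grouped.size().index)
    # HATEFUL and CALLS: binarized by simple majority of annotators
    out["HATEFUL"] = (grouped["HATEFUL"].mean() > 0.5).astype(int)
    out["CALLS"] = (grouped["CALLS"].mean() > 0.5).astype(int)
    # Category labels: 1 if at least one annotator marked the category
    for col in CATEGORY_COLS:
        out[col] = (grouped[col].max() > 0).astype(int)
    return out.reset_index()
```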
### Citation Information
```bibtex
@article{perez2022contextual,
author = {Pérez, Juan Manuel and Luque, Franco M. and Zayat, Demian and Kondratzky, Martín and Moro, Agustín and Serrati, Pablo Santiago and Zajac, Joaquín and Miguel, Paula and Debandi, Natalia and Gravano, Agustín and Cotik, Viviana},
journal = {IEEE Access},
title = {Assessing the Impact of Contextual Information in Hate Speech Detection},
year = {2023},
volume = {11},
number = {},
pages = {30575-30590},
doi = {10.1109/ACCESS.2023.3258973}
}
```
### Contributions
[More Information Needed]