---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: Meme ID
      dtype: int64
    - name: Language
      dtype: string
    - name: Caption
      dtype: string
    - name: US
      dtype: float64
    - name: DE
      dtype: float64
    - name: MX
      dtype: float64
    - name: CN
      dtype: float64
    - name: IN
      dtype: float64
    - name: Template Name
      dtype: string
    - name: Category
      dtype: string
    - name: Subcategory
      dtype: string
  splits:
    - name: train
      num_bytes: 62034606
      num_examples: 1500
  download_size: 62780586
  dataset_size: 62034606
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
language:
  - en
  - de
  - hi
  - zh
  - es
pretty_name: Multi^3Hate
size_categories:
  - 1K<n<10K
---

# Multi3Hate

**Warning:** this dataset contains content that may be offensive or upsetting.

This repository contains the Multi3Hate dataset, introduced in our paper: [Multi3Hate: Multimodal, Multilingual, and Multicultural Hate Speech Detection with Vision-Language Models](https://arxiv.org/abs/2411.03888).

Abstract: Hate speech moderation on global platforms poses unique challenges due to the multimodal and multilingual nature of content, along with the varying cultural perceptions. How well do current vision-language models (VLMs) navigate these nuances? To investigate this, we create the first multimodal and multilingual parallel hate speech dataset, annotated by a multicultural set of annotators, called Multi3Hate. It contains 300 parallel meme samples across 5 languages: English, German, Spanish, Hindi, and Mandarin. We demonstrate that cultural background significantly affects multimodal hate speech annotation in our dataset. The average pairwise agreement among countries is just 74%, significantly lower than that of randomly selected annotator groups. Our qualitative analysis indicates that the lowest pairwise label agreement—only 67% between the USA and India—can be attributed to cultural factors. We then conduct experiments with 5 large VLMs in a zero-shot setting, finding that these models align more closely with annotations from the US than with those from other cultures, even when the memes and prompts are presented in the dominant language of the other culture.
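The dataset exposes one float column per country (`US`, `DE`, `MX`, `CN`, `IN`) holding that country's hate label for each meme. As a minimal sketch of the pairwise agreement analysis described in the abstract, the snippet below computes average pairwise label agreement across countries. It runs on toy rows that mirror the column schema; the commented `load_dataset` repository id is an assumption, not confirmed by this card, and the label values shown are illustrative only.

```python
# Sketch: average pairwise label agreement between countries.
# In practice you would load the real data, e.g. (repository id is an assumption):
#   from datasets import load_dataset
#   ds = load_dataset("MinhDucBui/Multi3Hate", split="train")
from itertools import combinations

# Toy rows standing in for per-country hate labels
# (1.0 = hateful, 0.0 = not hateful) -- illustrative values only.
rows = [
    {"US": 1.0, "DE": 1.0, "MX": 0.0, "CN": 1.0, "IN": 0.0},
    {"US": 0.0, "DE": 0.0, "MX": 0.0, "CN": 0.0, "IN": 1.0},
    {"US": 1.0, "DE": 0.0, "MX": 1.0, "CN": 1.0, "IN": 1.0},
]

countries = ["US", "DE", "MX", "CN", "IN"]

def pairwise_agreement(rows, a, b):
    """Fraction of memes on which countries a and b assign the same label."""
    matches = sum(r[a] == r[b] for r in rows)
    return matches / len(rows)

pairs = list(combinations(countries, 2))
avg = sum(pairwise_agreement(rows, a, b) for a, b in pairs) / len(pairs)
print(f"average pairwise agreement: {avg:.2f}")
```

On the real data, the paper reports an average pairwise agreement of 74%, with the lowest pair (USA vs. India) at 67%.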

## Dataset Details

### Dataset Sources

#### Who are the source data producers?

The original meme images were crawled from https://memegenerator.net.

### Annotation process

Please refer to our paper for details on the annotation process.

## License

CC BY-NC-ND 4.0

## Citation

BibTeX:

```bibtex
@misc{bui2024multi3hatemultimodalmultilingualmulticultural,
      title={Multi3Hate: Multimodal, Multilingual, and Multicultural Hate Speech Detection with Vision-Language Models},
      author={Minh Duc Bui and Katharina von der Wense and Anne Lauscher},
      year={2024},
      eprint={2411.03888},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.03888},
}
```