---
dataset_info:
- config_name: Behaviour
  features:
  - name: text
    dtype: string
  - name: choices
    sequence: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 883966
    num_examples: 5000
  - name: test
    num_bytes: 183067
    num_examples: 1000
  download_size: 458408
  dataset_size: 1067033
- config_name: Synth
  features:
  - name: text
    dtype: string
  - name: choices
    sequence: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 800941
    num_examples: 7014
  - name: test
    num_bytes: 248483
    num_examples: 2908
  download_size: 502169
  dataset_size: 1049424
configs:
- config_name: Behaviour
  data_files:
  - split: train
    path: Behaviour/train-*
  - split: test
    path: Behaviour/test-*
- config_name: Synth
  data_files:
  - split: train
    path: Synth/train-*
  - split: test
    path: Synth/test-*
---

# Automatic Misogyny Identification (AMI)

Original Paper: https://amievalita2020.github.io

Task presented at EVALITA 2020.

This task consists of tweet classification, specifically categorizing the level of misogyny in a given text. We took both subtasks: *raw_dataset*, uploaded as *Behaviour* (3-class classification), and *synthetic*, uploaded as *Synth* (2-class classification).

Data statistics:

- *Behaviour*: 5,000 train / 1,000 test examples
- *Synth*: 7,014 train / 2,908 test examples

Proposed Prompts:

- add
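
Both configs can be read with the 🤗 `datasets` library. A minimal loading sketch follows; note the Hub repo id is a placeholder (this card does not state the dataset's actual path), so substitute the real one:

```python
from datasets import load_dataset

# NOTE: "<org>/<dataset-name>" is a placeholder, not the dataset's actual
# Hub path -- replace it with the repository id of this dataset.
behaviour = load_dataset("<org>/<dataset-name>", "Behaviour")  # 3-class subtask
synth = load_dataset("<org>/<dataset-name>", "Synth")          # 2-class subtask

# Each config exposes 'train' and 'test' splits with
# 'text' (string), 'choices' (sequence of strings), and 'label' (int64) fields.
print(behaviour)              # split sizes: 5000 train / 1000 test
print(behaviour["train"][0])  # {'text': ..., 'choices': [...], 'label': ...}
```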