---
title: Misogyny Detection IT Space
emoji: π
colorFrom: blue
colorTo: gray
sdk: gradio
sdk_version: 5.12.0
app_file: app.py
pinned: false
license: cc-by-nc-sa-4.0
short_description: Misogyny Detection in Italian Text
---
# Misogyny Detection in Italian Text
This Hugging Face Space demonstrates a **misogyny detection system** fine-tuned on the **AMI (Automatic Misogyny Identification)** dataset for Italian text. The model is based on **BERT** and classifies text into two categories:
- **Non-Misogynous (Label = 0)**: Texts that do not contain misogynistic content.
- **Misogynous (Label = 1)**: Texts that contain misogynistic content.
### How to Use
To test the model, enter Italian text in the input field and click "Submit". The model classifies the text as either **Misogynous** or **Non-Misogynous**.
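Outside the Space UI, the model can also be called programmatically. The sketch below is an assumption about the setup, not code from this repository: it presumes the `transformers` library is installed and that the hosted checkpoint works with the standard `text-classification` pipeline; the `classify` helper and `LABELS` mapping are illustrative names.

```python
# Illustrative sketch: querying the hosted model with the `transformers`
# pipeline API (assumed to be compatible; not taken from this Space's app.py).

# Mapping from the model's numeric labels to human-readable class names,
# following the label scheme described above (0 = non-misogynous, 1 = misogynous).
LABELS = {0: "Non-Misogynous", 1: "Misogynous"}


def label_name(label_id: int) -> str:
    """Return the class name for a numeric label."""
    return LABELS[label_id]


def classify(text: str) -> dict:
    """Classify one Italian string (downloads model weights on first call)."""
    from transformers import pipeline  # requires `transformers` and network access

    clf = pipeline(
        "text-classification",
        model="maiurilorenzo/misogyny-detection-it",
    )
    return clf(text)[0]  # e.g. {"label": ..., "score": ...}
```

A call such as `classify("Un testo di esempio in italiano")` would return the predicted label with its confidence score.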
### Model Details
- **Model Type**: BERT-based model for text classification.
- **Language**: Italian.
- **License**: CC BY-NC-SA 4.0.
- **Repository**: [Hugging Face Model Repository](https://huggingface.co/maiurilorenzo/misogyny-detection-it).
- **Dataset**: The model is fine-tuned on the [AMI (Automatic Misogyny Identification) dataset](https://huggingface.co/datasets/sapienzanlp/ami).