---
title: Misogyny Detection It Space
emoji: π
colorFrom: blue
colorTo: gray
sdk: gradio
sdk_version: 5.12.0
app_file: app.py
pinned: false
license: cc-by-nc-sa-4.0
short_description: Misogyny Detection in Italian Text
---
# Misogyny Detection in Italian Text

This Hugging Face Space demonstrates a misogyny detection system fine-tuned on the AMI (Automatic Misogyny Identification) dataset for Italian text. The model is based on BERT and classifies text into two categories:
- Non-Misogynous (Label = 0): Texts that do not contain misogynistic content.
- Misogynous (Label = 1): Texts that contain misogynistic content.
## How to Use

To test the model, enter Italian text in the input field and click "Submit". The model classifies the text as either Misogynous or Non-Misogynous.
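Beyond the Space UI, the model can also be queried locally. The sketch below is a minimal example using the `transformers` `pipeline` API; the model repository id is a placeholder assumption, not the Space's actual id, so substitute the real one from the model repository.

```python
# Minimal sketch of local inference, assuming a standard
# text-classification checkpoint pushed to the Hugging Face Hub.

LABELS = {0: "Non-Misogynous", 1: "Misogynous"}


def label_name(label_id: int) -> str:
    """Map the model's integer label (0 or 1) to its human-readable name."""
    return LABELS[label_id]


def classify(text: str):
    # Heavy import kept inside the function so the sketch stays import-safe.
    from transformers import pipeline

    # Placeholder repo id -- replace with the actual fine-tuned model.
    clf = pipeline("text-classification", model="your-username/ami-italian-bert")
    return clf(text)


if __name__ == "__main__":
    print(classify("Un esempio di testo italiano."))
```

The first call downloads the model weights, so expect a delay on a cold start.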
## Model Details
- Model Type: BERT-based model for text classification.
- Language: Italian.
- License: CC BY-NC-SA 4.0.
- Repository: Hugging Face Model Repository.
- Dataset: The model is fine-tuned on the AMI (Automatic Misogyny Identification) dataset.
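For reference, an `app.py` wiring these pieces together might look like the sketch below. This is an illustrative reconstruction, not the Space's actual code: the model id is a placeholder, and the `LABEL_0`/`LABEL_1` output naming is an assumption about the checkpoint's config.

```python
# Hypothetical Gradio app sketch for this Space (placeholder model id).

def format_prediction(label_id: int, score: float) -> str:
    """Render a prediction as e.g. 'Misogynous (92.31%)'."""
    name = "Misogynous" if label_id == 1 else "Non-Misogynous"
    return f"{name} ({score:.2%})"


def main():
    import gradio as gr
    from transformers import pipeline

    # Placeholder repo id -- replace with the actual fine-tuned model.
    clf = pipeline("text-classification", model="your-username/ami-italian-bert")

    def predict(text: str) -> str:
        out = clf(text)[0]
        # Assumes the checkpoint emits labels named LABEL_0 / LABEL_1.
        label_id = int(out["label"].split("_")[-1])
        return format_prediction(label_id, out["score"])

    gr.Interface(
        fn=predict,
        inputs=gr.Textbox(label="Italian text"),
        outputs=gr.Textbox(label="Prediction"),
        title="Misogyny Detection in Italian Text",
    ).launch()


if __name__ == "__main__":
    main()
```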