# Model Card for Racist/Sexist Detection BERT

## Model Description
This model is a fine-tuned BERT model (`bert-base-uncased`) designed for text classification, specifically to detect whether a given text is racist, sexist, or neutral. It has been trained on labeled data to identify harmful language and categorize it accordingly.
- **Developed by:** Om1024
## Uses

### Direct Use
This model can be used to classify text into three categories: racist, sexist, or neutral, based on the content provided.
### Out-of-Scope Use
This model is only suited to text classification within the specific domain of racist and sexist language detection; it is not suitable for other tasks.
## How to Get Started with the Model
Use the following code to load and use the model:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Om1024/racist-bert")
model = AutoModelForSequenceClassification.from_pretrained("Om1024/racist-bert")
```
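
Once loaded, the model can be used like any `AutoModelForSequenceClassification` checkpoint. Below is a minimal inference sketch; the label names in `id2label` are an assumption, since the card does not state which index maps to which class, so check `model.config.id2label` for the actual mapping:

```python
import torch

# Hypothetical label mapping; verify against model.config.id2label.
id2label = {0: "neutral", 1: "racist", 2: "sexist"}

text = "Example sentence to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(id2label[predicted_id])
```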
## Training Details
- **Base Model:** `bert-base-uncased`
- **Fine-tuning Data:** Labeled dataset with categories for racist, sexist, and neutral text.
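
The card does not describe the exact training procedure. As a rough illustration only, fine-tuning `bert-base-uncased` for a three-class task like this with the Hugging Face `Trainer` could look like the sketch below; the data files, hyperparameters, and label order are all placeholders, not details from the original training run:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholder data: any CSV with "text" and "label" columns
# (labels 0/1/2, e.g. neutral/racist/sexist) fits this pattern.
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="racist-bert",
    num_train_epochs=3,              # placeholder hyperparameters
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()
```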