# Bias Detector
This model is fine-tuned with PEFT (LoRA) on an existing Hugging Face base model to classify and evaluate bias in news sources.
## Model Details
- Architecture: Transformer-based (e.g., BERT, RoBERTa)
- Fine-tuning Method: Parameter-Efficient Fine-Tuning (LoRA); see the sketch after this list
- Use Case: Bias classification, text summarization, sentiment analysis
- Dataset: ...
- Training Framework: PyTorch + Transformers
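The card does not state the exact base checkpoint, label count, or LoRA hyperparameters, so the values below are illustrative assumptions. A minimal sketch of how a PEFT LoRA setup for sequence classification typically looks:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

# Base checkpoint and label count are assumptions, not confirmed by this card.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=2,
)

# LoRA hyperparameters shown here are common defaults, not the trained values.
lora_config = LoraConfig(
    task_type="SEQ_CLS",  # sequence classification
    r=8,                  # low-rank adapter dimension
    lora_alpha=16,        # adapter scaling factor
    lora_dropout=0.1,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # LoRA trains only a small fraction of weights
```

The appeal of LoRA is visible in that last line: only the small injected adapter matrices are updated during fine-tuning, while the base model's weights stay frozen.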
## Usage
To use this model, install the necessary libraries:

```bash
pip install transformers torch
```
Then load the model and run a prediction:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mjwagerman/bias-detector"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "This is an example news headline."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():  # inference only, no gradients needed
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(dim=-1).item()  # index of the highest-scoring class
```
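The numeric index can be mapped to a label name if the checkpoint's config carries an `id2label` mapping; whether this checkpoint populates it is an assumption:

```python
# Assumes the config defines id2label; otherwise this yields generic
# names like "LABEL_0".
label = model.config.id2label[predicted_class]
print(label)
```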