---
language: en
datasets:
- cajcodes/political-bias
metrics:
- matthews_corrcoef
- roc_auc
license: mit
widget:
- text: "Tax cuts for the wealthy are essential because they drive economic growth and job creation."
---

# DistilBERT-PoliticalBias

## Overview
`DistilBERT-PoliticalBias` is a DistilBERT-based model fine-tuned to detect and reduce political bias in text. This model employs a novel approach combining diffusion techniques with knowledge distillation from a fine-tuned RoBERTa teacher model to achieve unbiased text representations.

## Training
The model was trained using a synthetic dataset of 658 statements, each rated for bias. These statements were generated by GPT-4, covering a spectrum from highly conservative to highly liberal. The training process involved 21 epochs with a learning rate of 6e-6. The model was optimized using a combination of cross-entropy and KL divergence losses, with temperature scaling to distill knowledge from the teacher model.
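
The exact loss weighting is not specified above; as a minimal sketch, assuming a standard soft-target distillation setup, the combined objective might look like the following (the `temperature` and `alpha` values here are illustrative assumptions, not the values used in training):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Hard-label cross-entropy against the bias ratings
    ce = F.cross_entropy(student_logits, labels)
    # KL divergence between temperature-softened student and teacher distributions
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Weighted combination of the classification and distillation objectives
    return alpha * ce + (1 - alpha) * kl
```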

### Novel Approach
The training leverages a novel approach where bias is treated as "noise" that the diffusion process aims to eliminate. By using knowledge distillation, the student model learns to align its predictions with the less biased outputs of the teacher model, effectively reducing bias in the resulting text.

## Evaluation
The model achieved the following performance metrics on the validation set:
- **Matthews Correlation Coefficient (MCC)**: 0.593
- **ROC AUC Score**: 0.924

These metrics indicate strong performance in identifying politically biased statements.
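
For reference, a minimal sketch of how these metrics can be computed with scikit-learn; the arrays below are placeholders standing in for the actual validation split, and the real label scheme depends on the dataset:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef, roc_auc_score

# Placeholder validation labels and predicted probabilities (illustrative only)
val_labels = np.array([0, 1, 1, 0, 1])
val_probs = np.array([0.2, 0.9, 0.7, 0.4, 0.8])   # probability of the positive class
val_preds = (val_probs >= 0.5).astype(int)        # hard predictions from a 0.5 threshold

print("MCC:", matthews_corrcoef(val_labels, val_preds))
print("ROC AUC:", roc_auc_score(val_labels, val_probs))
```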

## Usage
To use this model, you can load it with the Transformers library:

```python
from transformers import DistilBertForSequenceClassification, RobertaTokenizer

model = DistilBertForSequenceClassification.from_pretrained('cajcodes/DistilBERT-PoliticalBias')
tokenizer = RobertaTokenizer.from_pretrained('cajcodes/DistilBERT-PoliticalBias')
```

## Example
```python
import torch

sample_text = "We need to significantly increase social spending because it will reduce poverty and improve quality of life for all."
inputs = tokenizer(sample_text, return_tensors='pt')
outputs = model(**inputs)
predictions = torch.softmax(outputs.logits, dim=-1)
print(predictions)
```
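
To turn the probability vector into a discrete prediction, take the argmax and look it up in the model's label mapping; the exact label names depend on the repository's configuration:

```python
# Index of the highest-probability class for the single input example
predicted_class = torch.argmax(predictions, dim=-1).item()
# Label names come from the model config (may be generic, e.g. LABEL_0, if none were set)
print(model.config.id2label[predicted_class])
```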

## Dataset
The training dataset, `cajcodes/political-bias`, contains 658 statements with bias ratings generated by GPT-4. It is available for further analysis and model training.
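
Assuming the dataset is hosted on the Hugging Face Hub under the name given in the metadata above, it can be loaded with the `datasets` library:

```python
from datasets import load_dataset

# Loads all available splits of the bias-rating dataset
dataset = load_dataset("cajcodes/political-bias")
print(dataset)
```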


## Citation

If you use this model or dataset, please cite as follows:
```
@misc{cajcodes_distilbert_political_bias,
  author = {Christopher Jones},
  title = {DistilBERT-PoliticalBias: A Novel Approach to Detecting and Reducing Political Bias in Text},
  year = {2024},
  howpublished = {\url{https://huggingface.co/cajcodes/DistilBERT-PoliticalBias}},
}
```

---