---
license: mit
language:
- en
library_name: transformers
---

# PoliticalBiasBERT


BERT fine-tuned on a large corpus of politically biased texts. Given a piece of text, it predicts whether the political leaning is left, center, or right.

Paper and repository coming soon.

## Usage
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

text = "your text here"

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bucketresearch/politicalBiasBERT")

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Class indices: [0] -> left, [1] -> center, [2] -> right
print(logits.softmax(dim=-1)[0].tolist())
```
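
If you want a single label rather than raw probabilities, taking the argmax over the three classes is enough. The helper below is a minimal sketch, not part of the model's official API; `classify` and `LABELS` are illustrative names, with the label order taken from the comment in the snippet above.

```py
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["left", "center", "right"]  # index order as documented above

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bucketresearch/politicalBiasBERT")

def classify(text):
    """Return (label, probability) for a single piece of text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    idx = int(probs.argmax())
    return LABELS[idx], float(probs[idx])

print(classify("your text here"))
```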
## References
```
@inproceedings{baly2020we,
  author    = {Baly, Ramy and Da San Martino, Giovanni and Glass, James and Nakov, Preslav},
  title     = {We Can Detect Your Bias: Predicting the Political Ideology of News Articles},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  series    = {EMNLP~'20},
  month     = {November},
  year      = {2020},
  pages     = {4982--4991},
  publisher = {Association for Computational Linguistics}
}

@article{bucket_bias2023,
  organization = {Bucket Research},
  title        = {Political Bias Classification using finetuned {BERT} model},
  year         = {2023}
}
```