---
license: mit
language:
- en
library_name: transformers
---
|
|
|
# PoliticalBiasBERT
|
|
|
|
|
|
BERT fine-tuned on a large collection of politically biased texts. Given an input text, the model classifies its political leaning as left, center, or right.
|
|
|
Paper and repository coming soon.
|
## Usage
|
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

text = "your text here"

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bucketresearch/politicalBiasBERT")

inputs = tokenizer(text, return_tensors="pt")

# a dummy label is passed only so the model also returns a loss;
# it is not required for plain inference
labels = torch.tensor([0])
outputs = model(**inputs, labels=labels)
loss, logits = outputs[:2]

# class indices: 0 -> left, 1 -> center, 2 -> right
print(logits.softmax(dim=-1)[0].tolist())
```
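
For plain inference, the snippet above can be wrapped in a small helper that skips the dummy label and returns a human-readable prediction. This is a minimal sketch, not an official API: `classify_bias` is a hypothetical name, and the left/center/right class ordering is the one documented in the comments above.

```py
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical convenience wrapper (not part of the model release).
def classify_bias(text: str):
    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    model = AutoModelForSequenceClassification.from_pretrained("bucketresearch/politicalBiasBERT")
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():  # no gradients needed for inference
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    label_names = ["left", "center", "right"]  # ordering from the snippet above
    idx = int(probs.argmax())
    return label_names[idx], float(probs[idx])

label, confidence = classify_bias("your text here")
print(label, confidence)
```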
|
## References
|
```
@inproceedings{baly2020we,
  author    = {Baly, Ramy and Da San Martino, Giovanni and Glass, James and Nakov, Preslav},
  title     = {We Can Detect Your Bias: Predicting the Political Ideology of News Articles},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  series    = {EMNLP~'20},
  month     = {November},
  year      = {2020},
  pages     = {4982--4991},
  publisher = {Association for Computational Linguistics}
}

@article{bucket_bias2023,
  organization = {Bucket Research},
  title        = {Political Bias Classification using finetuned BERT model},
  year         = {2023}
}
```