RoBERTa (Large) fine-tuned on Donald Trump's speeches with populism tags

This model is released as part of "Identifying Fine-grained Forms of Populism in Political Discourse: A Case Study on Donald Trump’s Presidential Campaigns" (Chalkidis et al., 2025). The model was fine-tuned from RoBERTa large (https://huggingface.co/FacebookAI/roberta-large) using the Donald Trump Populism 2016 dataset (https://huggingface.co/datasets/coastalcph/populism-trump-2016).

Intended uses & limitations

The model is intended for academic research on the identification of fine-grained forms of populism in political discourse. A minimal usage sketch follows.
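The sketch below is not taken from the paper's released code; it assumes the checkpoint exposes a standard sequence-classification head whose id2label mapping in the model config carries the populism tags.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the label names (populism tags) are read
# from the model's config (id2label), so no manual mapping is needed.
classifier = pipeline(
    "text-classification",
    model="coastalcph/roberta-large-ft-trump-populism",
    tokenizer="coastalcph/roberta-large-ft-trump-populism",
    top_k=None,  # return scores for all tags, not just the top prediction
)

example = "The corrupt elites in Washington have betrayed the American people."
print(classifier(example))
```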

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a hedged fine-tuning sketch follows the list:

  • learning_rate: 0.0001
  • seed: 42
  • total_train_batch_size: 64
  • total_eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.05
  • n_epochs: 3
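
The following is a hedged reconstruction of the fine-tuning setup with the Hugging Face Trainer. The hyperparameters mirror the list above; the dataset column names ("text", "label") and the label handling are assumptions, not details taken from the released training script.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "FacebookAI/roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = load_dataset("coastalcph/populism-trump-2016")

def tokenize(batch):
    # Assumes the input text lives in a "text" column.
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

# Assumes a ClassLabel column named "label" holding the populism tags.
num_labels = tokenized["train"].features["label"].num_classes
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=num_labels
)

args = TrainingArguments(
    output_dir="roberta-large-ft-trump-populism",
    learning_rate=1e-4,
    per_device_train_batch_size=64,  # total batch size 64 on a single device
    per_device_eval_batch_size=64,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```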

Framework versions

  • Transformers 4.33.1
  • Pytorch 1.12.0+cu102
  • Datasets 2.14.5
  • Tokenizers 0.13.3

Citation Information

Ilias Chalkidis, Stephanie Brandl, and Paris Aslanidis. Identifying Fine-grained Forms of Populism in Political Discourse: A Case Study on Donald Trump’s Presidential Campaigns. arXiv preprint, 2025.

@misc{chalkidis-et-al-2025-populism,
    title = {Identifying Fine-grained Forms of Populism in Political Discourse: A Case Study on Donald Trump’s Presidential Campaigns},
    author = {Chalkidis, Ilias and Brandl, Stephanie and Aslanidis, Paris},
    year = {2025},
    eprint = {2507.19303},
    archivePrefix = {arXiv},
    primaryClass = {cs.CL},
    url = {https://arxiv.org/abs/2507.19303}
}