---
language:
- en
metrics:
- f1
---
Model description
An xlm-roberta-large model fine-tuned on parliamentary speeches (in English) labeled as:
1: contains a climate-related illiberal frame
0: does not contain a climate-related illiberal frame
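A minimal inference sketch for this binary classifier is shown below. The repository ID is a placeholder (the card does not state the Hub ID), so replace it with the actual model path; the example sentence and the label mapping comment are illustrative assumptions.

```python
from transformers import pipeline

# Placeholder repository ID; replace with this model's actual Hub ID or a local path.
classifier = pipeline(
    "text-classification",
    model="your-org/xlm-r-climate-illiberal-frame",
)

speech = "We must resist the climate agenda imposed on our nation by foreign elites."
# Truncate to the 256-token maximum used during fine-tuning.
result = classifier(speech, truncation=True, max_length=256)
print(result)  # e.g. [{'label': 'LABEL_1', 'score': ...}], where label 1 = contains the frame
```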
Fine-tuning procedure
This model was fine-tuned with the following key hyperparameters:
Number of Training Epochs: 5
Batch Size: 8
Learning Rate: 5e-06
Maximum Sequence Length: 256
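The sketch below shows one way the fine-tuning could be reproduced with the hyperparameters above. The data loading is an assumption: a `train.csv` file with `text` and `label` (0/1) columns stands in for the original parliamentary speech data, which is not distributed with this card.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large", num_labels=2
)

# Placeholder dataset: a CSV with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv"})

def tokenize(batch):
    # Pad/truncate to the 256-token maximum sequence length.
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=256
    )

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="xlmr-climate-illiberal-frame",
    num_train_epochs=5,
    per_device_train_batch_size=8,
    learning_rate=5e-6,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"])
trainer.train()
```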
Model performance
The model was evaluated on an independent test set of 969 samples.
The weighted F1 score on this set is 0.76.
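For reference, the weighted F1 score can be computed as follows; `y_true` and `y_pred` are placeholders for the gold labels of the 969 test samples and the model's predictions.

```python
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 0]  # placeholder gold labels
y_pred = [1, 0, 0, 0]  # placeholder model predictions

# "weighted" averages the per-class F1 scores by class support.
print(f1_score(y_true, y_pred, average="weighted"))
```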