---
language:
  - en
metrics:
  - accuracy
pipeline_tag: text-classification
---

# PropagandaDetection

The model is a Transformer network based on a pre-trained DistilBERT model (distilbert-base-uncased), fine-tuned on the SemEval-2023 Task 3 training dataset for the propaganda detection task. Fine-tuning uses the following hyperparameters: batch size 16, learning rate 2e-5, the AdamW optimizer, and 4 epochs. Tests give an accuracy of around 90%.

## References

@inproceedings{bangerter2023unisa,
  title={Unisa at SemEval-2023 task 3: a shap-based method for propaganda detection},
  author={Bangerter, Micaela and Fenza, Giuseppe and Gallo, Mariacristina and Loia, Vincenzo and Volpe, Alberto and De Maio, Carmen and Stanzione, Claudio},
  booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)},
  pages={885--891},
  year={2023}
}