# cardiffnlp/roberta-large-tweet-topic-single-2020
This model is a fine-tuned version of roberta-large on the tweet_topic_single dataset. It is fine-tuned on the train_2020 split and validated on the test_2021 split of tweet_topic. The fine-tuning script can be found here. The model achieves the following results on the test_2021 set:
- F1 (micro): 0.8789131718842291
- F1 (macro): 0.7056499344872201
- Accuracy: 0.8789131718842291
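Note that F1 (micro) and accuracy are identical above. This is not a copy-paste error: in single-label classification, every misclassified example counts as exactly one false positive (for the predicted class) and one false negative (for the true class), so micro-averaged precision, recall, and F1 all collapse to accuracy. A minimal sketch with made-up predictions (the labels below are illustrative, not model output):

```python
# Hypothetical gold labels and predictions for illustration only
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 2, 2, 2, 1]

# Accuracy: fraction of exact matches
tp = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = tp / len(y_true)

# Micro-F1: pool TP/FP/FN across all classes. In single-label
# classification each error is one FP and one FN, so FP == FN
# and micro-F1 reduces to accuracy.
fp = len(y_true) - tp
fn = len(y_true) - tp
micro_f1 = 2 * tp / (2 * tp + fp + fn)

print(accuracy, micro_f1)  # both 0.8
```

The macro-F1 reported above differs because it averages per-class F1 scores without weighting by class frequency, so rare topics pull it down.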
## Usage
```python
from transformers import pipeline

pipe = pipeline("text-classification", "cardiffnlp/roberta-large-tweet-topic-single-2020")
topic = pipe("Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US.")
print(topic)
```
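The `text-classification` pipeline returns a list with one dict per input text, each containing a `label` and a `score`. A hedged sketch of handling that output, assuming a result shaped like the example below (the label string and score are hypothetical, not actual model output):

```python
# Hypothetical pipeline output for illustration; actual label names come
# from the model's config (tweet_topic_single defines six topic classes)
output = [{"label": "sports_&_gaming", "score": 0.98}]

# Extract the predicted topic and its confidence from the first (and
# only) result in the list
label = output[0]["label"]
score = output[0]["score"]
print(f"{label}: {score:.2f}")
```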
## Reference
```bibtex
@inproceedings{dimosthenis-etal-2022-twitter,
    title = "{T}witter {T}opic {C}lassification",
    author = "Antypas, Dimosthenis and
      Ushio, Asahi and
      Camacho-Collados, Jose and
      Neves, Leonardo and
      Silva, Vitor and
      Barbieri, Francesco",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics"
}
```