# RoBERTa Tagalog Base (finetuned on COHFIE)
We finetuned RoBERTa Tagalog Base on a subset of the Corpus of Historical Filipino and Philippine English (COHFIE), which contains pure Filipino and code-switching (i.e., Tagalog-English) sentences. All model details, training setups, and corpus details can be found in our paper: *Automatic WordNet Construction using Word Sense Induction through Sentence Embeddings*.
## Training Data
Sorry, the corpus is not publicly available yet. Stay tuned!
## Intended uses & limitations
This model adapts pre-trained RoBERTa Tagalog Base to our target corpus, COHFIE, with the goal of clustering sentences. You can also finetune it on downstream NLP tasks in Filipino. This model may not be safe for production use since we have not examined it for biases; please use it with caution.
## How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("danjohnvelasco/roberta-tagalog-base-cohfie-v1")
model = AutoModel.from_pretrained("danjohnvelasco/roberta-tagalog-base-cohfie-v1")

text = "Replace me with any text in Filipino."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```
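Since the intended use is clustering sentences, you will typically want one fixed-size vector per sentence rather than per-token features. A minimal sketch of one common way to do this is mean pooling over token embeddings while masking out padding; note that mean pooling is an assumption here, not necessarily the pooling strategy used in the paper:

```python
# Sketch: sentence embeddings via mean pooling over token features.
# Mean pooling is an illustrative choice; the paper may pool differently.
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pool(last_hidden_state, attention_mask):
    # Average token embeddings, ignoring padded positions.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("danjohnvelasco/roberta-tagalog-base-cohfie-v1")
model = AutoModel.from_pretrained("danjohnvelasco/roberta-tagalog-base-cohfie-v1")

sentences = ["Magandang umaga!", "Kumusta ka na?"]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    output = model(**batch)

# Shape: (num_sentences, hidden_size); ready for a clustering algorithm.
embeddings = mean_pool(output.last_hidden_state, batch["attention_mask"])
```

The resulting `embeddings` tensor can be passed directly to, e.g., k-means or agglomerative clustering.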
## BibTeX entry and citation info
If you use this model, please cite our work:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.03251,
  doi       = {10.48550/ARXIV.2204.03251},
  url       = {https://arxiv.org/abs/2204.03251},
  author    = {Velasco, Dan John and Alba, Axel and Pelagio, Trisha Gail and Ramirez, Bryce Anthony and Cruz, Jan Christian Blaise and Cheng, Charibeth},
  keywords  = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title     = {Automatic WordNet Construction using Word Sense Induction through Sentence Embeddings},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```