Tetun BERT model
A fine-tuned version of xlm-roberta-large, trained on Tetun data with a masked language modelling objective.
Tetun data used: the MADLAD tet clean split (~40k documents).
Trained for 10 epochs using hyperparameters from the MasakhaNER paper (learning rate 5e-5, etc.).
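
The model can be used directly for masked-token prediction. Below is a minimal usage sketch with the Hugging Face transformers fill-mask pipeline; the repository id is a placeholder (substitute this model's actual Hub id), and the example sentence is only illustrative:

```python
from transformers import pipeline

# Placeholder repository id -- replace with this model's actual Hub id.
fill_mask = pipeline("fill-mask", model="your-username/tetun-bert")

# xlm-roberta-large uses <mask> as its mask token.
# Example Tetun sentence (roughly: "Tetun is the official language in <mask>.").
predictions = fill_mask("Tetun mak lian ofisiál iha <mask>.")
for p in predictions:
    print(p["token_str"], p["score"])
```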
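
For reference, here is a hedged sketch of the masked language modelling fine-tune under the stated hyperparameters (10 epochs, learning rate 5e-5). The local data file name, batch size, sequence length, and masking probability are assumptions not stated on this card; the MADLAD tet clean documents are assumed to have been exported to a local text file, one document per line:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large")

# Hypothetical filename: assumes the MADLAD tet clean split was
# exported locally, one document per line.
dataset = load_dataset("text", data_files="madlad_tet_clean.txt", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Standard 15% token masking for the MLM objective (assumed; not stated).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="tetun-bert",
    num_train_epochs=10,            # from the card
    learning_rate=5e-5,             # from the card
    per_device_train_batch_size=8,  # assumption; not stated on the card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```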