---
language: sw
license: mit
datasets:
- cc100
---
# xlm-roberta-base-focus-extend-kiswahili
XLM-R adapted to Kiswahili using "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models".
Code: https://github.com/konstantinjdobler/focus
Paper: https://arxiv.org/abs/2305.14481
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("konstantindobler/xlm-roberta-base-focus-extend-kiswahili")
model = AutoModelForMaskedLM.from_pretrained("konstantindobler/xlm-roberta-base-focus-extend-kiswahili")
# Use model and tokenizer as usual
```
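For example, the model can be used for masked-token prediction via the `fill-mask` pipeline. This is a minimal sketch; the Kiswahili prompt below is an illustrative placeholder, not an example from the paper:
```python
from transformers import pipeline

# Build a fill-mask pipeline with the adapted model. XLM-R uses <mask>
# as its mask token.
fill_mask = pipeline(
    "fill-mask",
    model="konstantindobler/xlm-roberta-base-focus-extend-kiswahili",
)

# Hypothetical prompt: "Nairobi ni mji mkuu wa <mask>."
# ("Nairobi is the capital of <mask>.")
for prediction in fill_mask("Nairobi ni mji mkuu wa <mask>."):
    print(prediction["token_str"], prediction["score"])
```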
## Details
The model is based on [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) and was adapted to Kiswahili.
The original multilingual tokenizer was extended with the top 30k tokens of a language-specific Kiswahili tokenizer. The new embeddings were initialized with FOCUS.
The model was then trained on data from CC100 for 390k optimizer steps. More details and hyperparameters can be found [in the paper](https://arxiv.org/abs/2305.14481).
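For illustration, here is a minimal sketch of the vocabulary-extension step using standard `transformers` APIs. The `swahili_tokens` list is a hypothetical stand-in for the top 30k tokens of the language-specific tokenizer, and the FOCUS initialization itself is provided by the linked repository rather than shown here:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Hypothetical sample: tokens from a monolingual Kiswahili tokenizer that
# are not already in the multilingual vocabulary (the paper uses ~30k).
swahili_tokens = ["▁habari", "▁asante", "▁shule"]

num_added = tokenizer.add_tokens(swahili_tokens)
model.resize_token_embeddings(len(tokenizer))

# resize_token_embeddings initializes the new embedding rows randomly;
# FOCUS instead initializes each new embedding as a weighted combination
# of semantically similar original embeddings (see the linked repository).
```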
## Disclaimer
The web-scale dataset used for pretraining and tokenizer training (CC100) might contain personal and sensitive information.
Any such content needs to be assessed carefully before any real-world deployment of the model.
## Citation
Please cite FOCUS as follows:
```bibtex
@misc{dobler-demelo-2023-focus,
    title={FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models},
    author={Konstantin Dobler and Gerard de Melo},
    year={2023},
    eprint={2305.14481},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```