updating readme
README.md
---
language: yo
datasets:

---
# xlm-roberta-base-finetuned-yoruba
## Model description
**xlm-roberta-base-finetuned-yoruba** is a **Yoruba RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Yorùbá language texts. It provides **better performance** than XLM-RoBERTa on text classification and named entity recognition datasets.

Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on a Yorùbá corpus.
## Intended uses & limitations
#### How to use
You can use this model with the Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-yoruba')
>>> unmasker("Arẹmọ Phillip to jẹ ọkọ <mask> Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun")

[{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>',
  'score': 0.24844281375408173,
  'token': 44109,
  'token_str': '▁Queen'},
 {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ile Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>',
  'score': 0.1665010154247284,
  'token': 1350,
  'token_str': '▁ile'},
 {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>',
  'score': 0.07604238390922546,
  'token': 1053,
  'token_str': '▁ti'},
 {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ baba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>',
  'score': 0.06353845447301865,
  'token': 12878,
  'token_str': '▁baba'},
 {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Oba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>',
  'score': 0.03836742788553238,
  'token': 82879,
  'token_str': '▁Oba'}]
```
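You can also load the tokenizer and masked-language-model head directly instead of going through the *pipeline*, for example as a starting point for your own experiments. The snippet below is a minimal sketch using the generic `Auto*` classes; the shortened example sentence and the decoding step are illustrative and not part of the original card.

```python
# Minimal sketch: load the tokenizer and masked-LM head directly (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-finetuned-yoruba")
model = AutoModelForMaskedLM.from_pretrained("Davlan/xlm-roberta-base-finetuned-yoruba")

inputs = tokenizer("Arẹmọ Phillip to jẹ ọkọ <mask> Elizabeth", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # vocabulary scores for every position

# Take the highest-scoring token at the <mask> position.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))
```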
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), the [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3), [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.
## Training procedure
This model was trained on a single NVIDIA V100 GPU.
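The exact training script and hyperparameters are not listed in this card. As a rough guide, a comparable masked-language-model fine-tuning run could be set up with the Transformers `Trainer` as sketched below; the corpus file name, batch size, and epoch count are placeholders, not the values used for this model.

```python
# Hypothetical MLM fine-tuning setup; file name and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Plain-text Yorùbá corpus, one passage per line (path is illustrative).
dataset = load_dataset("text", data_files={"train": "yoruba_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking for the MLM objective (15% of tokens, as in RoBERTa).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-yoruba",
    per_device_train_batch_size=8,  # placeholder value for a single V100
    num_train_epochs=3,             # placeholder value
    save_steps=10_000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```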
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | yo_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 77.58 | 83.66
[BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | |
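NER scores like the MasakhaNER result above are typically obtained by adding a token-classification head on top of the model and fine-tuning it on the NER dataset. A minimal, hypothetical setup for such a downstream run is sketched below; the label list follows the MasakhaNER BIO scheme (PER, ORG, LOC, DATE), and the training loop itself is omitted.

```python
# Hypothetical downstream setup: token classification on MasakhaNER (Yorùbá).
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O",
          "B-PER", "I-PER",
          "B-ORG", "I-ORG",
          "B-LOC", "I-LOC",
          "B-DATE", "I-DATE"]

tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-finetuned-yoruba")
model = AutoModelForTokenClassification.from_pretrained(
    "Davlan/xlm-roberta-base-finetuned-yoruba",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
```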
### BibTeX entry and citation info
By David Adelani