updating Readme
README.md
CHANGED
@@ -12,12 +12,7 @@ Specifically, this model is a *bert-base-multilingual-cased* model that was fine
 ## Intended uses & limitations
 #### How to use
 You can use this model with Transformers *pipeline* for masked token prediction.
-```
-from transformers import pipeline
->>> from transformers import pipeline
->>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-yoruba')
->>> unmasker("Arẹmọ Phillip to jẹ ọkọ [MASK] Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun")
-```
+
 #### Limitations and bias
 This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
 ## Training data
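For context on the fill-mask example this commit touches: a Transformers `fill-mask` pipeline returns a list of candidate dicts, each carrying (among other fields) a `token_str` and a `score`. A minimal sketch of selecting the top prediction — the candidate values below are made-up placeholders, not real model output, so no model download is needed:

```python
# Hypothetical fill-mask output. The real pipeline result also includes
# "token" and "sequence" fields for each candidate.
candidates = [
    {"token_str": "ayaba", "score": 0.42},
    {"token_str": "oba", "score": 0.31},
    {"token_str": "iya", "score": 0.08},
]

def best_prediction(candidates):
    """Return the token string of the highest-scoring candidate."""
    return max(candidates, key=lambda c: c["score"])["token_str"]

print(best_prediction(candidates))  # -> ayaba
```

With the real model, `candidates` would be the value returned by `unmasker(...)` in the README's example.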