Column | Type | Range
-|-|-
repo_id | string | lengths 4 to 122
author | string (nullable ⌀) | lengths 2 to 38
model_type | string (nullable ⌀) | lengths 2 to 33
files_per_repo | int64 | 2 to 39k
downloads_30d | int64 | 0 to 33.7M
library | string (nullable ⌀) | lengths 2 to 37
likes | int64 | 0 to 4.87k
pipeline | string (nullable ⌀) | lengths 5 to 30
pytorch | bool | 2 classes
tensorflow | bool | 2 classes
jax | bool | 2 classes
license | string (nullable ⌀) | lengths 2 to 33
languages | string (nullable ⌀) | lengths 2 to 1.63k
datasets | string (nullable ⌀) | lengths 2 to 2.58k
co2 | string (nullable ⌀) | lengths 6 to 258
prs_count | int64 | 0 to 125
prs_open | int64 | 0 to 120
prs_merged | int64 | 0 to 46
prs_closed | int64 | 0 to 34
discussions_count | int64 | 0 to 218
discussions_open | int64 | 0 to 148
discussions_closed | int64 | 0 to 70
tags | string | lengths 2 to 513
has_model_index | bool | 2 classes
has_metadata | bool | 2 classes
has_text | bool | 1 class
text_length | int64 | 201 to 598k
readme | string | lengths 0 to 598k
repo_id: Davlan/bert-base-multilingual-cased-finetuned-amharic | author: Davlan | model_type: bert | files_per_repo: 8 | downloads_30d: 60 | library: transformers | likes: 0 | pipeline: fill-mask | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,647 | readme:
---
language: am
datasets:
---
# bert-base-multilingual-cased-finetuned-amharic
## Model description
**bert-base-multilingual-cased-finetuned-amharic** is an **Amharic BERT** model obtained by replacing the mBERT vocabulary with an Amharic vocabulary (Amharic is not covered by mBERT) and then fine-tuning the **bert-base-multilingual-cased** model on Amharic language texts. It provides **better performance** than multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an Amharic corpus using an Amharic vocabulary.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-amharic')
>>> unmasker("የአሜሪካ የአፍሪካ ቀንድ ልዩ መልዕክተኛ ጄፈሪ ፌልትማን በአራት አገራት የሚያደጉትን [MASK] መጀመራቸውን የአሜሪካ የውጪ ጉዳይ ሚንስቴር አስታወቀ።")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Amharic CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | am_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 0.0 | 60.89
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/bert-base-multilingual-cased-finetuned-hausa | author: Davlan | model_type: bert | files_per_repo: 10 | downloads_30d: 50 | library: transformers | likes: 0 | pipeline: fill-mask | pytorch: true | tensorflow: true | jax: true | license: null | languages: null | datasets: null | co2: null | prs_count: 1 | prs_open: 0 | prs_merged: 1 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 2,620 | readme:
---
language: ha
datasets:
---
# bert-base-multilingual-cased-finetuned-hausa
## Model description
**bert-base-multilingual-cased-finetuned-hausa** is a **Hausa BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Hausa language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Hausa corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-hausa')
>>> unmasker("Shugaban [MASK] Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci")
[{'sequence':
'[CLS] Shugaban Nigeria Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]',
'score': 0.9762618541717529,
'token': 22045,
'token_str': 'Nigeria'},
{'sequence': '[CLS] Shugaban Ka Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.007239189930260181,
'token': 25444,
'token_str': 'Ka'},
{'sequence': '[CLS] Shugaban, Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.001990817254409194,
'token': 117,
'token_str': ','},
{'sequence': '[CLS] Shugaban Ghana Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.001566368737258017,
'token': 28682,
'token_str': 'Ghana'},
{'sequence': '[CLS] Shugabanmu Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.0009375187801197171,
'token': 11717,
'token_str': '##mu'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Hausa CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | ha_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.65 | 91.31
[VOA Hausa Textclass](https://huggingface.co/datasets/hausa_voa_topics) | 84.76 | 90.98
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/bert-base-multilingual-cased-finetuned-igbo | author: Davlan | model_type: bert | files_per_repo: 8 | downloads_30d: 33 | library: transformers | likes: 0 | pipeline: fill-mask | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,572 | readme:
---
language: ig
datasets:
---
# bert-base-multilingual-cased-finetuned-igbo
## Model description
**bert-base-multilingual-cased-finetuned-igbo** is an **Igbo BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Igbo language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Igbo corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-igbo')
>>> unmasker("Reno Omokri na Gọọmentị [MASK] enweghị ihe ha ga-eji hiwe ya bụ mmachi.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + OPUS CC-Align + [IGBO NLP Corpus](https://github.com/IgnatiusEzeani/IGBONLP) +[Igbo CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | ig_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 85.11 | 86.75
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda | author: Davlan | model_type: bert | files_per_repo: 8 | downloads_30d: 26 | library: transformers | likes: 0 | pipeline: fill-mask | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,570 | readme:
---
language: rw
datasets:
---
# bert-base-multilingual-cased-finetuned-kinyarwanda
## Model description
**bert-base-multilingual-cased-finetuned-kinyarwanda** is a **Kinyarwanda BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Kinyarwanda language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Kinyarwanda corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda')
>>> unmasker("Twabonye ko igihe mu [MASK] hazaba hari ikirango abantu bakunze")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [KIRNEWS](https://github.com/Andrews2017/KINNEWS-and-KIRNEWS-Corpus) + [BBC Gahuza](https://www.bbc.com/gahuza)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | rw_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 72.20 | 77.57
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/bert-base-multilingual-cased-finetuned-luganda | author: Davlan | model_type: bert | files_per_repo: 8 | downloads_30d: 2 | library: transformers | likes: 0 | pipeline: fill-mask | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,611 | readme:
---
language: lg
datasets:
---
# bert-base-multilingual-cased-finetuned-luganda
## Model description
**bert-base-multilingual-cased-finetuned-luganda** is a **Luganda BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Luganda language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Luganda corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-luganda')
>>> unmasker("Ffe tulwanyisa abo abaagala okutabangula [MASK], Kimuli bwe yategeezezza.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [BUKKEDDE](https://github.com/masakhane-io/masakhane-ner/tree/main/text_by_language/luganda) +[Luganda CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | lg_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 80.36 | 84.70
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/bert-base-multilingual-cased-finetuned-luo | author: Davlan | model_type: bert | files_per_repo: 8 | downloads_30d: 2 | library: transformers | likes: 0 | pipeline: fill-mask | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,402 | readme:
---
language: luo
datasets:
---
# bert-base-multilingual-cased-finetuned-luo
## Model description
**bert-base-multilingual-cased-finetuned-luo** is a **Luo BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Luo language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Luo corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-luo')
>>> unmasker("Obila ma Changamwe [MASK] pedho achije angwen mag njore")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | luo_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 74.22 | 75.59
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/bert-base-multilingual-cased-finetuned-naija | author: Davlan | model_type: bert | files_per_repo: 8 | downloads_30d: 32 | library: transformers | likes: 0 | pipeline: fill-mask | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,537 | readme:
---
language: pcm
datasets:
---
# bert-base-multilingual-cased-finetuned-naija
## Model description
**bert-base-multilingual-cased-finetuned-naija** is a **Nigerian-Pidgin BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Nigerian-Pidgin language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Nigerian-Pidgin corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-naija')
>>> unmasker("Another attack on ambulance happen for Koforidua in March [MASK] year where robbers kill Ambulance driver")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | pcm_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.23 | 89.95
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/bert-base-multilingual-cased-finetuned-swahili | author: Davlan | model_type: bert | files_per_repo: 9 | downloads_30d: 96 | library: transformers | likes: 2 | pipeline: fill-mask | pytorch: true | tensorflow: true | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 1 | prs_open: 0 | prs_merged: 1 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 2,492 | readme:
---
language: sw
datasets:
---
# bert-base-multilingual-cased-finetuned-swahili
## Model description
**bert-base-multilingual-cased-finetuned-swahili** is a **Swahili BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Swahili language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Swahili corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko [MASK] kwamba "hakuna uhalifu ulitendwa")
[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
'score': 0.31642526388168335,
'token': 10728,
'token_str': 'Paris'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Rwanda kwamba hakuna uhalifu ulitendwa',
'score': 0.15753623843193054,
'token': 57557,
'token_str': 'Rwanda'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Burundi kwamba hakuna uhalifu ulitendwa',
'score': 0.07211585342884064,
'token': 57824,
'token_str': 'Burundi'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
'score': 0.029844321310520172,
'token': 10688,
'token_str': 'France'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Senegal kwamba hakuna uhalifu ulitendwa',
'score': 0.0265930388122797,
'token': 38052,
'token_str': 'Senegal'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | sw_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.80 | 89.36
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/bert-base-multilingual-cased-finetuned-wolof | author: Davlan | model_type: bert | files_per_repo: 8 | downloads_30d: 2 | library: transformers | likes: 0 | pipeline: fill-mask | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,551 | readme:
---
language: wo
datasets:
---
# bert-base-multilingual-cased-finetuned-wolof
## Model description
**bert-base-multilingual-cased-finetuned-wolof** is a **Wolof BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Wolof language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Wolof corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-wolof')
>>> unmasker("Màkki Sàll feeñal na ay xalaatam ci mbir yu am solo yu soxal [MASK] ak Afrik.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Bible OT](http://biblewolof.com/) + [OPUS](https://opus.nlpl.eu/) + News Corpora (Lu Defu Waxu, Saabal, and Wolof Online)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | wo_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 64.52 | 69.43
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/bert-base-multilingual-cased-finetuned-yoruba | author: Davlan | model_type: bert | files_per_repo: 10 | downloads_30d: 46 | library: transformers | likes: 0 | pipeline: fill-mask | pytorch: true | tensorflow: true | jax: true | license: null | languages: null | datasets: null | co2: null | prs_count: 1 | prs_open: 0 | prs_merged: 1 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 2,868 | readme:
---
language: yo
datasets:
---
# bert-base-multilingual-cased-finetuned-yoruba
## Model description
**bert-base-multilingual-cased-finetuned-yoruba** is a **Yoruba BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Yorùbá language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Yorùbá corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-yoruba')
>>> unmasker("Arẹmọ Phillip to jẹ ọkọ [MASK] Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun")
[{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Mary Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.1738305538892746,
'token': 12176,
'token_str': 'Mary'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.16382873058319092,
'token': 13704,
'token_str': 'Queen'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.13272495567798615,
'token': 14382,
'token_str': 'ti'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ King Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.12823280692100525,
'token': 11515,
'token_str': 'King'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Lady Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.07841219753026962,
'token': 14005,
'token_str': 'Lady'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), the [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3), [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | yo_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 78.97 | 82.58
[BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | 75.13 | 79.11
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/bert-base-multilingual-cased-masakhaner | author: Davlan | model_type: bert | files_per_repo: 9 | downloads_30d: 122 | library: transformers | likes: 0 | pipeline: token-classification | pytorch: true | tensorflow: true | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 1 | prs_open: 0 | prs_merged: 1 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 4,564 | readme:
---
language:
- ha
- ig
- rw
- lg
- luo
- pcm
- sw
- wo
- yo
- multilingual
datasets:
- masakhaner
---
# bert-base-multilingual-cased-masakhaner
## Model description
**bert-base-multilingual-cased-masakhaner** is the first **Named Entity Recognition** model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned mBERT base model. It achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/bert-base-multilingual-cased-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/bert-base-multilingual-cased-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
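As a sketch of how these B-/I- tags can be merged into whole entity spans, the Transformers *pipeline* accepts an `aggregation_strategy` argument (in recent library versions); the snippet below is an illustration rather than part of the original card:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "Davlan/bert-base-multilingual-cased-masakhaner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# "simple" aggregation groups consecutive B-/I- tokens of the same type into one entity span
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp("Emir of Kano turban Zhang wey don spend 18 years for Nigeria"))
```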
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
hau |88.66
ibo |85.72
kin |71.94
lug |81.73
luo |77.39
pcm |88.96
swa |88.23
wol |66.27
yor |80.09
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
repo_id: Davlan/bert-base-multilingual-cased-ner-hrl | author: Davlan | model_type: bert | files_per_repo: 9 | downloads_30d: 495,029 | library: transformers | likes: 25 | pipeline: token-classification | pytorch: true | tensorflow: true | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 1 | prs_open: 0 | prs_merged: 1 | prs_closed: 0 | discussions_count: 1 | discussions_open: 1 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 3,075 | readme:
---
language:
- ar
- de
- en
- es
- fr
- it
- lv
- nl
- pt
- zh
- multilingual
---
# bert-base-multilingual-cased-ner-hrl
## Model description
**bert-base-multilingual-cased-ner-hrl** is a **Named Entity Recognition** model for 10 high resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned mBERT base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an aggregation of 10 high-resourced languages
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/bert-base-multilingual-cased-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/bert-base-multilingual-cased-ner-hrl")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
The training data for the 10 languages are from:
Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
German | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
English | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
Spanish | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
French | [Europeana Newspapers](https://github.com/EuropeanaNewspapers/ner-corpora/tree/master/enp_FR.bnf.bio)
Italian | [Italian I-CAB](https://ontotext.fbk.eu/icab.html)
Latvian | [Latvian NER](https://github.com/LUMII-AILab/FullStack/tree/master/NamedEntities)
Dutch | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
Portuguese |[Paramopama + Second Harem](https://github.com/davidsbatista/NER-datasets/tree/master/Portuguese)
Chinese | [MSRA](https://huggingface.co/datasets/msra_ner)
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on NVIDIA V100 GPU with recommended hyperparameters from HuggingFace code.
repo_id: Davlan/byt5-base-eng-yor-mt | author: Davlan | model_type: t5 | files_per_repo: 7 | downloads_30d: 4 | library: transformers | likes: 1 | pipeline: text2text-generation | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,147 | readme:
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# byt5-base-eng-yor-mt
## Model description
**byt5-base-eng-yor-mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned byt5-base model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *byt5-base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
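The card does not show usage code; the following is a minimal sketch, assuming the standard Transformers seq2seq API and that the repository ships the fine-tuned weights and tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# assumption: the checkpoint loads through the Auto classes like any seq2seq model
model_id = "Davlan/byt5-base-eng-yor-mt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Where are you?", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```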
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning byt5-base achieves **12.23 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 9.82
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/byt5-base-yor-eng-mt | author: Davlan | model_type: t5 | files_per_repo: 7 | downloads_30d: 1 | library: transformers | likes: 1 | pipeline: text2text-generation | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,144 | readme:
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# byt5-base-yor-eng-mt
## Model description
**byt5-base-yor-eng-mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned byt5-base model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is a *byt5-base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
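A minimal usage sketch for this direction, under the same assumptions as the English-to-Yorùbá card (standard seq2seq loading; not verified against this checkpoint):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Davlan/byt5-base-yor-eng-mt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Yorùbá input sentence
inputs = tokenizer("Akọni ajìjàgbara obìnrin tó sun àtìmalé torí owó orí", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```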
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning byt5-base achieves 14.05 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 15.57
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/distilbert-base-multilingual-cased-masakhaner | author: Davlan | model_type: distilbert | files_per_repo: 9 | downloads_30d: 6 | library: transformers | likes: 1 | pipeline: token-classification | pytorch: true | tensorflow: true | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 1 | prs_open: 0 | prs_merged: 1 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 4,519 | readme:
---
language:
- ha
- ig
- rw
- lg
- luo
- pcm
- sw
- wo
- yo
- multilingual
datasets:
- masakhaner
---
# distilbert-base-multilingual-cased-masakhaner
## Model description
**distilbert-base-multilingual-cased-masakhaner** is the first **Named Entity Recognition** model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned DistilBERT base model. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *distilbert-base-multilingual-cased* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/distilbert-base-multilingual-cased-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/distilbert-base-multilingual-cased-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
hau |88.88
ibo |84.87
kin |74.19
lug |78.43
luo |73.32
pcm |87.98
swa |86.20
wol |64.67
yor |78.10
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
repo_id: Davlan/distilbert-base-multilingual-cased-ner-hrl | author: Davlan | model_type: distilbert | files_per_repo: 9 | downloads_30d: 1,763 | library: transformers | likes: 8 | pipeline: token-classification | pytorch: true | tensorflow: true | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 1 | prs_open: 0 | prs_merged: 1 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 3,113 | readme:
---
language:
- ar
- de
- en
- es
- fr
- it
- lv
- nl
- pt
- zh
- multilingual
---
# distilbert-base-multilingual-cased-ner-hrl
## Model description
**distilbert-base-multilingual-cased-ner-hrl** is a **Named Entity Recognition** model for 10 high resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned DistilBERT base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *distilbert-base-multilingual-cased* model that was fine-tuned on an aggregation of 10 high-resourced languages
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/distilbert-base-multilingual-cased-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/distilbert-base-multilingual-cased-ner-hrl")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
The training data for the 10 languages are from:
Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
German | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
English | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
Spanish | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
French | [Europeana Newspapers](https://github.com/EuropeanaNewspapers/ner-corpora/tree/master/enp_FR.bnf.bio)
Italian | [Italian I-CAB](https://ontotext.fbk.eu/icab.html)
Latvian | [Latvian NER](https://github.com/LUMII-AILab/FullStack/tree/master/NamedEntities)
Dutch | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
Portuguese |[Paramopama + Second Harem](https://github.com/davidsbatista/NER-datasets/tree/master/Portuguese)
Chinese | [MSRA](https://huggingface.co/datasets/msra_ner)
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on NVIDIA V100 GPU with recommended hyperparameters from HuggingFace code.
repo_id: Davlan/m2m100_418M-eng-yor-mt | author: Davlan | model_type: m2m_100 | files_per_repo: 9 | downloads_30d: 2 | library: transformers | likes: 0 | pipeline: text2text-generation | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,173 | readme:
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# m2m100_418M-eng-yor-mt
## Model description
**m2m100_418M-eng-yor-mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
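The card includes no usage example; below is a minimal sketch assuming the standard M2M100 API and its built-in language codes ("en" for English, "yo" for Yorùbá):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "Davlan/m2m100_418M-eng-yor-mt"
model = M2M100ForConditionalGeneration.from_pretrained(model_id)
tokenizer = M2M100Tokenizer.from_pretrained(model_id)

tokenizer.src_lang = "en"  # source language: English
inputs = tokenizer("Where are you?", return_tensors="pt")
# force the decoder to start with the Yorùbá language token
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("yo"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```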
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **13.39 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 9.82
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/m2m100_418M-yor-eng-mt | author: Davlan | model_type: m2m_100 | files_per_repo: 9 | downloads_30d: 11 | library: transformers | likes: 0 | pipeline: text2text-generation | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,174 | readme:
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# m2m100_418M-yor-eng-mt
## Model description
**m2m100_418M-yor-eng-mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
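A minimal sketch for the reverse direction, under the same assumption that the standard M2M100 language codes apply ("yo" for Yorùbá, "en" for English):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "Davlan/m2m100_418M-yor-eng-mt"
model = M2M100ForConditionalGeneration.from_pretrained(model_id)
tokenizer = M2M100Tokenizer.from_pretrained(model_id)

tokenizer.src_lang = "yo"  # source language: Yorùbá
inputs = tokenizer("Akọni ajìjàgbara obìnrin tó sun àtìmalé torí owó orí", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```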
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **16.76 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 15.57
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/mT5_base_yoruba_adr | author: Davlan | model_type: mt5 | files_per_repo: 9 | downloads_30d: 1 | library: transformers | likes: 0 | pipeline: text2text-generation | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,757 | readme:
---
language: yo
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mT5_base_yoruba_adr
## Model description
**mT5_base_yoruba_adr** is an **automatic diacritics restoration** model for the Yorùbá language based on a fine-tuned mT5-base model. It achieves **state-of-the-art performance** for adding the correct diacritics or tonal marks to Yorùbá texts.
Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for ADR.
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("Davlan/mT5_base_yoruba_adr")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
# input: Yorùbá text without diacritics
input_string = "Akoni ajijagbara obinrin to sun atimale tori owo ori"
inputs = tokenizer.encode(input_string, return_tensors="pt")
generated_tokens = model.generate(inputs)
results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 Yorùbá corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
64.63 BLEU on [Global Voices test set](https://arxiv.org/abs/2003.10564)
70.27 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)
### BibTeX entry and citation info
By Jesujoba Alabi and David Adelani
```
```
repo_id: Davlan/mbart50-large-eng-yor-mt | author: Davlan | model_type: mbart | files_per_repo: 9 | downloads_30d: 1 | library: transformers | likes: 0 | pipeline: text2text-generation | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,379 | readme:
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mbart50-large-eng-yor-mt
## Model description
**mbart50-large-eng-yor-mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned facebook/mbart-large-50 model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *mbart-large-50* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). The model was trained using Swahili (sw_KE) as the language code, since the pre-trained model does not support Yorùbá; you therefore need to use sw_KE as the Yorùbá language code when evaluating the model.
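To make the sw_KE note concrete, here is a minimal sketch (an illustration, not part of the original card) using the standard MBART-50 API, with Yorùbá represented by the Swahili code:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "Davlan/mbart50-large-eng-yor-mt"
model = MBartForConditionalGeneration.from_pretrained(model_id)
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")

inputs = tokenizer("Where are you?", return_tensors="pt")
# sw_KE stands in for Yorùbá on the target side, as explained above
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["sw_KE"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```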
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning mbart50-large achieves **13.39 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 9.82
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/mbart50-large-yor-eng-mt | author: Davlan | model_type: mbart | files_per_repo: 9 | downloads_30d: 2 | library: transformers | likes: 0 | pipeline: text2text-generation | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,380 | readme:
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mbart50-large-yor-eng-mt
## Model description
**mbart50-large-yor-eng-mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned facebook/mbart-large-50 model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is a *mbart-large-50* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). The model was trained using Swahili (sw_KE) as the language code, since the pre-trained model does not support Yorùbá; you therefore need to use sw_KE as the Yorùbá language code when evaluating the model.
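For this direction the Swahili code marks the source side; a minimal sketch under the same assumptions as the English-to-Yorùbá card:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "Davlan/mbart50-large-yor-eng-mt"
model = MBartForConditionalGeneration.from_pretrained(model_id)
# sw_KE stands in for Yorùbá on the source side, as explained above
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="sw_KE")

inputs = tokenizer("Akọni ajìjàgbara obìnrin tó sun àtìmalé torí owó orí", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```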
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning mbart50-large achieves **15.88 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 15.57
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/mt5_base_eng_yor_mt | author: Davlan | model_type: mt5 | files_per_repo: 9 | downloads_30d: 2 | library: transformers | likes: 0 | pipeline: text2text-generation | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,705 | readme:
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mT5_base_eng_yor_mt
## Model description
**mT5_base_eng_yor_mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned mT5-base model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for MT.
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("Davlan/mt5_base_eng_yor_mt")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
input_string = "Where are you?"
inputs = tokenizer.encode(input_string, return_tensors="pt")
generated_tokens = model.generate(inputs)
results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
9.82 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/mt5_base_yor_eng_mt | author: Davlan | model_type: mt5 | files_per_repo: 9 | downloads_30d: 2 | library: transformers | likes: 0 | pipeline: text2text-generation | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,751 | readme:
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mT5_base_yor_eng_mt
## Model description
**mT5_base_yor_eng_mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned mT5-base model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for MT.
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("Davlan/mt5_base_yor_eng_mt")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
input_string = "Akọni ajìjàgbara obìnrin tó sun àtìmalé torí owó orí"
inputs = tokenizer.encode(input_string, return_tensors="pt")
generated_tokens = model.generate(inputs)
results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 Yorùbá corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
15.57 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/naija-twitter-sentiment-afriberta-large | author: Davlan | model_type: xlm-roberta | files_per_repo: 10 | downloads_30d: 148 | library: transformers | likes: 2 | pipeline: text-classification | pytorch: true | tensorflow: true | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 1 | prs_open: 0 | prs_merged: 1 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 2,710 | readme:
---
language:
- hau
- ibo
- pcm
- yor
- multilingual
---
# naija-twitter-sentiment-afriberta-large
## Model description
**naija-twitter-sentiment-afriberta-large** is the first multilingual twitter **sentiment classification** model for four (4) Nigerian languages (Hausa, Igbo, Nigerian Pidgin, and Yorùbá) based on a fine-tuned castorini/afriberta_large model.
It achieves **state-of-the-art performance** for the twitter sentiment classification task, trained on the [NaijaSenti corpus](https://github.com/hausanlp/NaijaSenti).
The model has been trained to classify tweets into 3 sentiment classes: negative, neutral and positive.
Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of 4 Nigerian language datasets obtained from [NaijaSenti](https://github.com/hausanlp/NaijaSenti) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for Sentiment Classification.
```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
MODEL = "Davlan/naija-twitter-sentiment-afriberta-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
text = "I like you"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
id2label = {0:"positive", 1:"neutral", 2:"negative"}
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
#### Limitations and bias
This model is limited by its training dataset and domain, i.e. Twitter. This may not generalize well for all use cases in different domains.
## Training procedure
This model was trained on a single Nvidia RTX 2080 GPU with recommended hyperparameters from the [original NaijaSenti paper](https://arxiv.org/abs/2201.08277).
## Eval results on Test set (F-score), average over 5 runs.
language|F1-score
-|-
hau |81.2
ibo |80.8
pcm |74.5
yor |80.4
### BibTeX entry and citation info
```
@inproceedings{Muhammad2022NaijaSentiAN,
title={NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis},
author={Shamsuddeen Hassan Muhammad and David Ifeoluwa Adelani and Sebastian Ruder and Ibrahim Said Ahmad and Idris Abdulmumin and Bello Shehu Bello and Monojit Choudhury and Chris C. Emezue and Saheed Salahudeen Abdullahi and Anuoluwapo Aremu and Alipio Jeorge and Pavel B. Brazdil},
year={2022}
}
```
repo_id: Davlan/xlm-roberta-base-finetuned-amharic | author: Davlan | model_type: xlm-roberta | files_per_repo: 8 | downloads_30d: 70 | library: transformers | likes: 1 | pipeline: fill-mask | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 1,446 | readme:
---
language: am
datasets:
---
# xlm-roberta-base-finetuned-amharic
## Model description
**xlm-roberta-base-finetuned-amharic** is an **Amharic RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Amharic language texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Amharic corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-amharic')
>>> unmasker("የአሜሪካ የአፍሪካ ቀንድ ልዩ መልዕክተኛ ጄፈሪ ፌልትማን በአራት አገራት የሚያደጉትን <mask> መጀመራቸውን የአሜሪካ የውጪ ጉዳይ ሚንስቴር አስታወቀ።")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Amharic CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | am_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 70.96 | 77.97
### BibTeX entry and citation info
By David Adelani
```
```
repo_id: Davlan/xlm-roberta-base-finetuned-hausa | author: Davlan | model_type: xlm-roberta | files_per_repo: 8 | downloads_30d: 38 | library: transformers | likes: 0 | pipeline: fill-mask | pytorch: true | tensorflow: false | jax: false | license: null | languages: null | datasets: null | co2: null | prs_count: 0 | prs_open: 0 | prs_merged: 0 | prs_closed: 0 | discussions_count: 0 | discussions_open: 0 | discussions_closed: 0 | tags: [] | has_model_index: false | has_metadata: false | has_text: true | text_length: 2,530 | readme:
---
language: ha
datasets:
---
# xlm-roberta-base-finetuned-hausa
## Model description
**xlm-roberta-base-finetuned-hausa** is a **Hausa RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Hausa language texts. It provides **better performance** than the XLM-RoBERTa on text classification and named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Hausa corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-hausa')
>>> unmasker("Shugaban <mask> Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci")
[{'sequence': '<s> Shugaban kasa Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>',
'score': 0.8104371428489685,
'token': 29762,
'token_str': '▁kasa'},
{'sequence': '<s> Shugaban Najeriya Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.17371904850006104,
'token': 49173,
'token_str': '▁Najeriya'},
{'sequence': '<s> Shugaban kasar Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.006917025428265333,
'token': 21221,
'token_str': '▁kasar'},
{'sequence': '<s> Shugaban Nigeria Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.005785710643976927,
'token': 72620,
'token_str': '▁Nigeria'},
{'sequence': '<s> Shugaban Kasar Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.0010596115607768297,
'token': 170255,
'token_str': '▁Kasar'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Hausa CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | ha_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.10 | 91.47
[VOA Hausa Textclass](https://huggingface.co/datasets/hausa_voa_topics) | |
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-igbo
|
Davlan
|
xlm-roberta
| 8 | 24 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 1,491 |
Hugging Face's logo
---
language: ig
datasets:
---
# xlm-roberta-base-finetuned-igbo
## Model description
**xlm-roberta-base-finetuned-igbo** is an **Igbo RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Igbo language texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Igbo corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-igbo')
>>> unmasker("Reno Omokri na Gọọmentị <mask> enweghị ihe ha ga-eji hiwe ya bụ mmachi.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + OPUS CC-Align + [IGBO NLP Corpus](https://github.com/IgnatiusEzeani/IGBONLP) + [Igbo CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | ig_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 84.51 | 87.74
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-kinyarwanda
|
Davlan
|
xlm-roberta
| 8 | 15 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 1,512 |
Hugging Face's logo
---
language: rw
datasets:
---
# xlm-roberta-base-finetuned-kinyarwanda
## Model description
**xlm-roberta-base-finetuned-kinyarwanda** is a **Kinyarwanda RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Kinyarwanda language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Kinyarwanda corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-kinyarwanda')
>>> unmasker("Twabonye ko igihe mu <mask> hazaba hari ikirango abantu bakunze")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [KIRNEWS](https://github.com/Andrews2017/KINNEWS-and-KIRNEWS-Corpus) + [BBC Gahuza](https://www.bbc.com/gahuza)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | rw_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 73.22 | 77.76
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-luganda
|
Davlan
|
xlm-roberta
| 8 | 4 |
transformers
| 1 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 1,529 |
Hugging Face's logo
---
language: lg
datasets:
---
# xlm-roberta-base-finetuned-luganda
## Model description
**xlm-roberta-base-finetuned-luganda** is a **Luganda RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Luganda language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Luganda corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-luganda')
>>> unmasker("Ffe tulwanyisa abo abaagala okutabangula <mask>, Kimuli bwe yategeezezza.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [BUKKEDDE](https://github.com/masakhane-io/masakhane-ner/tree/main/text_by_language/luganda) + [Luganda CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | lg_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 79.69 | 84.70
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-luo
|
Davlan
|
xlm-roberta
| 8 | 12 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 1,344 |
Hugging Face's logo
---
language: luo
datasets:
---
# xlm-roberta-base-finetuned-luo
## Model description
**xlm-roberta-base-finetuned-luo** is a **Luo RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Luo language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Luo corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-luo')
>>> unmasker("Obila ma Changamwe <mask> pedho achije angwen mag njore")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | luo_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 74.86 | 75.27
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-naija
|
Davlan
|
xlm-roberta
| 8 | 10 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 1,480 |
Hugging Face's logo
---
language: pcm
datasets:
---
# xlm-roberta-base-finetuned-naija
## Model description
**xlm-roberta-base-finetuned-naija** is a **Nigerian Pidgin RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Nigerian Pidgin language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Nigerian Pidgin corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-naija')
>>> unmasker("Another attack on ambulance happen for Koforidua in March <mask> year where robbers kill Ambulance driver")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | pcm_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.26 | 90.00
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-swahili
|
Davlan
|
xlm-roberta
| 8 | 29 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 2,436 |
Hugging Face's logo
---
language: sw
datasets:
---
# xlm-roberta-base-finetuned-swahili
## Model description
**xlm-roberta-base-finetuned-swahili** is a **Swahili RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Swahili language texts. It provides **better performance** than the XLM-RoBERTa on text classification and named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Swahili corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko <mask> kwamba hakuna uhalifu ulitendwa")
[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Ufaransa kwamba hakuna uhalifu ulitendwa',
'score': 0.5077782273292542,
'token': 190096,
'token_str': 'Ufaransa'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
'score': 0.3657738268375397,
'token': 7270,
'token_str': 'Paris'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Gabon kwamba hakuna uhalifu ulitendwa',
'score': 0.01592041552066803,
'token': 176392,
'token_str': 'Gabon'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
'score': 0.010881908237934113,
'token': 9942,
'token_str': 'France'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Marseille kwamba hakuna uhalifu ulitendwa',
'score': 0.009554869495332241,
'token': 185918,
'token_str': 'Marseille'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | sw_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.55 | 89.46
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-wolof
|
Davlan
|
xlm-roberta
| 8 | 12 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 1,495 |
Hugging Face's logo
---
language: wo
datasets:
---
# xlm-roberta-base-finetuned-wolof
## Model description
**xlm-roberta-base-finetuned-wolof** is a **Wolof RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Wolof language texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Wolof corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-wolof')
>>> unmasker("Màkki Sàll feeñal na ay xalaatam ci mbir yu am solo yu soxal <mask> ak Afrik.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Bible OT](http://biblewolof.com/) + [OPUS](https://opus.nlpl.eu/) + News Corpora (Lu Defu Waxu, Saabal, and Wolof Online)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | wo_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 63.86 | 68.31
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-yoruba
|
Davlan
|
xlm-roberta
| 8 | 17 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 2,799 |
Hugging Face's logo
---
language: yo
datasets:
---
# xlm-roberta-base-finetuned-yoruba
## Model description
**xlm-roberta-base-finetuned-yoruba** is a **Yoruba RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Yorùbá language texts. It provides **better performance** than the XLM-RoBERTa on text classification and named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Yorùbá corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-yoruba')
>>> unmasker("Arẹmọ Phillip to jẹ ọkọ <mask> Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun")
[{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.24844281375408173,
'token': 44109,
'token_str': '▁Queen'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ile Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.1665010154247284,
'token': 1350,
'token_str': '▁ile'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.07604238390922546,
'token': 1053,
'token_str': '▁ti'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ baba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.06353845447301865,
'token': 12878,
'token_str': '▁baba'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Oba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.03836742788553238,
'token': 82879,
'token_str': '▁Oba'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3) and [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | yo_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 77.58 | 83.66
[BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | |
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-masakhaner
|
Davlan
|
xlm-roberta
| 9 | 5 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 4,379 |
Hugging Face's logo
---
language:
- am
- ha
- ig
- rw
- lg
- luo
- pcm
- sw
- wo
- yo
- multilingual
datasets:
- masakhaner
---
# xlm-roberta-base-masakhaner
## Model description
**xlm-roberta-base-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa base model. It achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on an aggregation of African language datasets obtained from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on NER datasets for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
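If you want entity spans rather than per-token tags, recent versions of `transformers` can merge the B-/I- pieces for you; a minimal sketch (the `aggregation_strategy` argument is a generic pipeline feature, not something specific to this model):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-masakhaner")

# "simple" merges consecutive B-/I- tokens of the same type into one entity span
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
for entity in nlp(example):
    print(entity["entity_group"], entity["word"], float(entity["score"]))
```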
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
Davlan/xlm-roberta-base-ner-hrl
|
Davlan
|
xlm-roberta
| 8 | 14,124 |
transformers
| 9 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
[]
| false | false | true | 3,020 |
Hugging Face's logo
---
language:
- ar
- de
- en
- es
- fr
- it
- lv
- nl
- pt
- zh
- multilingual
---
# xlm-roberta-base-ner-hrl
## Model description
**xlm-roberta-base-ner-hrl** is a **Named Entity Recognition** model for 10 high resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned XLM-RoBERTa base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on an aggregation of 10 high-resourced languages
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-ner-hrl")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
The training data for the 10 languages are from:
Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
German | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
English | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
Spanish | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
French | [Europeana Newspapers](https://github.com/EuropeanaNewspapers/ner-corpora/tree/master/enp_FR.bnf.bio)
Italian | [Italian I-CAB](https://ontotext.fbk.eu/icab.html)
Latvian | [Latvian NER](https://github.com/LUMII-AILab/FullStack/tree/master/NamedEntities)
Dutch | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
Portuguese |[Paramopama + Second Harem](https://github.com/davidsbatista/NER-datasets/tree/master/Portuguese)
Chinese | [MSRA](https://huggingface.co/datasets/msra_ner)
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
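The pipeline call shown above returns one prediction per sub-token carrying these B-/I- prefixes; below is a hand-rolled, illustrative helper (not part of the original card) showing one way to group them into entity spans:
```python
def group_bio(predictions):
    """Group token-level B-/I- predictions from the NER pipeline into entity spans."""
    entities, current, last_index = [], None, None
    for pred in predictions:
        tag = pred["entity"]                       # e.g. "B-PER", "I-ORG"
        word = pred["word"].replace("▁", " ")      # "▁" marks a word start in the XLM-R tokenizer
        new_entity = (
            current is None
            or tag.startswith("B-")                # explicit start of a new entity
            or tag[2:] != current["type"]          # entity type changed
            or pred["index"] != last_index + 1     # gap of "O" tokens in between
        )
        if new_entity:
            if current:
                entities.append(current)
            current = {"type": tag[2:], "text": word.strip()}
        else:
            current["text"] += word
        last_index = pred["index"]
    if current:
        entities.append(current)
    return entities

print(group_bio(ner_results))  # ner_results comes from the pipeline call above
```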
## Training procedure
This model was trained on NVIDIA V100 GPU with recommended hyperparameters from HuggingFace code.
|
Davlan/xlm-roberta-base-sadilar-ner
|
Davlan
|
xlm-roberta
| 8 | 9 |
transformers
| 1 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 2,592 |
Hugging Face's logo
---
language:
- af
- nr
- nso
- ss
- st
- tn
- ts
- ve
- xh
- zu
- multilingual
datasets:
- masakhaner
---
# xlm-roberta-base-sadilar-ner
## Model description
**xlm-roberta-base-sadilar-ner** is the first **Named Entity Recognition** model for 10 South African languages (Afrikaans, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, siSwati, Tshivenda and Xitsonga) based on a fine-tuned XLM-RoBERTa base model. It achieves **state-of-the-art performance** for the NER task. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on an aggregation of South African language datasets obtained from the [SADILAR](https://www.sadilar.org/index.php/en/) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-sadilar-ner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-sadilar-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Kuchaza kona ukuthi uMengameli uMnuz Cyril Ramaphosa, usebatshelile ukuthi uzosikhipha maduze isitifiketi."
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on NER datasets for 10 South African languages (Afrikaans, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, siSwati, Tshivenda and Xitsonga) from the [SADILAR](https://www.sadilar.org/index.php/en/) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
### BibTeX entry and citation info
```
|
Davlan/xlm-roberta-base-wikiann-ner
|
Davlan
|
xlm-roberta
| 10 | 609 |
transformers
| 3 |
token-classification
| true | true | false | null | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 2,730 |
Hugging Face's logo
---
language:
- ar
- as
- bn
- ca
- en
- es
- eu
- fr
- gu
- hi
- id
- ig
- mr
- pa
- pt
- sw
- ur
- vi
- yo
- zh
- multilingual
datasets:
- wikiann
---
# xlm-roberta-base-wikiann-ner
## Model description
**xlm-roberta-base-wikiann-ner** is the first **Named Entity Recognition** model for 20 languages (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba, Chinese) based on a fine-tuned XLM-RoBERTa base model. It achieves **state-of-the-art performance** for the NER task. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on an aggregation of language datasets obtained from the [WikiANN](https://huggingface.co/datasets/wikiann) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-wikiann-ner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-wikiann-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ìbọn ń ró kù kù gẹ́gẹ́ bí ọwọ́ ọ̀pọ̀ aráàlù ṣe tẹ ìbọn ní Kyiv láti dojú kọ Russia"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on NER datasets for 20 languages (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba, Chinese) from [wikiann](https://huggingface.co/datasets/wikiann).
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
### BibTeX entry and citation info
```
|
Davlan/xlm-roberta-large-masakhaner
|
Davlan
|
xlm-roberta
| 9 | 581 |
transformers
| 0 |
token-classification
| true | true | false | null | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 4,560 |
Hugging Face's logo
---
language:
- amh
- hau
- ibo
- kin
- lug
- luo
- pcm
- swa
- wol
- yor
- multilingual
datasets:
- masakhaner
---
# xlm-roberta-large-masakhaner
## Model description
**xlm-roberta-large-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa large model. It achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-large-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-large-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on NER datasets for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
amh |75.76
hau |91.75
ibo |86.26
kin |76.38
lug |84.64
luo |80.65
pcm |89.55
swa |89.48
wol |70.70
yor |82.05
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
Davlan/xlm-roberta-large-ner-hrl
|
Davlan
|
xlm-roberta
| 9 | 6,029 |
transformers
| 8 |
token-classification
| true | true | false | null | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 3,026 |
Hugging Face's logo
---
language:
- ar
- de
- en
- es
- fr
- it
- lv
- nl
- pt
- zh
- multilingual
---
# xlm-roberta-large-ner-hrl
## Model description
**xlm-roberta-large-ner-hrl** is a **Named Entity Recognition** model for 10 high resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned XLM-RoBERTa large model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of 10 high-resourced languages
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-large-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-large-ner-hrl")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
The training data for the 10 languages are from:
Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
German | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
English | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
Spanish | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
French | [Europeana Newspapers](https://github.com/EuropeanaNewspapers/ner-corpora/tree/master/enp_FR.bnf.bio)
Italian | [Italian I-CAB](https://ontotext.fbk.eu/icab.html)
Latvian | [Latvian NER](https://github.com/LUMII-AILab/FullStack/tree/master/NamedEntities)
Dutch | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
Portuguese |[Paramopama + Second Harem](https://github.com/davidsbatista/NER-datasets/tree/master/Portuguese)
Chinese | [MSRA](https://huggingface.co/datasets/msra_ner)
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on NVIDIA V100 GPU with recommended hyperparameters from HuggingFace code.
|
DeadBeast/emoBERTTamil
|
DeadBeast
|
bert
| 13 | 5 |
transformers
| 2 |
text-classification
| true | false | false |
apache-2.0
| null |
['tamilmixsentiment']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,162 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emoBERTTamil
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tamilmixsentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9666
- Accuracy: 0.671
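A minimal inference sketch (the card does not document the label names, so the pipeline returns the generic `LABEL_*` ids exported by the Trainer; the example sentence is an illustrative code-mixed Tamil comment):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DeadBeast/emoBERTTamil")
print(classifier("Intha padam romba nalla irukku"))
# -> [{'label': 'LABEL_*', 'score': ...}]  (label ids are not mapped to class names in this card)
```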
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1128 | 1.0 | 250 | 1.0290 | 0.672 |
| 1.0226 | 2.0 | 500 | 1.0172 | 0.686 |
| 0.9137 | 3.0 | 750 | 0.9666 | 0.671 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
DeadBeast/korscm-mBERT
|
DeadBeast
|
bert
| 8 | 2 |
transformers
| 1 |
text-classification
| true | false | false |
apache-2.0
|
['korean']
|
['Korean-Sarcasm']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 368 |
# **Korean-mBERT**
This model is a fine-tuned checkpoint of mBERT-base-cased on the **Hugging Face Kore_Scm** dataset for text classification.
### **How to use?**
**Task**: binary-classification
- LABEL_1: Sarcasm (*Sarcasm means tweets contains sarcasm*)
- LABEL_0: Not Sarcasm (*Not Sarcasm means tweets do not contain sarcasm*)
Click on **Use in Transformers**!
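For example, a minimal sketch with the text-classification pipeline (the example tweet is illustrative; the label mapping above is the only part documented by the card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DeadBeast/korscm-mBERT")
# LABEL_1 = Sarcasm, LABEL_0 = Not Sarcasm
print(classifier("아 진짜 최고의 하루였어, 버스를 세 번이나 놓쳤거든"))
```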
|
DeadBeast/mbert-base-cased-finetuned-bengali-fakenews
|
DeadBeast
|
bert
| 8 | 6 |
transformers
| 1 |
text-classification
| true | false | false |
apache-2.0
|
['bengali']
|
['BanFakeNews']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 804 |
# **mBERT-base-cased-finetuned-bengali-fakenews**
This model is a fine-tuned checkpoint of mBERT-base-cased on the **[Bengali-fake-news Dataset](https://www.kaggle.com/cryptexcode/banfakenews)** for text classification. It reaches an accuracy of 96.3 and an F1-score of 79.1 on the dev set.
### **How to use?**
**Task**: binary-classification
- LABEL_1: Authentic (*Authentic means news is authentic*)
- LABEL_0: Fake (*Fake means news is fake*)
```python
from transformers import pipeline
print(pipeline("sentiment-analysis",model="DeadBeast/mbert-base-cased-finetuned-bengali-fakenews",tokenizer="DeadBeast/mbert-base-cased-finetuned-bengali-fakenews")("অভিনেতা আফজাল শরীফকে ২০ লাখ টাকার অনুদান অসুস্থ অভিনেতা আফজাল শরীফকে চিকিৎসার জন্য ২০ লাখ টাকা অনুদান দিয়েছেন প্রধানমন্ত্রী শেখ হাসিনা।"))
```
|
DeepChem/ChemBERTa-10M-MTR
|
DeepChem
|
roberta
| 11 | 784 |
transformers
| 0 | null | true | false | false | null | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['roberta']
| false | true | true | 3,557 |
# Model Card for ChemBERTa-10M-MTR
# Model Details
## Model Description
More information needed
- **Developed by:** DeepChem
- **Shared by [Optional]:** DeepChem
- **Model type:** Token Classification
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** [RoBERTa](https://huggingface.co/roberta-base?text=The+goal+of+life+is+%3Cmask%3E.)
- **Resources for more information:** More information needed
# Uses
## Direct Use
More information needed.
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
More information needed
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
# Citation
**BibTeX:**
```bibtex
@book{Ramsundar-et-al-2019,
title={Deep Learning for the Life Sciences},
author={Bharath Ramsundar and Peter Eastman and Patrick Walters and Vijay Pande and Karl Leswing and Zhenqin Wu},
publisher={O'Reilly Media},
note={\url{https://www.amazon.com/Deep-Learning-Life-Sciences-Microscopy/dp/1492039837}},
year={2019}
}
```
**APA:**
More information needed
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
DeepChem in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, RobertaForRegression
tokenizer = AutoTokenizer.from_pretrained("DeepChem/ChemBERTa-10M-MTR")
model = RobertaForRegression.from_pretrained("DeepChem/ChemBERTa-10M-MTR")
```
</details>
|
DeepESP/gpt2-spanish-medium
|
DeepESP
|
gpt2
| 10 | 265 |
transformers
| 3 |
text-generation
| true | true | true |
mit
|
['es']
|
['ebooks']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['GPT-2', 'Spanish', 'ebooks', 'nlg']
| false | true | true | 1,845 |
# GPT2-Spanish
GPT2-Spanish is a language generation model trained from scratch with 11.5GB of Spanish texts and with a Byte Pair Encoding (BPE) tokenizer that was trained for this purpose. The parameters used are the same as the medium version of the original OpenAI GPT2 model.
## Corpus
This model was trained with a corpus of 11.5GB of texts corresponding to 3.5GB of Wikipedia articles and 8GB of books (narrative, short stories, theater, poetry, essays, and popularization).
## Tokenizer
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens.
This tokenizer was trained from scratch on the Spanish corpus, since the tokenizer of the English models proved limited in capturing the semantic relations of Spanish, owing to the morphosyntactic differences between the two languages.
Apart from the special token "<|endoftext|>" used for text ending in the OpenAI GPT-2 models, the tokens "<|talk|>", "<|ax1|>", "<|ax2|>", ..., "<|ax9|>" were included so that they can serve as prompts in future training.
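A minimal generation sketch with the high-level pipeline API (the prompt and the sampling parameters below are illustrative, not settings recommended by the authors):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="DeepESP/gpt2-spanish-medium")
result = generator(
    "Había una vez, en un pequeño pueblo de la costa,",
    max_length=60,
    do_sample=True,
    top_p=0.95,
)
print(result[0]["generated_text"])
```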
## Training
The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16GB memory on Google Colab servers.
## Authors
The model was trained by Alejandro Oñate Latorre (Spain) and Jorge Ortiz Fuentes (Chile), members of -Deep ESP-, an open-source community on Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h).
Thanks to the members of the community who collaborated with funding for the initial tests.
## Cautions
The model generates texts according to the patterns learned in the training corpus. These data were not filtered, therefore, the model could generate offensive or discriminatory content.
|
DeepESP/gpt2-spanish
|
DeepESP
|
gpt2
| 10 | 1,231 |
transformers
| 16 |
text-generation
| true | true | true |
mit
|
['es']
|
['ebooks']
| null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['GPT-2', 'Spanish', 'ebooks', 'nlg']
| false | true | true | 1,844 |
# GPT2-Spanish
GPT2-Spanish is a language generation model trained from scratch with 11.5GB of Spanish texts and with a Byte Pair Encoding (BPE) tokenizer that was trained for this purpose. The parameters used are the same as the small version of the original OpenAI GPT2 model.
## Corpus
This model was trained with a corpus of 11.5GB of texts corresponding to 3.5GB of Wikipedia articles and 8GB of books (narrative, short stories, theater, poetry, essays, and popularization).
## Tokenizer
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens.
This tokenizer was trained from scratch on the Spanish corpus, since the tokenizer of the English models proved limited in capturing the semantic relations of Spanish, owing to the morphosyntactic differences between the two languages.
Apart from the special token "<|endoftext|>" used for text ending in the OpenAI GPT-2 models, the tokens "<|talk|>", "<|ax1|>", "<|ax2|>", ..., "<|ax9|>" were included so that they can serve as prompts in future training.
## Training
The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16GB memory on Google Colab servers.
## Authors
The model was trained by Alejandro Oñate Latorre (Spain) and Jorge Ortiz Fuentes (Chile), members of -Deep ESP-, an open-source community on Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h).
Thanks to the members of the community who collaborated with funding for the initial tests.
## Cautions
The model generates texts according to the patterns learned in the training corpus. These data were not filtered, therefore, the model could generate offensive or discriminatory content.
|
DeepPavlov/bert-base-bg-cs-pl-ru-cased
|
DeepPavlov
|
bert
| 8 | 542 |
transformers
| 0 |
feature-extraction
| true | false | true | null |
['bg', 'cs', 'pl', 'ru']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 601 |
# bert-base-bg-cs-pl-ru-cased
SlavicBERT\[1\] \(Slavic \(bg, cs, pl, ru\), cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on Russian News and four Wikipedias: Bulgarian, Czech, Polish, and Russian. Subtoken vocabulary was built using this data. Multilingual BERT was used as an initialization for SlavicBERT.
08.11.2021: upload model with MLM and NSP heads
\[1\]: Arkhipov M., Trofimova M., Kuratov Y., Sorokin A. \(2019\). [Tuning Multilingual Transformers for Language-Specific Named Entity Recognition](https://www.aclweb.org/anthology/W19-3712/). ACL anthology W19-3712.
|
DeepPavlov/bert-base-cased-conversational
|
DeepPavlov
|
bert
| 8 | 3,264 |
transformers
| 3 |
feature-extraction
| true | false | true | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,178 |
# bert-base-cased-conversational
Conversational BERT \(English, cased, 12‑layer, 768‑hidden, 12‑heads, 110M parameters\) was trained on the English part of Twitter, Reddit, DailyDialogues\[1\], OpenSubtitles\[2\], Debates\[3\], Blogs\[4\], and Facebook News Comments. We used this training data to build the vocabulary of English subtokens and took the English cased version of BERT‑base as an initialization for English Conversational BERT.
08.11.2021: upload model with MLM and NSP heads
\[1\]: Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset. IJCNLP 2017.
\[2\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[3\]: Justine Zhang, Ravi Kumar, Sujith Ravi, Cristian Danescu-Niculescu-Mizil. Proceedings of NAACL, 2016.
\[4\]: J. Schler, M. Koppel, S. Argamon and J. Pennebaker \(2006\). Effects of Age and Gender on Blogging in Proceedings of 2006 AAAI Spring Symposium on Computational Approaches for Analyzing Weblogs.
|
DeepPavlov/bert-base-multilingual-cased-sentence
|
DeepPavlov
|
bert
| 8 | 829 |
transformers
| 1 |
feature-extraction
| true | false | true | null |
['multilingual']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 996 |
# bert-base-multilingual-cased-sentence
Sentence Multilingual BERT \(101 languages, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) is a representation‑based sentence encoder for 101 languages of Multilingual BERT. It is initialized with Multilingual BERT and then fine‑tuned on English MultiNLI\[1\] and on the dev set of multilingual XNLI\[2\]. Sentence representations are mean pooled token embeddings in the same manner as in Sentence‑BERT\[3\].
\[1\]: Williams A., Nangia N. & Bowman S. \(2017\) A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. arXiv preprint [arXiv:1704.05426](https://arxiv.org/abs/1704.05426)
\[2\]: Williams A., Bowman S. \(2018\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint [arXiv:1809.05053](https://arxiv.org/abs/1809.05053)
\[3\]: N. Reimers, I. Gurevych \(2019\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint [arXiv:1908.10084](https://arxiv.org/abs/1908.10084)
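Since the card only states that sentence vectors are mean-pooled token embeddings, the sketch below shows that pooling explicitly (masking padding positions); it is an illustration, not an official DeepPavlov snippet:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-multilingual-cased-sentence")
model = AutoModel.from_pretrained("DeepPavlov/bert-base-multilingual-cased-sentence")

sentences = ["Привет, мир!", "Hello, world!"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state        # (batch, seq_len, hidden)

# Mean pooling: average the token embeddings, ignoring padding positions
mask = batch["attention_mask"].unsqueeze(-1).float()            # (batch, seq_len, 1)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)                                 # torch.Size([2, 768])
```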
|
DeepPavlov/distilrubert-base-cased-conversational
|
DeepPavlov
|
distilbert
| 7 | 6,673 |
transformers
| 2 | null | true | false | false | null |
['ru']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,188 |
# distilrubert-base-cased-conversational
Conversational DistilRuBERT \(Russian, cased, 6‑layer, 768‑hidden, 12‑heads, 135.4M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)).
Our DistilRuBERT was highly inspired by \[3\], \[4\]. Namely, we used
* KL loss (between teacher and student output logits)
* MLM loss (between tokens labels and student output logits)
* Cosine embedding loss between the mean of two consecutive teacher hidden states and one student hidden state (a sketch of the combined objective is given below)
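A hedged PyTorch sketch of how these three terms can be combined; the temperature value and the equal weighting of the terms are assumptions, not numbers taken from the paper:
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      student_hidden, teacher_hidden_pair, temperature=2.0):
    """Illustrative combination of the three losses listed above."""
    # 1) KL divergence between softened teacher and student output distributions
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # 2) Ordinary MLM cross-entropy on the masked-token labels (-100 = position not masked)
    mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )

    # 3) Cosine embedding loss: mean of two consecutive teacher hidden states vs. one student state
    teacher_target = (teacher_hidden_pair[0] + teacher_hidden_pair[1]) / 2
    flat_student = student_hidden.view(-1, student_hidden.size(-1))
    flat_teacher = teacher_target.view(-1, teacher_target.size(-1))
    cos = F.cosine_embedding_loss(
        flat_student, flat_teacher,
        torch.ones(flat_student.size(0), device=flat_student.device),
    )

    return kl + mlm + cos  # the actual training may weight these terms differently
```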
The model was trained for about 100 hrs. on 8 nVIDIA Tesla P100-SXM2.0 16Gb.
To evaluate improvements in the inference speed, we ran teacher and student models on random sequences with seq_len=512, batch_size = 16 (for throughput) and batch_size=1 (for latency).
All tests were performed on Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and nVIDIA Tesla P100-SXM2.0 16Gb.
| Model | Size, Mb. | CPU latency, sec.| GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|-------------------------------------------------|------------|------------------|-------------------|------------------------------|------------------------------|
| Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 |
| Student (DistilRuBERT-base-cased-conversational)| 517 | 0.3285 | 0.0212 | 0.5803 | 52.2495 |
# Citation
If you found the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
|
DeepPavlov/distilrubert-tiny-cased-conversational-v1
|
DeepPavlov
|
distilbert
| 7 | 7,363 |
transformers
| 1 | null | true | false | false | null |
['ru']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,852 |
# distilrubert-tiny-cased-conversational
Conversational DistilRuBERT-tiny \(Russian, cased, 3‑layer, 264‑hidden, 12‑heads, 10.4M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered a tiny copy of [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational).
Our DistilRuBERT-tiny was heavily inspired by \[3\], \[4\], and its architecture is very close to \[5\]. Namely, we use
* MLM loss (between token labels and student output distribution)
* MSE loss (between averaged student and teacher hidden states)
The key features are:
* unlike most distilled language models, we **didn't** use KL loss during pre-training
* reduced vocabulary size (30K in *tiny* vs. 100K in *base* and *small*)
* two separate inputs for the student: tokens obtained with the student tokenizer (for MLM) and teacher tokens greedily split into student tokens (for MSE)
Here is a comparison between the teacher model (`Conversational RuBERT`) and other distilled models.
| Model name | \# params, M | \# vocab, K | Mem., MB |
|---|---|---|---|
| `rubert-base-cased-conversational` | 177.9 | 120 | 679 |
| `distilrubert-base-cased-conversational` | 135.5 | 120 | 517 |
| `distilrubert-small-cased-conversational` | 107.1 | 120 | 409 |
| `cointegrated/rubert-tiny` | 11.8 | **30** | 46 |
| **distilrubert-tiny-cased-conversational** | **10.4** | 31 | **41** |
DistilRuBERT-tiny was trained for about 100 hours on 7 NVIDIA Tesla P100-SXM2.0 16 GB GPUs.
We used `PyTorchBenchmark` from `transformers` to evaluate the model's performance and compare it with other pre-trained language models for Russian. All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an NVIDIA Tesla P100-SXM2.0 16 GB GPU.
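A minimal sketch of driving `PyTorchBenchmark` for this comparison is shown below; the argument values are assumptions, and the exact configuration behind the published table is not included in this card:
```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=[
        "DeepPavlov/rubert-base-cased-conversational",
        "DeepPavlov/distilrubert-tiny-cased-conversational-v1",
    ],
    batch_sizes=[1, 16],
    sequence_lengths=[512],
    inference=True,  # benchmark the forward pass only
    memory=True,     # also track peak memory usage
)
results = PyTorchBenchmark(args).run()  # prints time and memory tables per model/config
```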
| Model name | Batch size | Seq len | CPU time, s | GPU time, s | CPU mem, MB | GPU mem, MB |
|---|---|---|---|---|---|---|
| `rubert-base-cased-conversational` | 1 | 512 | 0.147 | 0.014 | 897 | 1531 |
| `distilrubert-base-cased-conversational` | 1 | 512 | 0.083 | 0.006 | 766 | 1423 |
| `distilrubert-small-cased-conversational` | 1 | 512 | 0.03 | **0.002** | 600 | 1243 |
| `cointegrated/rubert-tiny` | 1 | 512 | 0.041 | 0.003 | 272 | 919 |
| **distilrubert-tiny-cased-conversational** | 1 | 512 | **0.023** | 0.003 | **206** | **855** |
| `rubert-base-cased-conversational` | 16 | 512 | 2.839 | 0.182 | 1499 | 2071 |
| `distilrubert-base-cased-conversational` | 16 | 512 | 1.065 | 0.055 | 2541 | 2927 |
| `distilrubert-small-cased-conversational` | 16 | 512 | 0.373 | **0.003** | 1360 | 1943 |
| `cointegrated/rubert-tiny` | 16 | 512 | 0.628 | 0.004 | 1293 | 2221 |
| **distilrubert-tiny-cased-conversational** | 16 | 512 | **0.219** | **0.003** | **633** | **1291** |
To evaluate model quality, we fine-tuned DistilRuBERT-tiny on classification (RuSentiment, ParaPhraser), NER and question-answering datasets for Russian and obtained scores very similar to those of [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational).
# Citation
If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
\[5\]: <https://habr.com/ru/post/562064/>, <https://huggingface.co/cointegrated/rubert-tiny>
|
DeepPavlov/distilrubert-tiny-cased-conversational
|
DeepPavlov
|
distilbert
| 7 | 9,712 |
transformers
| 1 | null | true | false | false | null |
['ru']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,155 |
WARNING: This is the `distilrubert-small-cased-conversational` model uploaded under the wrong name. It is the same as [distilrubert-small-cased-conversational](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational). `distilrubert-tiny-cased-conversational` can be found at [distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1).
# distilrubert-small-cased-conversational
Conversational DistilRuBERT-small \(Russian, cased, 2‑layer, 768‑hidden, 12‑heads, 107M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered a small copy of [Conversational DistilRuBERT-base](https://huggingface.co/DeepPavlov/distilrubert-base-cased-conversational).
Our DistilRuBERT-small was heavily inspired by \[3\], \[4\]. Namely, we used
* KL loss (between teacher and student output logits)
* MLM loss (between token labels and student output logits)
* Cosine embedding loss (between the average of six consecutive hidden states from the teacher's encoder and one hidden state of the student)
* MSE loss (between the average of six consecutive attention maps from the teacher's encoder and one attention map of the student)
The model was trained for about 80 hours on 8 NVIDIA Tesla P100-SXM2.0 16 GB GPUs.
To evaluate improvements in inference speed, we ran the teacher and student models on random sequences with seq_len=512, batch_size=16 (for throughput) and batch_size=1 (for latency).
All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an NVIDIA Tesla P100-SXM2.0 16 GB GPU.
| Model | Size, Mb. | CPU latency, sec.| GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|-------------------------------------------------|------------|------------------|-------------------|------------------------------|------------------------------|
| Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 |
| Student (DistilRuBERT-small-cased-conversational)| 409 | 0.1656 | 0.015 | 0.9692 | 71.3553 |
To evaluate model quality, we fine-tuned DistilRuBERT-small on classification, NER and question answering tasks. Scores and archives with fine-tuned models can be found in [DeepPavlov docs](http://docs.deeppavlov.ai/en/master/features/overview.html#models).
# Citation
If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
|
DeepPavlov/roberta-large-winogrande
|
DeepPavlov
|
roberta
| 9 | 4 |
transformers
| 0 |
text-classification
| true | false | false | null |
['en']
|
['winogrande']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,488 |
# RoBERTa Large model fine-tuned on Winogrande
This model was fine-tuned on the WinoGrande dataset (XL size) in a sequence classification task format, meaning that the pairs of sentences obtained by filling in the two candidate options
were separated, shuffled, and classified independently of each other.
## Model description
## Intended use & limitations
### How to use
## Training data
[WinoGrande-XL](https://huggingface.co/datasets/winogrande) was reformatted in the following way:
1. Each sentence was split on "`_`" placeholder symbol.
2. Each option was concatenated with the second part of the split, thus transforming each example into two text segment pairs.
3. Text segment pairs corresponding to correct and incorrect options were marked with `True` and `False` labels accordingly.
4. Text segment pairs were shuffled thereafter.
For example,
```json
{
"answer": "2",
"option1": "plant",
"option2": "urn",
"sentence": "The plant took up too much room in the urn, because the _ was small."
}
```
becomes
```json
{
"sentence1": "The plant took up too much room in the urn, because the ",
"sentence2": "plant was small.",
"label": false
}
```
and
```json
{
"sentence1": "The plant took up too much room in the urn, because the ",
"sentence2": "urn was small.",
"label": true
}
```
These sentence pairs are then treated as independent examples.
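A small sketch of this transformation using the `datasets` library is given below; the field names follow the examples above, while the split and shuffling seed are illustrative:
```python
import random
from datasets import load_dataset

def reformat(example):
    # Split the sentence on the "_" placeholder and build one segment pair per option
    left, right = example["sentence"].split("_")
    return [
        {
            "sentence1": left,
            "sentence2": option + right,
            "label": str(i) == example["answer"],  # True for the correct option
        }
        for i, option in enumerate((example["option1"], example["option2"]), start=1)
    ]

winogrande = load_dataset("winogrande", "winogrande_xl", split="train")
pairs = [pair for row in winogrande for pair in reformat(row)]
random.seed(42)
random.shuffle(pairs)  # pairs from the same original example are no longer adjacent
```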
### BibTeX entry and citation info
```bibtex
@article{sakaguchi2019winogrande,
title={WinoGrande: An Adversarial Winograd Schema Challenge at Scale},
author={Sakaguchi, Keisuke and Bras, Ronan Le and Bhagavatula, Chandra and Choi, Yejin},
journal={arXiv preprint arXiv:1907.10641},
year={2019}
}
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
DeepPavlov/rubert-base-cased-conversational
|
DeepPavlov
|
bert
| 8 | 30,275 |
transformers
| 8 |
feature-extraction
| true | false | true | null |
['ru']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 906 |
# rubert-base-cased-conversational
Conversational RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\]. We assembled a new vocabulary for the Conversational RuBERT model on this data and initialized the model with [RuBERT](../rubert-base-cased).
08.11.2021: upload model with MLM and NSP heads
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
|
DeepPavlov/rubert-base-cased-sentence
|
DeepPavlov
|
bert
| 8 | 41,967 |
transformers
| 5 |
feature-extraction
| true | false | true | null |
['ru']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 947 |
# rubert-base-cased-sentence
Sentence RuBERT \(Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters\) is a representation‑based sentence encoder for Russian. It is initialized with RuBERT and fine‑tuned on SNLI\[1\], Google-translated to Russian, and on the Russian part of the XNLI dev set\[2\]. Sentence representations are mean-pooled token embeddings, in the same manner as in Sentence‑BERT\[3\].
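A minimal sketch of computing such sentence vectors with mean pooling over the token embeddings (padding positions are masked out, mirroring the Sentence‑BERT-style pooling described above; the example sentences are illustrative):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased-sentence")
model = AutoModel.from_pretrained("DeepPavlov/rubert-base-cased-sentence")

sentences = ["Привет, мир!", "Как твои дела?"]
enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**enc).last_hidden_state   # (batch, seq_len, hidden)
mask = enc["attention_mask"].unsqueeze(-1).float()       # zero out padding tokens
sentence_embeddings = (token_embeddings * mask).sum(1) / mask.sum(1)
print(sentence_embeddings.shape)                          # torch.Size([2, 768])
```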
\[1\]: S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. \(2015\) A large annotated corpus for learning natural language inference. arXiv preprint [arXiv:1508.05326](https://arxiv.org/abs/1508.05326)
\[2\]: Williams A., Bowman S. \(2018\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint [arXiv:1809.05053](https://arxiv.org/abs/1809.05053)
\[3\]: N. Reimers, I. Gurevych \(2019\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint [arXiv:1908.10084](https://arxiv.org/abs/1908.10084)
|
DeepPavlov/rubert-base-cased
|
DeepPavlov
|
bert
| 8 | 122,894 |
transformers
| 20 |
feature-extraction
| true | false | true | null |
['ru']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 553 |
# rubert-base-cased
RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT‑base as an initialization for RuBERT\[1\].
08.11.2021: upload model with MLM and NSP heads
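Since the checkpoint now includes an MLM head, it can be queried directly through the fill-mask pipeline (a usage sketch; the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DeepPavlov/rubert-base-cased")
print(fill_mask("Москва — [MASK] России."))  # top predictions for the masked token
```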
\[1\]: Kuratov, Y., Arkhipov, M. \(2019\). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint [arXiv:1905.07213](https://arxiv.org/abs/1905.07213).
|
DeividasM/wav2vec2-large-xlsr-53-lithuanian
|
DeividasM
|
wav2vec2
| 9 | 17 |
transformers
| 0 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['lt']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 3,365 |
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Lithuanian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
\\tbatch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
\\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluate the model on the test set
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 56.55 %
## Training
The Common Voice `train` and `validation` splits were used for training.
|
DeltaHub/lora_t5-base_mrpc
|
DeltaHub
| null | 4 | 1 |
transformers
| 0 | null | true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 273 |
This checkpoint needs to be loaded with OpenDelta:
```
from transformers import AutoModelForSeq2SeqLM
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
from opendelta import AutoDeltaModel
delta = AutoDeltaModel.from_finetuned("DeltaHub/lora_t5-base_mrpc", backbone_model=t5)
delta.log()
```
|
DemangeJeremy/4-sentiments-with-flaubert
|
DemangeJeremy
|
flaubert
| 4 | 140 |
transformers
| 0 |
text-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentiments', 'text-classification', 'flaubert', 'french', 'flaubert-large']
| false | true | true | 1,245 |
# Four-sentiment detection model with FlauBERT (mixed, negative, objective, positive)
### How to use it?
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
loaded_tokenizer = AutoTokenizer.from_pretrained('flaubert/flaubert_large_cased')
loaded_model = AutoModelForSequenceClassification.from_pretrained("DemangeJeremy/4-sentiments-with-flaubert")
nlp = pipeline('sentiment-analysis', model=loaded_model, tokenizer=loaded_tokenizer)
print(nlp("Je suis plutôt confiant."))
```
```
[{'label': 'OBJECTIVE', 'score': 0.3320835530757904}]
```
## Model evaluation results
| Epoch | Validation Loss | Samples Per Second |
|:------:|:--------------:|:------------------:|
| 1 | 2.219246 | 49.476000 |
| 2 | 1.883753 | 47.259000 |
| 3 | 1.747969 | 44.957000 |
| 4 | 1.695606 | 43.872000 |
| 5 | 1.641470 | 45.726000 |
## Citation
For any use of this model, please use the following citation:
> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <https://huggingface.co/DemangeJeremy/4-sentiments-with-flaubert>
|
Dev-DGT/food-dbert-multiling
|
Dev-DGT
|
distilbert
| 8 | 8 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 236 |
# Token classification for FOODs.
Detects foods in sentences.
Currently, only Spanish is supported. Multi-word foods are detected as a single entity.
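A usage sketch with the token-classification pipeline (the aggregation strategy and the example sentence are assumptions for illustrating multi-word entities):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Dev-DGT/food-dbert-multiling",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hoy comí arroz con pollo y una ensalada de tomate."))
```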
## To-do
- English support.
- Negation support.
- Quantity tags.
- Psychosocial tags.
|
Devrim/prism-default
|
Devrim
| null | 3 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,533 |
The default Prism model available at https://github.com/thompsonb/prism. See the [README.md](https://github.com/thompsonb/prism/blob/master/README.md) file for more information.
**LICENCE NOTICE**
```
MIT License
Copyright (c) Brian Thompson
Portions of this software are copied from fairseq (https://github.com/pytorch/fairseq),
which is released under the MIT License and Copyright (c) Facebook, Inc. and its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
|
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro
|
DiegoAlysson
|
marian
| 13 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['wmt16']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,314 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2915
- Bleu: 27.9273
- Gen Len: 34.0935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7448 | 1.0 | 38145 | 1.2915 | 27.9273 | 34.0935 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
DingleyMaillotUrgell/homer-bot
|
DingleyMaillotUrgell
|
gpt2
| 10 | 5 |
transformers
| 0 |
conversational
| true | false | false | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['conversational']
| false | true | true | 1,711 |
# HomerBot: A conversational chatbot imitating Homer Simpson
This model is a fine-tuned version of [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium) (medium) trained on Simpsons [scripts](https://www.kaggle.com/datasets/pierremegret/dialogue-lines-of-the-simpsons).
More specifically, we fine-tune DialoGPT-medium for 3 epochs on 10K **(character utterance, Homer's response)** pairs.
For more details, check out our git [repo](https://github.com/jesseDingley/HomerBot) containing all the code.
### How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("DingleyMaillotUrgell/homer-bot")
model = AutoModelForCausalLM.from_pretrained("DingleyMaillotUrgell/homer-bot")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
chat_history_ids = model.generate(
bot_input_ids,
max_length=1000,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature = 0.8
)
    # print the bot's last output tokens
print("Homer: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
DongHyoungLee/distilbert-base-uncased-finetuned-cola
|
DongHyoungLee
|
distilbert
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,571 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7335
- Matthews Correlation: 0.5356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5309 | 1.0 | 535 | 0.5070 | 0.4239 |
| 0.3568 | 2.0 | 1070 | 0.5132 | 0.4913 |
| 0.24 | 3.0 | 1605 | 0.6081 | 0.4990 |
| 0.1781 | 4.0 | 2140 | 0.7335 | 0.5356 |
| 0.1243 | 5.0 | 2675 | 0.8705 | 0.5242 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Dongjae/mrc2reader
|
Dongjae
|
xlm-roberta
| 7 | 6 |
transformers
| 0 |
question-answering
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 311 |
The Reader model is for Korean question answering.
The backbone model is deepset/xlm-roberta-large-squad2.
It was fine-tuned on the KorQuAD-v1 dataset.
Evaluated on the KorQuAD evaluation set, it reaches approximately 87% EM and 92% F1.
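A minimal usage sketch with the question-answering pipeline (the Korean question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Dongjae/mrc2reader")
result = qa(
    question="대한민국의 수도는 어디인가?",
    context="대한민국의 수도는 서울이며, 대한민국에서 가장 큰 도시이기도 하다.",
)
print(result["answer"], result["score"])
```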
Thank you
|
Doogie/Wayne_NLP_mT5
|
Doogie
|
mt5
| 44 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false | null | null |
['cnn_dailymail']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 918 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wayne_NLP_mT5
This model was trained only on English datasets.
If you want a model trained on Korean + English data, see wayne_mulang_mT5.
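A usage sketch for summarization with this checkpoint (the generation parameters and the absence of a task prefix are assumptions):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Doogie/Wayne_NLP_mT5")
model = AutoModelForSeq2SeqLM.from_pretrained("Doogie/Wayne_NLP_mT5")

article = "..."  # an English news article, e.g. one taken from cnn_dailymail
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```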
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Doogie/Waynehills_summary_tensorflow
|
Doogie
|
t5
| 7 | 1 |
transformers
| 0 |
text2text-generation
| false | true | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 856 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Waynehills_summary_tensorflow
This model is a fine-tuned version of [KETI-AIR/ke-t5-base-ko](https://huggingface.co/KETI-AIR/ke-t5-base-ko) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Doogie/wav2vec2-base-timit-demo-colab
|
Doogie
|
wav2vec2
| 20 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,640 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4180
- Wer: 0.3392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.656 | 4.0 | 500 | 1.8973 | 1.0130 |
| 0.8647 | 8.0 | 1000 | 0.4667 | 0.4705 |
| 0.2968 | 12.0 | 1500 | 0.4211 | 0.4035 |
| 0.1719 | 16.0 | 2000 | 0.4725 | 0.3739 |
| 0.1272 | 20.0 | 2500 | 0.4586 | 0.3543 |
| 0.1079 | 24.0 | 3000 | 0.4356 | 0.3484 |
| 0.0808 | 28.0 | 3500 | 0.4180 | 0.3392 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
DoyyingFace/doyying_bert_first_again
|
DoyyingFace
|
bert
| 8 | 1 |
transformers
| 0 |
text-classification
| false | true | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,045 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmp_qubhe07
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1374, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
DoyyingFace/dummy-model
|
DoyyingFace
|
camembert
| 8 | 2 |
transformers
| 0 |
fill-mask
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 822 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7
|
DrishtiSharma
|
wav2vec2
| 19 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ab']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'ab', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 1,871 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5620
- Wer: 0.5651
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7 --dataset mozilla-foundation/common_voice_7_0 --config ab --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
NA
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6445 | 13.64 | 300 | 4.3963 | 1.0 |
| 3.6459 | 27.27 | 600 | 3.2267 | 1.0 |
| 3.0978 | 40.91 | 900 | 3.0927 | 1.0 |
| 2.8357 | 54.55 | 1200 | 2.1462 | 1.0029 |
| 1.2723 | 68.18 | 1500 | 0.6747 | 0.6996 |
| 0.6528 | 81.82 | 1800 | 0.5928 | 0.6422 |
| 0.4905 | 95.45 | 2100 | 0.5587 | 0.5681 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-ab-v4
|
DrishtiSharma
|
wav2vec2
| 18 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ab']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
| true | true | true | 1,433 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6178
- Wer: 0.5794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2793 | 27.27 | 300 | 3.0737 | 1.0 |
| 1.5348 | 54.55 | 600 | 0.6312 | 0.6334 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['as']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'as', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 3,861 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as-g1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AS dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3327
- Wer: 0.5744
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1 --dataset mozilla-foundation/common_voice_8_0 --config as --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Assamese language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 14.1958 | 5.26 | 100 | 7.1919 | 1.0 |
| 5.0035 | 10.51 | 200 | 3.9362 | 1.0 |
| 3.6193 | 15.77 | 300 | 3.4451 | 1.0 |
| 3.4852 | 21.05 | 400 | 3.3536 | 1.0 |
| 2.8489 | 26.31 | 500 | 1.6451 | 0.9100 |
| 0.9568 | 31.56 | 600 | 1.0514 | 0.7561 |
| 0.4865 | 36.82 | 700 | 1.0434 | 0.7184 |
| 0.322 | 42.1 | 800 | 1.0825 | 0.7210 |
| 0.2383 | 47.36 | 900 | 1.1304 | 0.6897 |
| 0.2136 | 52.62 | 1000 | 1.1150 | 0.6854 |
| 0.179 | 57.87 | 1100 | 1.2453 | 0.6875 |
| 0.1539 | 63.15 | 1200 | 1.2211 | 0.6704 |
| 0.1303 | 68.41 | 1300 | 1.2859 | 0.6747 |
| 0.1183 | 73.67 | 1400 | 1.2775 | 0.6721 |
| 0.0994 | 78.92 | 1500 | 1.2321 | 0.6404 |
| 0.0991 | 84.21 | 1600 | 1.2766 | 0.6524 |
| 0.0887 | 89.46 | 1700 | 1.3026 | 0.6344 |
| 0.0754 | 94.72 | 1800 | 1.3199 | 0.6704 |
| 0.0693 | 99.97 | 1900 | 1.3044 | 0.6361 |
| 0.0568 | 105.26 | 2000 | 1.3541 | 0.6254 |
| 0.0536 | 110.51 | 2100 | 1.3320 | 0.6249 |
| 0.0529 | 115.77 | 2200 | 1.3370 | 0.6271 |
| 0.048 | 121.05 | 2300 | 1.2757 | 0.6031 |
| 0.0419 | 126.31 | 2400 | 1.2661 | 0.6172 |
| 0.0349 | 131.56 | 2500 | 1.2897 | 0.6048 |
| 0.0309 | 136.82 | 2600 | 1.2688 | 0.5962 |
| 0.0278 | 142.1 | 2700 | 1.2885 | 0.5954 |
| 0.0254 | 147.36 | 2800 | 1.2988 | 0.5915 |
| 0.0223 | 152.62 | 2900 | 1.3153 | 0.5941 |
| 0.0216 | 157.87 | 3000 | 1.2936 | 0.5937 |
| 0.0186 | 163.15 | 3100 | 1.2906 | 0.5877 |
| 0.0156 | 168.41 | 3200 | 1.3476 | 0.5962 |
| 0.0158 | 173.67 | 3300 | 1.3363 | 0.5847 |
| 0.0142 | 178.92 | 3400 | 1.3367 | 0.5847 |
| 0.0153 | 184.21 | 3500 | 1.3105 | 0.5757 |
| 0.0119 | 189.46 | 3600 | 1.3255 | 0.5705 |
| 0.0115 | 194.72 | 3700 | 1.3340 | 0.5787 |
| 0.0103 | 199.97 | 3800 | 1.3327 | 0.5744 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['as']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'as', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,679 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as-v9
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1679
- Wer: 0.5761
### Evaluation Command
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9 --dataset mozilla-foundation/common_voice_8_0 --config as --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Assamese (as) language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.3852 | 10.51 | 200 | 3.6402 | 1.0 |
| 3.5374 | 21.05 | 400 | 3.3894 | 1.0 |
| 2.8645 | 31.56 | 600 | 1.3143 | 0.8303 |
| 1.1784 | 42.1 | 800 | 0.9417 | 0.6661 |
| 0.7805 | 52.62 | 1000 | 0.9292 | 0.6237 |
| 0.5973 | 63.15 | 1200 | 0.9489 | 0.6014 |
| 0.4784 | 73.67 | 1400 | 0.9916 | 0.5962 |
| 0.4138 | 84.21 | 1600 | 1.0272 | 0.6121 |
| 0.3491 | 94.72 | 1800 | 1.0412 | 0.5984 |
| 0.3062 | 105.26 | 2000 | 1.0769 | 0.6005 |
| 0.2707 | 115.77 | 2200 | 1.0708 | 0.5752 |
| 0.2459 | 126.31 | 2400 | 1.1285 | 0.6009 |
| 0.2234 | 136.82 | 2600 | 1.1209 | 0.5949 |
| 0.2035 | 147.36 | 2800 | 1.1348 | 0.5842 |
| 0.1876 | 157.87 | 3000 | 1.1480 | 0.5872 |
| 0.1669 | 168.41 | 3200 | 1.1496 | 0.5838 |
| 0.1595 | 178.92 | 3400 | 1.1721 | 0.5778 |
| 0.1505 | 189.46 | 3600 | 1.1654 | 0.5744 |
| 0.1486 | 199.97 | 3800 | 1.1679 | 0.5761 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-as-with-LM-v2
|
DrishtiSharma
| null | 2 | 0 | null | 1 |
automatic-speech-recognition
| false | false | false |
apache-2.0
|
['as']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'as', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,511 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
### Note: Files are missing. They probably didn't get pushed properly via git. :(
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1679
- Wer: 0.5761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.3852 | 10.51 | 200 | 3.6402 | 1.0 |
| 3.5374 | 21.05 | 400 | 3.3894 | 1.0 |
| 2.8645 | 31.56 | 600 | 1.3143 | 0.8303 |
| 1.1784 | 42.1 | 800 | 0.9417 | 0.6661 |
| 0.7805 | 52.62 | 1000 | 0.9292 | 0.6237 |
| 0.5973 | 63.15 | 1200 | 0.9489 | 0.6014 |
| 0.4784 | 73.67 | 1400 | 0.9916 | 0.5962 |
| 0.4138 | 84.21 | 1600 | 1.0272 | 0.6121 |
| 0.3491 | 94.72 | 1800 | 1.0412 | 0.5984 |
| 0.3062 | 105.26 | 2000 | 1.0769 | 0.6005 |
| 0.2707 | 115.77 | 2200 | 1.0708 | 0.5752 |
| 0.2459 | 126.31 | 2400 | 1.1285 | 0.6009 |
| 0.2234 | 136.82 | 2600 | 1.1209 | 0.5949 |
| 0.2035 | 147.36 | 2800 | 1.1348 | 0.5842 |
| 0.1876 | 157.87 | 3000 | 1.1480 | 0.5872 |
| 0.1669 | 168.41 | 3200 | 1.1496 | 0.5838 |
| 0.1595 | 178.92 | 3400 | 1.1721 | 0.5778 |
| 0.1505 | 189.46 | 3600 | 1.1654 | 0.5744 |
| 0.1486 | 199.97 | 3800 | 1.1679 | 0.5761 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['bas']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'bas', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,689 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bas-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BAS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5997
- Wer: 0.3870
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 --dataset mozilla-foundation/common_voice_8_0 --config bas --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Basaa (bas) language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.7076 | 5.26 | 200 | 3.6361 | 1.0 |
| 3.1657 | 10.52 | 400 | 3.0101 | 1.0 |
| 2.3987 | 15.78 | 600 | 0.9125 | 0.6774 |
| 1.0079 | 21.05 | 800 | 0.6477 | 0.5352 |
| 0.7392 | 26.31 | 1000 | 0.5432 | 0.4929 |
| 0.6114 | 31.57 | 1200 | 0.5498 | 0.4639 |
| 0.5222 | 36.83 | 1400 | 0.5220 | 0.4561 |
| 0.4648 | 42.1 | 1600 | 0.5586 | 0.4289 |
| 0.4103 | 47.36 | 1800 | 0.5337 | 0.4082 |
| 0.3692 | 52.62 | 2000 | 0.5421 | 0.3861 |
| 0.3403 | 57.88 | 2200 | 0.5549 | 0.4096 |
| 0.3011 | 63.16 | 2400 | 0.5833 | 0.3925 |
| 0.2932 | 68.42 | 2600 | 0.5674 | 0.3815 |
| 0.2696 | 73.68 | 2800 | 0.5734 | 0.3889 |
| 0.2496 | 78.94 | 3000 | 0.5968 | 0.3985 |
| 0.2289 | 84.21 | 3200 | 0.5888 | 0.3893 |
| 0.2091 | 89.47 | 3400 | 0.5849 | 0.3852 |
| 0.2005 | 94.73 | 3600 | 0.5938 | 0.3875 |
| 0.1876 | 99.99 | 3800 | 0.5997 | 0.3870 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2
|
DrishtiSharma
|
wav2vec2
| 12 | 11 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['bg']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'bg', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
| true | true | true | 2,860 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bg-d2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3421
- Wer: 0.2860
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset mozilla-foundation/common_voice_8_0 --config bg --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 10 --stride_length_s 1
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.8791 | 1.74 | 200 | 3.1902 | 1.0 |
| 3.0441 | 3.48 | 400 | 2.8098 | 0.9864 |
| 1.1499 | 5.22 | 600 | 0.4668 | 0.5014 |
| 0.4968 | 6.96 | 800 | 0.4162 | 0.4472 |
| 0.3553 | 8.7 | 1000 | 0.3580 | 0.3777 |
| 0.3027 | 10.43 | 1200 | 0.3422 | 0.3506 |
| 0.2562 | 12.17 | 1400 | 0.3556 | 0.3639 |
| 0.2272 | 13.91 | 1600 | 0.3621 | 0.3583 |
| 0.2125 | 15.65 | 1800 | 0.3436 | 0.3358 |
| 0.1904 | 17.39 | 2000 | 0.3650 | 0.3545 |
| 0.1695 | 19.13 | 2200 | 0.3366 | 0.3241 |
| 0.1532 | 20.87 | 2400 | 0.3550 | 0.3311 |
| 0.1453 | 22.61 | 2600 | 0.3582 | 0.3131 |
| 0.1359 | 24.35 | 2800 | 0.3524 | 0.3084 |
| 0.1233 | 26.09 | 3000 | 0.3503 | 0.2973 |
| 0.1114 | 27.83 | 3200 | 0.3434 | 0.2946 |
| 0.1051 | 29.57 | 3400 | 0.3474 | 0.2956 |
| 0.0965 | 31.3 | 3600 | 0.3426 | 0.2907 |
| 0.0923 | 33.04 | 3800 | 0.3478 | 0.2894 |
| 0.0894 | 34.78 | 4000 | 0.3421 | 0.2860 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1
|
DrishtiSharma
|
wav2vec2
| 19 | 6 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['bg']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'bg', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
| true | true | true | 2,715 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5197
- Wer: 0.4689
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset mozilla-foundation/common_voice_8_0 --config bg --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 10 --stride_length_s 1
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.3711 | 2.61 | 300 | 4.3122 | 1.0 |
| 3.1653 | 5.22 | 600 | 3.1156 | 1.0 |
| 2.8904 | 7.83 | 900 | 2.8421 | 0.9918 |
| 0.9207 | 10.43 | 1200 | 0.9895 | 0.8689 |
| 0.6384 | 13.04 | 1500 | 0.6994 | 0.7700 |
| 0.5215 | 15.65 | 1800 | 0.5628 | 0.6443 |
| 0.4573 | 18.26 | 2100 | 0.5316 | 0.6174 |
| 0.3875 | 20.87 | 2400 | 0.4932 | 0.5779 |
| 0.3562 | 23.48 | 2700 | 0.4972 | 0.5475 |
| 0.3218 | 26.09 | 3000 | 0.4895 | 0.5219 |
| 0.2954 | 28.7 | 3300 | 0.5226 | 0.5192 |
| 0.287 | 31.3 | 3600 | 0.4957 | 0.5146 |
| 0.2587 | 33.91 | 3900 | 0.4944 | 0.4893 |
| 0.2496 | 36.52 | 4200 | 0.4976 | 0.4895 |
| 0.2365 | 39.13 | 4500 | 0.5185 | 0.4819 |
| 0.2264 | 41.74 | 4800 | 0.5152 | 0.4776 |
| 0.2224 | 44.35 | 5100 | 0.5031 | 0.4746 |
| 0.2096 | 46.96 | 5400 | 0.5062 | 0.4708 |
| 0.2038 | 49.57 | 5700 | 0.5217 | 0.4698 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['br']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
| true | true | true | 5,981 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-br-d10
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1382
- Wer: 0.4895
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10 --dataset mozilla-foundation/common_voice_8_0 --config br --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Breton language isn't available in speech-recognition-community-v2/dev_data
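If it helps, the test split used by the first command can be loaded and resampled as shown below; access to Common Voice 8.0 on the Hub is gated, so an authenticated token is assumed.
```python
# Illustrative only: load the Breton test split and cast audio to 16 kHz,
# which is the sampling rate the model expects.
from datasets import Audio, load_dataset
test = load_dataset(
"mozilla-foundation/common_voice_8_0", "br", split="test", use_auth_token=True
) # newer datasets versions use token=True instead of use_auth_token=True
test = test.cast_column("audio", Audio(sampling_rate=16_000))
print(test[0]["audio"]["array"].shape, test[0]["sentence"])
```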
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 50
- mixed_precision_training: Native AMP
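The gradient accumulation listed above (per-device batch 16, accumulated over 2 steps) is what yields the effective batch size of 32. A toy, self-contained sketch of the pattern, with stand-in tensors rather than the real model:
```python
import torch
from torch import nn
# Toy stand-ins so the accumulation pattern runs end to end (not the real model).
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)
batches = [(torch.randn(16, 10), torch.randn(16, 1)) for _ in range(4)]
accumulation_steps = 2 # 16 x 2 = effective train batch size of 32, as above
optimizer.zero_grad()
for step, (x, y) in enumerate(batches):
    loss = nn.functional.mse_loss(model(x), y) / accumulation_steps
    loss.backward() # gradients add up across micro-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```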
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 13.611 | 0.68 | 100 | 5.8492 | 1.0 |
| 3.8176 | 1.35 | 200 | 3.2181 | 1.0 |
| 3.0457 | 2.03 | 300 | 3.0902 | 1.0 |
| 2.2632 | 2.7 | 400 | 1.4882 | 0.9426 |
| 1.1965 | 3.38 | 500 | 1.1396 | 0.7950 |
| 0.984 | 4.05 | 600 | 1.0216 | 0.7583 |
| 0.8036 | 4.73 | 700 | 1.0258 | 0.7202 |
| 0.7061 | 5.41 | 800 | 0.9710 | 0.6820 |
| 0.689 | 6.08 | 900 | 0.9731 | 0.6488 |
| 0.6063 | 6.76 | 1000 | 0.9442 | 0.6569 |
| 0.5215 | 7.43 | 1100 | 1.0221 | 0.6671 |
| 0.4965 | 8.11 | 1200 | 0.9266 | 0.6181 |
| 0.4321 | 8.78 | 1300 | 0.9050 | 0.5991 |
| 0.3762 | 9.46 | 1400 | 0.9801 | 0.6134 |
| 0.3747 | 10.14 | 1500 | 0.9210 | 0.5747 |
| 0.3554 | 10.81 | 1600 | 0.9720 | 0.6051 |
| 0.3148 | 11.49 | 1700 | 0.9672 | 0.6099 |
| 0.3176 | 12.16 | 1800 | 1.0120 | 0.5966 |
| 0.2915 | 12.84 | 1900 | 0.9490 | 0.5653 |
| 0.2696 | 13.51 | 2000 | 0.9394 | 0.5819 |
| 0.2569 | 14.19 | 2100 | 1.0197 | 0.5667 |
| 0.2395 | 14.86 | 2200 | 0.9771 | 0.5608 |
| 0.2367 | 15.54 | 2300 | 1.0516 | 0.5678 |
| 0.2153 | 16.22 | 2400 | 1.0097 | 0.5679 |
| 0.2092 | 16.89 | 2500 | 1.0143 | 0.5430 |
| 0.2046 | 17.57 | 2600 | 1.0884 | 0.5631 |
| 0.1937 | 18.24 | 2700 | 1.0113 | 0.5648 |
| 0.1752 | 18.92 | 2800 | 1.0056 | 0.5470 |
| 0.164 | 19.59 | 2900 | 1.0340 | 0.5508 |
| 0.1723 | 20.27 | 3000 | 1.0743 | 0.5615 |
| 0.1535 | 20.95 | 3100 | 1.0495 | 0.5465 |
| 0.1432 | 21.62 | 3200 | 1.0390 | 0.5333 |
| 0.1561 | 22.3 | 3300 | 1.0798 | 0.5590 |
| 0.1384 | 22.97 | 3400 | 1.1716 | 0.5449 |
| 0.1359 | 23.65 | 3500 | 1.1154 | 0.5420 |
| 0.1356 | 24.32 | 3600 | 1.0883 | 0.5387 |
| 0.1355 | 25.0 | 3700 | 1.1114 | 0.5504 |
| 0.1158 | 25.68 | 3800 | 1.1171 | 0.5388 |
| 0.1166 | 26.35 | 3900 | 1.1335 | 0.5403 |
| 0.1165 | 27.03 | 4000 | 1.1374 | 0.5248 |
| 0.1064 | 27.7 | 4100 | 1.0336 | 0.5298 |
| 0.0987 | 28.38 | 4200 | 1.0407 | 0.5216 |
| 0.104 | 29.05 | 4300 | 1.1012 | 0.5350 |
| 0.0894 | 29.73 | 4400 | 1.1016 | 0.5310 |
| 0.0912 | 30.41 | 4500 | 1.1383 | 0.5302 |
| 0.0972 | 31.08 | 4600 | 1.0851 | 0.5214 |
| 0.0832 | 31.76 | 4700 | 1.1705 | 0.5311 |
| 0.0859 | 32.43 | 4800 | 1.0750 | 0.5192 |
| 0.0811 | 33.11 | 4900 | 1.0900 | 0.5180 |
| 0.0825 | 33.78 | 5000 | 1.1271 | 0.5196 |
| 0.07 | 34.46 | 5100 | 1.1289 | 0.5141 |
| 0.0689 | 35.14 | 5200 | 1.0960 | 0.5101 |
| 0.068 | 35.81 | 5300 | 1.1377 | 0.5050 |
| 0.0776 | 36.49 | 5400 | 1.0880 | 0.5194 |
| 0.0642 | 37.16 | 5500 | 1.1027 | 0.5076 |
| 0.0607 | 37.84 | 5600 | 1.1293 | 0.5119 |
| 0.0607 | 38.51 | 5700 | 1.1229 | 0.5103 |
| 0.0545 | 39.19 | 5800 | 1.1168 | 0.5103 |
| 0.0562 | 39.86 | 5900 | 1.1206 | 0.5073 |
| 0.0484 | 40.54 | 6000 | 1.1710 | 0.5019 |
| 0.0499 | 41.22 | 6100 | 1.1511 | 0.5100 |
| 0.0455 | 41.89 | 6200 | 1.1488 | 0.5009 |
| 0.0475 | 42.57 | 6300 | 1.1196 | 0.4944 |
| 0.0413 | 43.24 | 6400 | 1.1654 | 0.4996 |
| 0.0389 | 43.92 | 6500 | 1.0961 | 0.4930 |
| 0.0428 | 44.59 | 6600 | 1.0955 | 0.4938 |
| 0.039 | 45.27 | 6700 | 1.1323 | 0.4955 |
| 0.0352 | 45.95 | 6800 | 1.1040 | 0.4930 |
| 0.0334 | 46.62 | 6900 | 1.1382 | 0.4942 |
| 0.0338 | 47.3 | 7000 | 1.1264 | 0.4911 |
| 0.0307 | 47.97 | 7100 | 1.1216 | 0.4881 |
| 0.0286 | 48.65 | 7200 | 1.1459 | 0.4894 |
| 0.0348 | 49.32 | 7300 | 1.1419 | 0.4906 |
| 0.0329 | 50.0 | 7400 | 1.1382 | 0.4895 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2
|
DrishtiSharma
|
wav2vec2
| 13 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['br']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
| true | true | true | 5,978 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-br-d2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1257
- Wer: 0.4631
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2 --dataset mozilla-foundation/common_voice_8_0 --config br --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Breton language isn't available in speech-recognition-community-v2/dev_data
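Besides eval.py, the checkpoint can be queried directly with the processor and model classes. A hedged sketch with plain greedy CTC decoding follows; the silent input is a placeholder for real 16 kHz audio.
```python
# Sketch of greedy (argmax) CTC decoding; replace the silence with a real recording.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
speech = np.zeros(16_000, dtype=np.float32) # placeholder: 1 s of 16 kHz audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```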
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00034
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 50
- mixed_precision_training: Native AMP
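The linear schedule with 750 warmup steps listed above can be reproduced with the library's scheduler helper. The toy optimizer below only illustrates the ramp-up; the 7400 total steps is taken from the results table that follows.
```python
import torch
from transformers import get_linear_schedule_with_warmup
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=3.4e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
optimizer, num_warmup_steps=750, num_training_steps=7400
)
for _ in range(5): # a few steps just to show the learning rate ramping up
    optimizer.step()
    scheduler.step()
    print(scheduler.get_last_lr())
```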
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 14.0379 | 0.68 | 100 | 5.6808 | 1.0 |
| 3.9145 | 1.35 | 200 | 3.1970 | 1.0 |
| 3.0293 | 2.03 | 300 | 2.9513 | 1.0 |
| 2.0927 | 2.7 | 400 | 1.4545 | 0.8887 |
| 1.1556 | 3.38 | 500 | 1.0966 | 0.7564 |
| 0.9628 | 4.05 | 600 | 0.9808 | 0.7364 |
| 0.7869 | 4.73 | 700 | 1.0488 | 0.7355 |
| 0.703 | 5.41 | 800 | 0.9500 | 0.6881 |
| 0.6657 | 6.08 | 900 | 0.9309 | 0.6259 |
| 0.5663 | 6.76 | 1000 | 0.9133 | 0.6357 |
| 0.496 | 7.43 | 1100 | 0.9890 | 0.6028 |
| 0.4748 | 8.11 | 1200 | 0.9469 | 0.5894 |
| 0.4135 | 8.78 | 1300 | 0.9270 | 0.6045 |
| 0.3579 | 9.46 | 1400 | 0.8818 | 0.5708 |
| 0.353 | 10.14 | 1500 | 0.9244 | 0.5781 |
| 0.334 | 10.81 | 1600 | 0.9009 | 0.5638 |
| 0.2917 | 11.49 | 1700 | 1.0132 | 0.5828 |
| 0.29 | 12.16 | 1800 | 0.9696 | 0.5668 |
| 0.2691 | 12.84 | 1900 | 0.9811 | 0.5455 |
| 0.25 | 13.51 | 2000 | 0.9951 | 0.5624 |
| 0.2467 | 14.19 | 2100 | 0.9653 | 0.5573 |
| 0.2242 | 14.86 | 2200 | 0.9714 | 0.5378 |
| 0.2066 | 15.54 | 2300 | 0.9829 | 0.5394 |
| 0.2075 | 16.22 | 2400 | 1.0547 | 0.5520 |
| 0.1923 | 16.89 | 2500 | 1.0014 | 0.5397 |
| 0.1919 | 17.57 | 2600 | 0.9978 | 0.5477 |
| 0.1908 | 18.24 | 2700 | 1.1064 | 0.5397 |
| 0.157 | 18.92 | 2800 | 1.0629 | 0.5238 |
| 0.159 | 19.59 | 2900 | 1.0642 | 0.5321 |
| 0.1652 | 20.27 | 3000 | 1.0207 | 0.5328 |
| 0.141 | 20.95 | 3100 | 0.9948 | 0.5312 |
| 0.1417 | 21.62 | 3200 | 1.0338 | 0.5328 |
| 0.1514 | 22.3 | 3300 | 1.0513 | 0.5313 |
| 0.1365 | 22.97 | 3400 | 1.0357 | 0.5291 |
| 0.1319 | 23.65 | 3500 | 1.0587 | 0.5167 |
| 0.1298 | 24.32 | 3600 | 1.0636 | 0.5236 |
| 0.1245 | 25.0 | 3700 | 1.1367 | 0.5280 |
| 0.1114 | 25.68 | 3800 | 1.0633 | 0.5200 |
| 0.1088 | 26.35 | 3900 | 1.0495 | 0.5210 |
| 0.1175 | 27.03 | 4000 | 1.0897 | 0.5095 |
| 0.1043 | 27.7 | 4100 | 1.0580 | 0.5309 |
| 0.0951 | 28.38 | 4200 | 1.0448 | 0.5067 |
| 0.1011 | 29.05 | 4300 | 1.0665 | 0.5137 |
| 0.0889 | 29.73 | 4400 | 1.0579 | 0.5026 |
| 0.0833 | 30.41 | 4500 | 1.0740 | 0.5037 |
| 0.0889 | 31.08 | 4600 | 1.0933 | 0.5083 |
| 0.0784 | 31.76 | 4700 | 1.0715 | 0.5089 |
| 0.0767 | 32.43 | 4800 | 1.0658 | 0.5049 |
| 0.0769 | 33.11 | 4900 | 1.1118 | 0.4979 |
| 0.0722 | 33.78 | 5000 | 1.1413 | 0.4986 |
| 0.0709 | 34.46 | 5100 | 1.0706 | 0.4885 |
| 0.0664 | 35.14 | 5200 | 1.1217 | 0.4884 |
| 0.0648 | 35.81 | 5300 | 1.1298 | 0.4941 |
| 0.0657 | 36.49 | 5400 | 1.1330 | 0.4920 |
| 0.0582 | 37.16 | 5500 | 1.0598 | 0.4835 |
| 0.0602 | 37.84 | 5600 | 1.1097 | 0.4943 |
| 0.0598 | 38.51 | 5700 | 1.0976 | 0.4876 |
| 0.0547 | 39.19 | 5800 | 1.0734 | 0.4825 |
| 0.0561 | 39.86 | 5900 | 1.0926 | 0.4850 |
| 0.0516 | 40.54 | 6000 | 1.1579 | 0.4751 |
| 0.0478 | 41.22 | 6100 | 1.1384 | 0.4706 |
| 0.0396 | 41.89 | 6200 | 1.1462 | 0.4739 |
| 0.0472 | 42.57 | 6300 | 1.1277 | 0.4732 |
| 0.0447 | 43.24 | 6400 | 1.1517 | 0.4752 |
| 0.0423 | 43.92 | 6500 | 1.1219 | 0.4784 |
| 0.0426 | 44.59 | 6600 | 1.1311 | 0.4724 |
| 0.0391 | 45.27 | 6700 | 1.1135 | 0.4692 |
| 0.0362 | 45.95 | 6800 | 1.0878 | 0.4645 |
| 0.0329 | 46.62 | 6900 | 1.1137 | 0.4668 |
| 0.0356 | 47.3 | 7000 | 1.1233 | 0.4687 |
| 0.0328 | 47.97 | 7100 | 1.1238 | 0.4653 |
| 0.0323 | 48.65 | 7200 | 1.1307 | 0.4646 |
| 0.0325 | 49.32 | 7300 | 1.1242 | 0.4645 |
| 0.03 | 50.0 | 7400 | 1.1257 | 0.4631 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1
|
DrishtiSharma
|
wav2vec2
| 13 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['gn']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'gn', 'robust-speech-event', 'hf-asr-leaderboard']
| true | true | true | 2,932 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-gn-k1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9220
- Wer: 0.6631
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1 --dataset mozilla-foundation/common_voice_8_0 --config gn --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
NA
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00018
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 200
- mixed_precision_training: Native AMP
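"Native AMP" above refers to PyTorch's built-in mixed precision. A self-contained sketch of one training step with autocast and gradient scaling, where a toy model stands in for the actual wav2vec2 fine-tuning:
```python
import torch
from torch import nn
from torch.cuda.amp import GradScaler, autocast
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1.8e-4)
scaler = GradScaler(enabled=(device == "cuda")) # no-op on CPU
x = torch.randn(16, 10, device=device)
y = torch.randn(16, 1, device=device)
optimizer.zero_grad()
with autocast(enabled=(device == "cuda")): # forward pass in fp16 on GPU
    loss = nn.functional.mse_loss(model(x), y)
scaler.scale(loss).backward() # scale to avoid fp16 gradient underflow
scaler.step(optimizer)
scaler.update()
```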
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 15.9402 | 8.32 | 100 | 6.9185 | 1.0 |
| 4.6367 | 16.64 | 200 | 3.7416 | 1.0 |
| 3.4337 | 24.96 | 300 | 3.2581 | 1.0 |
| 3.2307 | 33.32 | 400 | 2.8008 | 1.0 |
| 1.3182 | 41.64 | 500 | 0.8359 | 0.8171 |
| 0.409 | 49.96 | 600 | 0.8470 | 0.8323 |
| 0.2573 | 58.32 | 700 | 0.7823 | 0.7576 |
| 0.1969 | 66.64 | 800 | 0.8306 | 0.7424 |
| 0.1469 | 74.96 | 900 | 0.9225 | 0.7713 |
| 0.1172 | 83.32 | 1000 | 0.7903 | 0.6951 |
| 0.1017 | 91.64 | 1100 | 0.8519 | 0.6921 |
| 0.0851 | 99.96 | 1200 | 0.8129 | 0.6646 |
| 0.071 | 108.32 | 1300 | 0.8614 | 0.7043 |
| 0.061 | 116.64 | 1400 | 0.8414 | 0.6921 |
| 0.0552 | 124.96 | 1500 | 0.8649 | 0.6905 |
| 0.0465 | 133.32 | 1600 | 0.8575 | 0.6646 |
| 0.0381 | 141.64 | 1700 | 0.8802 | 0.6723 |
| 0.0338 | 149.96 | 1800 | 0.8731 | 0.6845 |
| 0.0306 | 158.32 | 1900 | 0.9003 | 0.6585 |
| 0.0236 | 166.64 | 2000 | 0.9408 | 0.6616 |
| 0.021 | 174.96 | 2100 | 0.9353 | 0.6723 |
| 0.0212 | 183.32 | 2200 | 0.9269 | 0.6570 |
| 0.0191 | 191.64 | 2300 | 0.9277 | 0.6662 |
| 0.0161 | 199.96 | 2400 | 0.9220 | 0.6631 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
|
['mozilla-foundation/common_voice_7_0']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'hi', 'robust-speech-event', 'hf-asr-leaderboard']
| true | true | true | 4,085 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-CV7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6588
- Wer: 0.2987
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
NA
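The WER figures in this card can be reproduced with `jiwer`, the backend the Hugging Face WER metric wraps; the Hindi strings below are placeholders, not outputs of this model.
```python
from jiwer import wer
references = ["यह एक उदाहरण वाक्य है"]
predictions = ["यह एक उदाहरण वाक्य हैं"]
print(wer(references, predictions)) # 0.2: one substitution out of five words
```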
### Training hyperparameters
The following hyperparameters were used during training:
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.809 | 1.36 | 200 | 6.2066 | 1.0 |
| 4.3402 | 2.72 | 400 | 3.5184 | 1.0 |
| 3.4365 | 4.08 | 600 | 3.2779 | 1.0 |
| 1.8643 | 5.44 | 800 | 0.9875 | 0.6270 |
| 0.7504 | 6.8 | 1000 | 0.6382 | 0.4666 |
| 0.5328 | 8.16 | 1200 | 0.6075 | 0.4505 |
| 0.4364 | 9.52 | 1400 | 0.5785 | 0.4215 |
| 0.3777 | 10.88 | 1600 | 0.6279 | 0.4227 |
| 0.3374 | 12.24 | 1800 | 0.6536 | 0.4192 |
| 0.3236 | 13.6 | 2000 | 0.5911 | 0.4047 |
| 0.2877 | 14.96 | 2200 | 0.5955 | 0.4097 |
| 0.2643 | 16.33 | 2400 | 0.5923 | 0.3744 |
| 0.2421 | 17.68 | 2600 | 0.6307 | 0.3814 |
| 0.2218 | 19.05 | 2800 | 0.6036 | 0.3764 |
| 0.2046 | 20.41 | 3000 | 0.6286 | 0.3797 |
| 0.191 | 21.77 | 3200 | 0.6517 | 0.3889 |
| 0.1856 | 23.13 | 3400 | 0.6193 | 0.3661 |
| 0.1721 | 24.49 | 3600 | 0.7034 | 0.3727 |
| 0.1656 | 25.85 | 3800 | 0.6293 | 0.3591 |
| 0.1532 | 27.21 | 4000 | 0.6075 | 0.3611 |
| 0.1507 | 28.57 | 4200 | 0.6313 | 0.3565 |
| 0.1381 | 29.93 | 4400 | 0.6564 | 0.3578 |
| 0.1359 | 31.29 | 4600 | 0.6724 | 0.3543 |
| 0.1248 | 32.65 | 4800 | 0.6789 | 0.3512 |
| 0.1198 | 34.01 | 5000 | 0.6442 | 0.3539 |
| 0.1125 | 35.37 | 5200 | 0.6676 | 0.3419 |
| 0.1036 | 36.73 | 5400 | 0.7017 | 0.3435 |
| 0.0982 | 38.09 | 5600 | 0.6828 | 0.3319 |
| 0.0971 | 39.45 | 5800 | 0.6112 | 0.3351 |
| 0.0968 | 40.81 | 6000 | 0.6424 | 0.3252 |
| 0.0893 | 42.18 | 6200 | 0.6707 | 0.3304 |
| 0.0878 | 43.54 | 6400 | 0.6432 | 0.3236 |
| 0.0827 | 44.89 | 6600 | 0.6696 | 0.3240 |
| 0.0788 | 46.26 | 6800 | 0.6564 | 0.3180 |
| 0.0753 | 47.62 | 7000 | 0.6574 | 0.3130 |
| 0.0674 | 48.98 | 7200 | 0.6698 | 0.3175 |
| 0.0676 | 50.34 | 7400 | 0.6441 | 0.3142 |
| 0.0626 | 51.7 | 7600 | 0.6642 | 0.3121 |
| 0.0617 | 53.06 | 7800 | 0.6615 | 0.3117 |
| 0.0599 | 54.42 | 8000 | 0.6634 | 0.3059 |
| 0.0538 | 55.78 | 8200 | 0.6464 | 0.3033 |
| 0.0571 | 57.14 | 8400 | 0.6503 | 0.3018 |
| 0.0491 | 58.5 | 8600 | 0.6625 | 0.3025 |
| 0.0511 | 59.86 | 8800 | 0.6588 | 0.2987 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2
|
DrishtiSharma
|
wav2vec2
| 12 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'robust-speech-event', 'hf-asr-leaderboard']
| true | true | true | 3,526 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-cv8-b2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7322
- Wer: 0.3469
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2 --dataset mozilla-foundation/common_voice_8_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Hindi language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6226 | 1.04 | 200 | 3.8855 | 1.0 |
| 3.4678 | 2.07 | 400 | 3.4283 | 1.0 |
| 2.3668 | 3.11 | 600 | 1.0743 | 0.7175 |
| 0.7308 | 4.15 | 800 | 0.7663 | 0.5498 |
| 0.4985 | 5.18 | 1000 | 0.6957 | 0.5001 |
| 0.3817 | 6.22 | 1200 | 0.6932 | 0.4866 |
| 0.3281 | 7.25 | 1400 | 0.7034 | 0.4983 |
| 0.2752 | 8.29 | 1600 | 0.6588 | 0.4606 |
| 0.2475 | 9.33 | 1800 | 0.6514 | 0.4328 |
| 0.219 | 10.36 | 2000 | 0.6396 | 0.4176 |
| 0.2036 | 11.4 | 2200 | 0.6867 | 0.4162 |
| 0.1793 | 12.44 | 2400 | 0.6943 | 0.4196 |
| 0.1724 | 13.47 | 2600 | 0.6862 | 0.4260 |
| 0.1554 | 14.51 | 2800 | 0.7615 | 0.4222 |
| 0.151 | 15.54 | 3000 | 0.7058 | 0.4110 |
| 0.1335 | 16.58 | 3200 | 0.7172 | 0.3986 |
| 0.1326 | 17.62 | 3400 | 0.7182 | 0.3923 |
| 0.1225 | 18.65 | 3600 | 0.6995 | 0.3910 |
| 0.1146 | 19.69 | 3800 | 0.7075 | 0.3875 |
| 0.108 | 20.73 | 4000 | 0.7297 | 0.3858 |
| 0.1048 | 21.76 | 4200 | 0.7413 | 0.3850 |
| 0.0979 | 22.8 | 4400 | 0.7452 | 0.3793 |
| 0.0946 | 23.83 | 4600 | 0.7436 | 0.3759 |
| 0.0897 | 24.87 | 4800 | 0.7289 | 0.3754 |
| 0.0854 | 25.91 | 5000 | 0.7271 | 0.3667 |
| 0.0803 | 26.94 | 5200 | 0.7378 | 0.3656 |
| 0.0752 | 27.98 | 5400 | 0.7488 | 0.3680 |
| 0.0718 | 29.02 | 5600 | 0.7185 | 0.3619 |
| 0.0702 | 30.05 | 5800 | 0.7428 | 0.3554 |
| 0.0653 | 31.09 | 6000 | 0.7447 | 0.3559 |
| 0.0638 | 32.12 | 6200 | 0.7327 | 0.3523 |
| 0.058 | 33.16 | 6400 | 0.7339 | 0.3488 |
| 0.0594 | 34.2 | 6600 | 0.7322 | 0.3469 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'hi', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 4,594 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6510
- Wer: 0.3179
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset mozilla-foundation/common_voice_8_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset speech-recognition-community-v2/dev_data --config hi --split validation --chunk_length_s 10 --stride_length_s 1
Note: Hindi language not found in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.5576 | 1.04 | 200 | 6.6594 | 1.0 |
| 4.4069 | 2.07 | 400 | 3.6011 | 1.0 |
| 3.4273 | 3.11 | 600 | 3.3370 | 1.0 |
| 2.1108 | 4.15 | 800 | 1.0641 | 0.6562 |
| 0.8817 | 5.18 | 1000 | 0.7178 | 0.5172 |
| 0.6508 | 6.22 | 1200 | 0.6612 | 0.4839 |
| 0.5524 | 7.25 | 1400 | 0.6458 | 0.4889 |
| 0.4992 | 8.29 | 1600 | 0.5791 | 0.4382 |
| 0.4669 | 9.33 | 1800 | 0.6039 | 0.4352 |
| 0.4441 | 10.36 | 2000 | 0.6276 | 0.4297 |
| 0.4172 | 11.4 | 2200 | 0.6183 | 0.4474 |
| 0.3872 | 12.44 | 2400 | 0.5886 | 0.4231 |
| 0.3692 | 13.47 | 2600 | 0.6448 | 0.4399 |
| 0.3385 | 14.51 | 2800 | 0.6344 | 0.4075 |
| 0.3246 | 15.54 | 3000 | 0.5896 | 0.4087 |
| 0.3026 | 16.58 | 3200 | 0.6158 | 0.4016 |
| 0.284 | 17.62 | 3400 | 0.6038 | 0.3906 |
| 0.2682 | 18.65 | 3600 | 0.6165 | 0.3900 |
| 0.2577 | 19.69 | 3800 | 0.5754 | 0.3805 |
| 0.2509 | 20.73 | 4000 | 0.6028 | 0.3925 |
| 0.2426 | 21.76 | 4200 | 0.6335 | 0.4138 |
| 0.2346 | 22.8 | 4400 | 0.6128 | 0.3870 |
| 0.2205 | 23.83 | 4600 | 0.6223 | 0.3831 |
| 0.2104 | 24.87 | 4800 | 0.6122 | 0.3781 |
| 0.1992 | 25.91 | 5000 | 0.6467 | 0.3792 |
| 0.1916 | 26.94 | 5200 | 0.6277 | 0.3636 |
| 0.1835 | 27.98 | 5400 | 0.6317 | 0.3773 |
| 0.1776 | 29.02 | 5600 | 0.6124 | 0.3614 |
| 0.1751 | 30.05 | 5800 | 0.6475 | 0.3628 |
| 0.1662 | 31.09 | 6000 | 0.6266 | 0.3504 |
| 0.1584 | 32.12 | 6200 | 0.6347 | 0.3532 |
| 0.1494 | 33.16 | 6400 | 0.6636 | 0.3491 |
| 0.1457 | 34.2 | 6600 | 0.6334 | 0.3507 |
| 0.1427 | 35.23 | 6800 | 0.6397 | 0.3442 |
| 0.1397 | 36.27 | 7000 | 0.6468 | 0.3496 |
| 0.1283 | 37.31 | 7200 | 0.6291 | 0.3416 |
| 0.1255 | 38.34 | 7400 | 0.6652 | 0.3461 |
| 0.1195 | 39.38 | 7600 | 0.6587 | 0.3342 |
| 0.1169 | 40.41 | 7800 | 0.6478 | 0.3319 |
| 0.1126 | 41.45 | 8000 | 0.6280 | 0.3291 |
| 0.1112 | 42.49 | 8200 | 0.6434 | 0.3290 |
| 0.1069 | 43.52 | 8400 | 0.6542 | 0.3268 |
| 0.1027 | 44.56 | 8600 | 0.6536 | 0.3239 |
| 0.0993 | 45.6 | 8800 | 0.6622 | 0.3257 |
| 0.0973 | 46.63 | 9000 | 0.6572 | 0.3192 |
| 0.0911 | 47.67 | 9200 | 0.6522 | 0.3175 |
| 0.0897 | 48.7 | 9400 | 0.6521 | 0.3200 |
| 0.0905 | 49.74 | 9600 | 0.6510 | 0.3179 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'hi', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 3,697 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-d3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7988
- Wer: 0.3713
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Hindi language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000388
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.2826 | 1.36 | 200 | 3.5253 | 1.0 |
| 2.7019 | 2.72 | 400 | 1.1744 | 0.7360 |
| 0.7358 | 4.08 | 600 | 0.7781 | 0.5501 |
| 0.4942 | 5.44 | 800 | 0.7590 | 0.5345 |
| 0.4056 | 6.8 | 1000 | 0.6885 | 0.4776 |
| 0.3243 | 8.16 | 1200 | 0.7195 | 0.4861 |
| 0.2785 | 9.52 | 1400 | 0.7473 | 0.4930 |
| 0.2448 | 10.88 | 1600 | 0.7201 | 0.4574 |
| 0.2155 | 12.24 | 1800 | 0.7686 | 0.4648 |
| 0.2039 | 13.6 | 2000 | 0.7440 | 0.4624 |
| 0.1792 | 14.96 | 2200 | 0.7815 | 0.4658 |
| 0.1695 | 16.33 | 2400 | 0.7678 | 0.4557 |
| 0.1598 | 17.68 | 2600 | 0.7468 | 0.4393 |
| 0.1568 | 19.05 | 2800 | 0.7440 | 0.4422 |
| 0.1391 | 20.41 | 3000 | 0.7656 | 0.4317 |
| 0.1283 | 21.77 | 3200 | 0.7892 | 0.4299 |
| 0.1194 | 23.13 | 3400 | 0.7646 | 0.4192 |
| 0.1116 | 24.49 | 3600 | 0.8156 | 0.4330 |
| 0.1111 | 25.85 | 3800 | 0.7661 | 0.4322 |
| 0.1023 | 27.21 | 4000 | 0.7419 | 0.4276 |
| 0.1007 | 28.57 | 4200 | 0.8488 | 0.4245 |
| 0.0925 | 29.93 | 4400 | 0.8062 | 0.4070 |
| 0.0918 | 31.29 | 4600 | 0.8412 | 0.4218 |
| 0.0813 | 32.65 | 4800 | 0.8045 | 0.4087 |
| 0.0805 | 34.01 | 5000 | 0.8411 | 0.4113 |
| 0.0774 | 35.37 | 5200 | 0.7664 | 0.3943 |
| 0.0666 | 36.73 | 5400 | 0.8082 | 0.3939 |
| 0.0655 | 38.09 | 5600 | 0.7948 | 0.4000 |
| 0.0617 | 39.45 | 5800 | 0.8084 | 0.3932 |
| 0.0606 | 40.81 | 6000 | 0.8223 | 0.3841 |
| 0.0569 | 42.18 | 6200 | 0.7892 | 0.3832 |
| 0.0544 | 43.54 | 6400 | 0.8326 | 0.3834 |
| 0.0508 | 44.89 | 6600 | 0.7952 | 0.3774 |
| 0.0492 | 46.26 | 6800 | 0.7923 | 0.3756 |
| 0.0459 | 47.62 | 7000 | 0.7925 | 0.3701 |
| 0.0423 | 48.98 | 7200 | 0.7988 | 0.3713 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-wx1
|
DrishtiSharma
|
wav2vec2
| 16 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
|
['mozilla-foundation/common_voice_7_0']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'hf-asr-leaderboard', 'robust-speech-event']
| true | true | true | 3,623 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-wx1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6552
- Wer: 0.3200
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-wx1 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
NA
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1800
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.2663 | 1.36 | 200 | 5.9245 | 1.0 |
| 4.1856 | 2.72 | 400 | 3.4968 | 1.0 |
| 3.3908 | 4.08 | 600 | 2.9970 | 1.0 |
| 1.5444 | 5.44 | 800 | 0.9071 | 0.6139 |
| 0.7237 | 6.8 | 1000 | 0.6508 | 0.4862 |
| 0.5323 | 8.16 | 1200 | 0.6217 | 0.4647 |
| 0.4426 | 9.52 | 1400 | 0.5785 | 0.4288 |
| 0.3933 | 10.88 | 1600 | 0.5935 | 0.4217 |
| 0.3532 | 12.24 | 1800 | 0.6358 | 0.4465 |
| 0.3319 | 13.6 | 2000 | 0.5789 | 0.4118 |
| 0.2877 | 14.96 | 2200 | 0.6163 | 0.4056 |
| 0.2663 | 16.33 | 2400 | 0.6176 | 0.3893 |
| 0.2511 | 17.68 | 2600 | 0.6065 | 0.3999 |
| 0.2275 | 19.05 | 2800 | 0.6183 | 0.3842 |
| 0.2098 | 20.41 | 3000 | 0.6486 | 0.3864 |
| 0.1943 | 21.77 | 3200 | 0.6365 | 0.3885 |
| 0.1877 | 23.13 | 3400 | 0.6013 | 0.3677 |
| 0.1679 | 24.49 | 3600 | 0.6451 | 0.3795 |
| 0.1667 | 25.85 | 3800 | 0.6410 | 0.3635 |
| 0.1514 | 27.21 | 4000 | 0.6000 | 0.3577 |
| 0.1453 | 28.57 | 4200 | 0.6020 | 0.3518 |
| 0.134 | 29.93 | 4400 | 0.6531 | 0.3517 |
| 0.1354 | 31.29 | 4600 | 0.6874 | 0.3578 |
| 0.1224 | 32.65 | 4800 | 0.6519 | 0.3492 |
| 0.1199 | 34.01 | 5000 | 0.6553 | 0.3490 |
| 0.1077 | 35.37 | 5200 | 0.6621 | 0.3429 |
| 0.0997 | 36.73 | 5400 | 0.6641 | 0.3413 |
| 0.0964 | 38.09 | 5600 | 0.6722 | 0.3385 |
| 0.0931 | 39.45 | 5800 | 0.6365 | 0.3363 |
| 0.0944 | 40.81 | 6000 | 0.6454 | 0.3326 |
| 0.0862 | 42.18 | 6200 | 0.6497 | 0.3256 |
| 0.0848 | 43.54 | 6400 | 0.6599 | 0.3226 |
| 0.0793 | 44.89 | 6600 | 0.6625 | 0.3232 |
| 0.076 | 46.26 | 6800 | 0.6463 | 0.3186 |
| 0.0749 | 47.62 | 7000 | 0.6559 | 0.3225 |
| 0.0663 | 48.98 | 7200 | 0.6552 | 0.3200 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1
|
DrishtiSharma
|
wav2vec2
| 16 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hsb']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'hsb', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,450 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- Wer: 0.4402
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Upper Sorbian language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.972 | 3.23 | 100 | 3.7498 | 1.0 |
| 3.3401 | 6.45 | 200 | 3.2320 | 1.0 |
| 3.2046 | 9.68 | 300 | 3.1741 | 0.9806 |
| 2.4031 | 12.9 | 400 | 1.0579 | 0.8996 |
| 1.0427 | 16.13 | 500 | 0.7989 | 0.7557 |
| 0.741 | 19.35 | 600 | 0.6405 | 0.6299 |
| 0.5699 | 22.58 | 700 | 0.6129 | 0.5928 |
| 0.4607 | 25.81 | 800 | 0.6548 | 0.5695 |
| 0.3827 | 29.03 | 900 | 0.6268 | 0.5190 |
| 0.3282 | 32.26 | 1000 | 0.5919 | 0.5016 |
| 0.2764 | 35.48 | 1100 | 0.5953 | 0.4805 |
| 0.2335 | 38.71 | 1200 | 0.5717 | 0.4728 |
| 0.2106 | 41.94 | 1300 | 0.5674 | 0.4569 |
| 0.1859 | 45.16 | 1400 | 0.5685 | 0.4502 |
| 0.1592 | 48.39 | 1500 | 0.5684 | 0.4402 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hsb']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'hsb', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,442 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5328
- Wer: 0.4596
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Upper Sorbian (hsb) not found in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.5979 | 3.23 | 100 | 3.5602 | 1.0 |
| 3.303 | 6.45 | 200 | 3.2238 | 1.0 |
| 3.2034 | 9.68 | 300 | 3.2002 | 0.9888 |
| 2.7986 | 12.9 | 400 | 1.2408 | 0.9210 |
| 1.3869 | 16.13 | 500 | 0.7973 | 0.7462 |
| 1.0228 | 19.35 | 600 | 0.6722 | 0.6788 |
| 0.8311 | 22.58 | 700 | 0.6100 | 0.6150 |
| 0.717 | 25.81 | 800 | 0.6236 | 0.6013 |
| 0.6264 | 29.03 | 900 | 0.6031 | 0.5575 |
| 0.5494 | 32.26 | 1000 | 0.5656 | 0.5309 |
| 0.4781 | 35.48 | 1100 | 0.5289 | 0.4996 |
| 0.4311 | 38.71 | 1200 | 0.5375 | 0.4768 |
| 0.3902 | 41.94 | 1300 | 0.5246 | 0.4703 |
| 0.3508 | 45.16 | 1400 | 0.5382 | 0.4696 |
| 0.3199 | 48.39 | 1500 | 0.5328 | 0.4596 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hsb']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'hsb', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,450 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-v3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6549
- Wer: 0.4827
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Upper Sorbian (hsb) language not found in speech-recognition-community-v2/dev_data!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.8951 | 3.23 | 100 | 3.6396 | 1.0 |
| 3.314 | 6.45 | 200 | 3.2331 | 1.0 |
| 3.1931 | 9.68 | 300 | 3.0947 | 0.9906 |
| 1.7079 | 12.9 | 400 | 0.8865 | 0.8499 |
| 0.6859 | 16.13 | 500 | 0.7994 | 0.7529 |
| 0.4804 | 19.35 | 600 | 0.7783 | 0.7069 |
| 0.3506 | 22.58 | 700 | 0.6904 | 0.6321 |
| 0.2695 | 25.81 | 800 | 0.6519 | 0.5926 |
| 0.222 | 29.03 | 900 | 0.7041 | 0.5720 |
| 0.1828 | 32.26 | 1000 | 0.6608 | 0.5513 |
| 0.1474 | 35.48 | 1100 | 0.7129 | 0.5319 |
| 0.1269 | 38.71 | 1200 | 0.6664 | 0.5056 |
| 0.1077 | 41.94 | 1300 | 0.6712 | 0.4942 |
| 0.0934 | 45.16 | 1400 | 0.6467 | 0.4879 |
| 0.0819 | 48.39 | 1500 | 0.6549 | 0.4827 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM
|
DrishtiSharma
|
wav2vec2
| 19 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['kk']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'kk', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,692 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kk-with-LM
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - KK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7149
- Wer: 0.451
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM --dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Kazakh language isn't available in speech-recognition-community-v2/dev_data
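Since the repository name suggests it ships an n-gram language model, decoding with `Wav2Vec2ProcessorWithLM` should be possible. The sketch below is hedged: it assumes the LM files are actually present in the repo and requires `pyctcdecode` and `kenlm` to be installed; the silent input is a placeholder.
```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM
model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id) # assumes LM files exist
model = Wav2Vec2ForCTC.from_pretrained(model_id)
speech = np.zeros(16_000, dtype=np.float32) # placeholder: 1 s of 16 kHz audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
# batch_decode on the raw logits runs pyctcdecode beam search with the n-gram LM.
print(processor.batch_decode(logits.numpy()).text)
```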
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000222
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 150.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.6799 | 9.09 | 200 | 3.6119 | 1.0 |
| 3.1332 | 18.18 | 400 | 2.5352 | 1.005 |
| 1.0465 | 27.27 | 600 | 0.6169 | 0.682 |
| 0.3452 | 36.36 | 800 | 0.6572 | 0.607 |
| 0.2575 | 45.44 | 1000 | 0.6527 | 0.578 |
| 0.2088 | 54.53 | 1200 | 0.6828 | 0.551 |
| 0.158 | 63.62 | 1400 | 0.7074 | 0.5575 |
| 0.1309 | 72.71 | 1600 | 0.6523 | 0.5595 |
| 0.1074 | 81.8 | 1800 | 0.7262 | 0.5415 |
| 0.087 | 90.89 | 2000 | 0.7199 | 0.521 |
| 0.0711 | 99.98 | 2200 | 0.7113 | 0.523 |
| 0.0601 | 109.09 | 2400 | 0.6863 | 0.496 |
| 0.0451 | 118.18 | 2600 | 0.6998 | 0.483 |
| 0.0378 | 127.27 | 2800 | 0.6971 | 0.4615 |
| 0.0319 | 136.36 | 3000 | 0.7119 | 0.4475 |
| 0.0305 | 145.44 | 3200 | 0.7181 | 0.459 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Evaluation Command
!python eval.py \
--model_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 \
--dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs
|
DrishtiSharma/wav2vec2-large-xls-r-300m-maltese
|
DrishtiSharma
|
wav2vec2
| 11 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['mt']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'mt', 'robust-speech-event']
| false | true | true | 2,153 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-maltese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2994
- Wer: 0.2781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1800
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0174 | 9.01 | 1000 | 3.0552 | 1.0 |
| 1.0446 | 18.02 | 2000 | 0.6708 | 0.7577 |
| 0.7995 | 27.03 | 3000 | 0.4202 | 0.4770 |
| 0.6978 | 36.04 | 4000 | 0.3054 | 0.3494 |
| 0.6189 | 45.05 | 5000 | 0.2878 | 0.3154 |
| 0.5667 | 54.05 | 6000 | 0.3114 | 0.3286 |
| 0.5173 | 63.06 | 7000 | 0.3085 | 0.3021 |
| 0.4682 | 72.07 | 8000 | 0.3058 | 0.2969 |
| 0.451 | 81.08 | 9000 | 0.3146 | 0.2907 |
| 0.4213 | 90.09 | 10000 | 0.3030 | 0.2881 |
| 0.4005 | 99.1 | 11000 | 0.3001 | 0.2789 |
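As a rough sanity check (an estimate only, assuming the step and epoch counts in the last row above are accurate), the table implies the approximate size of the Maltese training split:
```python
# Back-of-the-envelope estimate from the last table row above.
total_steps = 11_000
epochs = 99.1
effective_batch_size = 32 # train_batch_size listed above, no gradient accumulation
steps_per_epoch = total_steps / epochs
approx_train_examples = steps_per_epoch * effective_batch_size
print(round(steps_per_epoch), round(approx_train_examples)) # ~111 steps/epoch, ~3552 examples
```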
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Evaluation Script
!python eval.py \
--model_id DrishtiSharma/wav2vec2-large-xls-r-300m-maltese \
--dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs
|
DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['mr']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'mr', 'robust-speech-event', 'hf-asr-leaderboard']
| true | true | true | 3,083 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mr-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8729
- Wer: 0.4942
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset mozilla-foundation/common_voice_8_0 --config mr --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset speech-recognition-community-v2/dev_data --config mr --split validation --chunk_length_s 10 --stride_length_s 1
Note: Marathi language not found in speech-recognition-community-v2/dev_data!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000333
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.4934 | 9.09 | 200 | 3.7326 | 1.0 |
| 3.4234 | 18.18 | 400 | 3.3383 | 0.9996 |
| 3.2628 | 27.27 | 600 | 2.7482 | 0.9992 |
| 1.7743 | 36.36 | 800 | 0.6755 | 0.6787 |
| 1.0346 | 45.45 | 1000 | 0.6067 | 0.6193 |
| 0.8137 | 54.55 | 1200 | 0.6228 | 0.5612 |
| 0.6637 | 63.64 | 1400 | 0.5976 | 0.5495 |
| 0.5563 | 72.73 | 1600 | 0.7009 | 0.5383 |
| 0.4844 | 81.82 | 1800 | 0.6662 | 0.5287 |
| 0.4057 | 90.91 | 2000 | 0.6911 | 0.5303 |
| 0.3582 | 100.0 | 2200 | 0.7207 | 0.5327 |
| 0.3163 | 109.09 | 2400 | 0.7107 | 0.5118 |
| 0.2761 | 118.18 | 2600 | 0.7538 | 0.5118 |
| 0.2415 | 127.27 | 2800 | 0.7850 | 0.5178 |
| 0.2127 | 136.36 | 3000 | 0.8016 | 0.5034 |
| 0.1873 | 145.45 | 3200 | 0.8302 | 0.5187 |
| 0.1723 | 154.55 | 3400 | 0.9085 | 0.5223 |
| 0.1498 | 163.64 | 3600 | 0.8396 | 0.5126 |
| 0.1425 | 172.73 | 3800 | 0.8776 | 0.5094 |
| 0.1258 | 181.82 | 4000 | 0.8651 | 0.5014 |
| 0.117 | 190.91 | 4200 | 0.8772 | 0.4970 |
| 0.1093 | 200.0 | 4400 | 0.8729 | 0.4942 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1
|
DrishtiSharma
|
wav2vec2
| 12 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['myv']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'myv', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 6,299 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-myv-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8537
- Wer: 0.6160
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Erzya language not found in speech-recognition-community-v2/dev_data!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000222
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 19.453 | 1.92 | 50 | 16.4001 | 1.0 |
| 9.6875 | 3.85 | 100 | 5.4468 | 1.0 |
| 4.9988 | 5.77 | 150 | 4.3507 | 1.0 |
| 4.1148 | 7.69 | 200 | 3.6753 | 1.0 |
| 3.4922 | 9.62 | 250 | 3.3103 | 1.0 |
| 3.2443 | 11.54 | 300 | 3.1741 | 1.0 |
| 3.164 | 13.46 | 350 | 3.1346 | 1.0 |
| 3.0954 | 15.38 | 400 | 3.0428 | 1.0 |
| 3.0076 | 17.31 | 450 | 2.9137 | 1.0 |
| 2.6883 | 19.23 | 500 | 2.1476 | 0.9978 |
| 1.5124 | 21.15 | 550 | 0.8955 | 0.8225 |
| 0.8711 | 23.08 | 600 | 0.6948 | 0.7591 |
| 0.6695 | 25.0 | 650 | 0.6683 | 0.7636 |
| 0.5606 | 26.92 | 700 | 0.6821 | 0.7435 |
| 0.503 | 28.85 | 750 | 0.7220 | 0.7516 |
| 0.4528 | 30.77 | 800 | 0.6638 | 0.7324 |
| 0.4219 | 32.69 | 850 | 0.7120 | 0.7435 |
| 0.4109 | 34.62 | 900 | 0.7122 | 0.7511 |
| 0.3887 | 36.54 | 950 | 0.7179 | 0.7199 |
| 0.3895 | 38.46 | 1000 | 0.7322 | 0.7525 |
| 0.391 | 40.38 | 1050 | 0.6850 | 0.7364 |
| 0.3537 | 42.31 | 1100 | 0.7571 | 0.7279 |
| 0.3267 | 44.23 | 1150 | 0.7575 | 0.7257 |
| 0.3195 | 46.15 | 1200 | 0.7580 | 0.6998 |
| 0.2891 | 48.08 | 1250 | 0.7452 | 0.7101 |
| 0.294 | 50.0 | 1300 | 0.7316 | 0.6945 |
| 0.2854 | 51.92 | 1350 | 0.7241 | 0.6757 |
| 0.2801 | 53.85 | 1400 | 0.7532 | 0.6887 |
| 0.2502 | 55.77 | 1450 | 0.7587 | 0.6811 |
| 0.2427 | 57.69 | 1500 | 0.7231 | 0.6851 |
| 0.2311 | 59.62 | 1550 | 0.7288 | 0.6632 |
| 0.2176 | 61.54 | 1600 | 0.7711 | 0.6664 |
| 0.2117 | 63.46 | 1650 | 0.7914 | 0.6940 |
| 0.2114 | 65.38 | 1700 | 0.8065 | 0.6918 |
| 0.1913 | 67.31 | 1750 | 0.8372 | 0.6945 |
| 0.1897 | 69.23 | 1800 | 0.8051 | 0.6869 |
| 0.1865 | 71.15 | 1850 | 0.8076 | 0.6740 |
| 0.1844 | 73.08 | 1900 | 0.7935 | 0.6708 |
| 0.1757 | 75.0 | 1950 | 0.8015 | 0.6610 |
| 0.1636 | 76.92 | 2000 | 0.7614 | 0.6414 |
| 0.1637 | 78.85 | 2050 | 0.8123 | 0.6592 |
| 0.1599 | 80.77 | 2100 | 0.7907 | 0.6566 |
| 0.1498 | 82.69 | 2150 | 0.8641 | 0.6757 |
| 0.1545 | 84.62 | 2200 | 0.7438 | 0.6682 |
| 0.1433 | 86.54 | 2250 | 0.8014 | 0.6624 |
| 0.1427 | 88.46 | 2300 | 0.7758 | 0.6646 |
| 0.1423 | 90.38 | 2350 | 0.7741 | 0.6423 |
| 0.1298 | 92.31 | 2400 | 0.7938 | 0.6414 |
| 0.1111 | 94.23 | 2450 | 0.7976 | 0.6467 |
| 0.1243 | 96.15 | 2500 | 0.7916 | 0.6481 |
| 0.1215 | 98.08 | 2550 | 0.7594 | 0.6392 |
| 0.113 | 100.0 | 2600 | 0.8236 | 0.6392 |
| 0.1077 | 101.92 | 2650 | 0.7959 | 0.6347 |
| 0.0988 | 103.85 | 2700 | 0.8189 | 0.6392 |
| 0.0953 | 105.77 | 2750 | 0.8157 | 0.6414 |
| 0.0889 | 107.69 | 2800 | 0.7946 | 0.6369 |
| 0.0929 | 109.62 | 2850 | 0.8255 | 0.6360 |
| 0.0822 | 111.54 | 2900 | 0.8320 | 0.6334 |
| 0.086 | 113.46 | 2950 | 0.8539 | 0.6490 |
| 0.0825 | 115.38 | 3000 | 0.8438 | 0.6418 |
| 0.0727 | 117.31 | 3050 | 0.8568 | 0.6481 |
| 0.0717 | 119.23 | 3100 | 0.8447 | 0.6512 |
| 0.0815 | 121.15 | 3150 | 0.8470 | 0.6445 |
| 0.0689 | 123.08 | 3200 | 0.8264 | 0.6249 |
| 0.0726 | 125.0 | 3250 | 0.7981 | 0.6169 |
| 0.0648 | 126.92 | 3300 | 0.8237 | 0.6200 |
| 0.0632 | 128.85 | 3350 | 0.8416 | 0.6249 |
| 0.06 | 130.77 | 3400 | 0.8276 | 0.6173 |
| 0.0616 | 132.69 | 3450 | 0.8429 | 0.6209 |
| 0.0614 | 134.62 | 3500 | 0.8485 | 0.6271 |
| 0.0539 | 136.54 | 3550 | 0.8598 | 0.6218 |
| 0.0555 | 138.46 | 3600 | 0.8557 | 0.6169 |
| 0.0604 | 140.38 | 3650 | 0.8436 | 0.6186 |
| 0.0556 | 142.31 | 3700 | 0.8428 | 0.6178 |
| 0.051 | 144.23 | 3750 | 0.8440 | 0.6142 |
| 0.0526 | 146.15 | 3800 | 0.8566 | 0.6142 |
| 0.052 | 148.08 | 3850 | 0.8544 | 0.6178 |
| 0.0519 | 150.0 | 3900 | 0.8537 | 0.6160 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5
|
DrishtiSharma
|
wav2vec2
| 13 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['or']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'or', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,621 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-or-d5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - OR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9571
- Wer: 0.5450
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset mozilla-foundation/common_voice_8_0 --config or --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset speech-recognition-community-v2/dev_data --config or --split validation --chunk_length_s 10 --stride_length_s 1
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.2958 | 12.5 | 300 | 4.9014 | 1.0 |
| 3.4065 | 25.0 | 600 | 3.5150 | 1.0 |
| 1.5402 | 37.5 | 900 | 0.8356 | 0.7249 |
| 0.6049 | 50.0 | 1200 | 0.7754 | 0.6349 |
| 0.4074 | 62.5 | 1500 | 0.7994 | 0.6217 |
| 0.3097 | 75.0 | 1800 | 0.8815 | 0.5985 |
| 0.2593 | 87.5 | 2100 | 0.8532 | 0.5754 |
| 0.2097 | 100.0 | 2400 | 0.9077 | 0.5648 |
| 0.1784 | 112.5 | 2700 | 0.9047 | 0.5668 |
| 0.1567 | 125.0 | 3000 | 0.9019 | 0.5728 |
| 0.1315 | 137.5 | 3300 | 0.9295 | 0.5827 |
| 0.1125 | 150.0 | 3600 | 0.9256 | 0.5681 |
| 0.1035 | 162.5 | 3900 | 0.9148 | 0.5496 |
| 0.0901 | 175.0 | 4200 | 0.9480 | 0.5483 |
| 0.0817 | 187.5 | 4500 | 0.9799 | 0.5516 |
| 0.079 | 200.0 | 4800 | 0.9571 | 0.5450 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['or']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'or', 'robust-speech-event']
| true | true | true | 4,444 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-or-dx12
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4638
- Wer: 0.5602
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12 --dataset mozilla-foundation/common_voice_8_0 --config or --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Oriya language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 13.5059 | 4.17 | 100 | 10.3789 | 1.0 |
| 4.5964 | 8.33 | 200 | 4.3294 | 1.0 |
| 3.4448 | 12.5 | 300 | 3.7903 | 1.0 |
| 3.3683 | 16.67 | 400 | 3.5289 | 1.0 |
| 2.042 | 20.83 | 500 | 1.1531 | 0.7857 |
| 0.5721 | 25.0 | 600 | 1.0267 | 0.7646 |
| 0.3274 | 29.17 | 700 | 1.0773 | 0.6938 |
| 0.2466 | 33.33 | 800 | 1.0323 | 0.6647 |
| 0.2047 | 37.5 | 900 | 1.1255 | 0.6733 |
| 0.1847 | 41.67 | 1000 | 1.1194 | 0.6515 |
| 0.1453 | 45.83 | 1100 | 1.1215 | 0.6601 |
| 0.1367 | 50.0 | 1200 | 1.1898 | 0.6627 |
| 0.1334 | 54.17 | 1300 | 1.3082 | 0.6687 |
| 0.1041 | 58.33 | 1400 | 1.2514 | 0.6177 |
| 0.1024 | 62.5 | 1500 | 1.2055 | 0.6528 |
| 0.0919 | 66.67 | 1600 | 1.4125 | 0.6369 |
| 0.074 | 70.83 | 1700 | 1.4006 | 0.6634 |
| 0.0681 | 75.0 | 1800 | 1.3943 | 0.6131 |
| 0.0709 | 79.17 | 1900 | 1.3545 | 0.6296 |
| 0.064 | 83.33 | 2000 | 1.2437 | 0.6237 |
| 0.0552 | 87.5 | 2100 | 1.3762 | 0.6190 |
| 0.056 | 91.67 | 2200 | 1.3763 | 0.6323 |
| 0.0514 | 95.83 | 2300 | 1.2897 | 0.6164 |
| 0.0409 | 100.0 | 2400 | 1.4257 | 0.6104 |
| 0.0379 | 104.17 | 2500 | 1.4219 | 0.5853 |
| 0.0367 | 108.33 | 2600 | 1.4361 | 0.6032 |
| 0.0412 | 112.5 | 2700 | 1.4713 | 0.6098 |
| 0.0353 | 116.67 | 2800 | 1.4132 | 0.6369 |
| 0.0336 | 120.83 | 2900 | 1.5210 | 0.6098 |
| 0.0302 | 125.0 | 3000 | 1.4686 | 0.5939 |
| 0.0398 | 129.17 | 3100 | 1.5456 | 0.6204 |
| 0.0291 | 133.33 | 3200 | 1.4111 | 0.5827 |
| 0.0247 | 137.5 | 3300 | 1.3866 | 0.6151 |
| 0.0196 | 141.67 | 3400 | 1.4513 | 0.5880 |
| 0.0218 | 145.83 | 3500 | 1.5100 | 0.5899 |
| 0.0196 | 150.0 | 3600 | 1.4936 | 0.5999 |
| 0.0164 | 154.17 | 3700 | 1.5012 | 0.5701 |
| 0.0168 | 158.33 | 3800 | 1.5601 | 0.5919 |
| 0.0151 | 162.5 | 3900 | 1.4891 | 0.5761 |
| 0.0137 | 166.67 | 4000 | 1.4839 | 0.5800 |
| 0.0143 | 170.83 | 4100 | 1.4826 | 0.5754 |
| 0.0114 | 175.0 | 4200 | 1.4950 | 0.5708 |
| 0.0092 | 179.17 | 4300 | 1.5008 | 0.5694 |
| 0.0104 | 183.33 | 4400 | 1.4774 | 0.5728 |
| 0.0096 | 187.5 | 4500 | 1.4948 | 0.5767 |
| 0.0105 | 191.67 | 4600 | 1.4557 | 0.5694 |
| 0.009 | 195.83 | 4700 | 1.4615 | 0.5628 |
| 0.0081 | 200.0 | 4800 | 1.4638 | 0.5602 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1
|
DrishtiSharma
|
wav2vec2
| 18 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pa-IN']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'pa-IN', 'robust-speech-event', 'hf-asr-leaderboard']
| true | true | true | 2,071 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pa-IN-dx1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0855
- Wer: 0.4755
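The checkpoint can also be used outside the pipeline with `Wav2Vec2Processor` and `Wav2Vec2ForCTC`. The sketch below uses a silent placeholder waveform; in practice `speech` should be a real 16 kHz mono recording.

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder: one second of silence; replace with a real 16 kHz waveform.
speech = np.zeros(16_000, dtype=np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```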
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1 --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
The Punjabi language isn't available in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4607 | 9.26 | 500 | 2.7746 | 1.0416 |
| 0.3442 | 18.52 | 1000 | 0.9114 | 0.5911 |
| 0.2213 | 27.78 | 1500 | 0.9687 | 0.5751 |
| 0.1242 | 37.04 | 2000 | 1.0204 | 0.5461 |
| 0.0998 | 46.3 | 2500 | 1.0250 | 0.5233 |
| 0.0727 | 55.56 | 3000 | 1.1072 | 0.5382 |
| 0.0605 | 64.81 | 3500 | 1.0588 | 0.5073 |
| 0.0458 | 74.07 | 4000 | 1.0818 | 0.5069 |
| 0.0338 | 83.33 | 4500 | 1.0948 | 0.5108 |
| 0.0223 | 92.59 | 5000 | 1.0986 | 0.4775 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sat']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'sat', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 1,923 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-sat-a3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SAT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8961
- Wer: 0.3976
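The WER figure above is a word-level edit distance between reference transcripts and model predictions. The snippet below is a rough illustration with the `jiwer` package (not the `eval.py` script itself), using placeholder strings.

```python
import jiwer

# Placeholder transcripts; in practice these come from the Common Voice test
# split and from the model's decoded output for the same utterances.
references = ["example reference transcript one", "another reference"]
predictions = ["example reference transcript one", "another prediction"]

print(f"WER: {jiwer.wer(references, predictions):.4f}")
```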
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 --dataset mozilla-foundation/common_voice_8_0 --config sat --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Note: The Santali (Ol Chiki) language isn't available in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 200
- mixed_precision_training: Native AMP
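The `gradient_accumulation_steps: 2` setting is what turns the per-device batch of 16 into the effective batch of 32. Below is a generic sketch of that mechanism with a placeholder model and data, not the actual training loop.

```python
import torch

model = torch.nn.Linear(10, 2)                        # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.9, 0.999), eps=1e-8)
accum_steps = 2

optimizer.zero_grad()
for step in range(8):
    x, y = torch.randn(16, 10), torch.randn(16, 2)    # micro-batch of 16
    loss = torch.nn.functional.mse_loss(model(x), y) / accum_steps
    loss.backward()                                    # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()                               # one update per 2 micro-batches (32 samples)
        optimizer.zero_grad()
```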
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.1266 | 33.29 | 100 | 2.8577 | 1.0 |
| 2.1549 | 66.57 | 200 | 1.0799 | 0.5542 |
| 0.5628 | 99.86 | 300 | 0.7973 | 0.4016 |
| 0.0779 | 133.29 | 400 | 0.8424 | 0.4177 |
| 0.0404 | 166.57 | 500 | 0.9048 | 0.4137 |
| 0.0212 | 199.86 | 600 | 0.8961 | 0.3976 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final
|
DrishtiSharma
|
wav2vec2
| 12 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sat']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'sat', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,133 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-sat-final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SAT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8012
- Wer: 0.3815
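The evaluation split referenced below can be loaded with the `datasets` library. The sketch assumes access to the gated Common Voice 8.0 dataset and an authenticated Hugging Face login.

```python
from datasets import Audio, load_dataset

# Load the Santali (sat) test split; the dataset is gated, so a token is required.
cv_sat = load_dataset(
    "mozilla-foundation/common_voice_8_0", "sat", split="test", use_auth_token=True
)

# wav2vec2 expects 16 kHz input, so resample the audio column accordingly.
cv_sat = cv_sat.cast_column("audio", Audio(sampling_rate=16_000))
print(cv_sat[0]["sentence"])
```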
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset mozilla-foundation/common_voice_8_0 --config sat --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset speech-recognition-community-v2/dev_data --config sat --split validation --chunk_length_s 10 --stride_length_s 1
**Note: The Santali (Ol Chiki) language isn't available in speech-recognition-community-v2/dev_data.**
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 170
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 10.6317 | 33.29 | 100 | 2.8629 | 1.0 |
| 2.047 | 66.57 | 200 | 0.9516 | 0.5703 |
| 0.4475 | 99.86 | 300 | 0.8539 | 0.3896 |
| 0.0716 | 133.29 | 400 | 0.8277 | 0.3454 |
| 0.047 | 166.57 | 500 | 0.7597 | 0.3655 |
| 0.0249 | 199.86 | 600 | 0.8012 | 0.3815 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1
|
DrishtiSharma
|
wav2vec2
| 19 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sl']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'sl']
| true | true | true | 2,554 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-sl-with-LM-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Wer: 0.2279
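Judging by the model name, this repository is intended to ship an n-gram language model for boosted decoding. The sketch below assumes a `Wav2Vec2ProcessorWithLM` is bundled and that `pyctcdecode` and `kenlm` are installed; the audio file name is a placeholder, and the chunking settings mirror the dev_data evaluation command below.

```python
from transformers import pipeline

# Long recordings are transcribed in overlapping chunks.
asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1",
    chunk_length_s=10,
    stride_length_s=1,
)

print(asr("slovenian_sample.wav")["text"])
```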
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3881 | 6.1 | 500 | 2.9710 | 1.0 |
| 2.6401 | 12.2 | 1000 | 1.7677 | 0.9734 |
| 1.5152 | 18.29 | 1500 | 0.5564 | 0.6011 |
| 1.2191 | 24.39 | 2000 | 0.4319 | 0.4390 |
| 1.0237 | 30.49 | 2500 | 0.3141 | 0.3175 |
| 0.8892 | 36.59 | 3000 | 0.2748 | 0.2689 |
| 0.8296 | 42.68 | 3500 | 0.2680 | 0.2534 |
| 0.7602 | 48.78 | 4000 | 0.2820 | 0.2506 |
| 0.7186 | 54.88 | 4500 | 0.2672 | 0.2398 |
| 0.6887 | 60.98 | 5000 | 0.2729 | 0.2402 |
| 0.6507 | 67.07 | 5500 | 0.2767 | 0.2361 |
| 0.6226 | 73.17 | 6000 | 0.2817 | 0.2332 |
| 0.6024 | 79.27 | 6500 | 0.2679 | 0.2279 |
| 0.5787 | 85.37 | 7000 | 0.2837 | 0.2316 |
| 0.5744 | 91.46 | 7500 | 0.2838 | 0.2284 |
| 0.5556 | 97.56 | 8000 | 0.2763 | 0.2281 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2
|
DrishtiSharma
|
wav2vec2
| 19 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sl']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'sl']
| true | true | true | 2,552 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-sl-with-LM-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2855
- Wer: 0.2401
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
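The linear schedule with 1000 warmup steps can be reproduced with transformers' scheduler helper. The sketch below uses a stand-in parameter, and the ~8000 total optimizer steps are taken from the training-results table that follows; it is illustrative, not the actual run.

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Stand-in parameter/optimizer; the real run used the wav2vec2 model parameters.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.AdamW([param], lr=7e-5, betas=(0.9, 0.999), eps=1e-8)

# Linear warmup to the peak learning rate over 1000 steps, then linear decay to 0.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=1000, num_training_steps=8000
)

for step in range(8000):
    optimizer.step()
    scheduler.step()
    if step + 1 in (1, 500, 1000, 4000, 8000):
        print(step + 1, scheduler.get_last_lr()[0])
```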
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.9294 | 6.1 | 500 | 2.9712 | 1.0 |
| 2.8305 | 12.2 | 1000 | 1.7073 | 0.9479 |
| 1.4795 | 18.29 | 1500 | 0.5756 | 0.6397 |
| 1.3433 | 24.39 | 2000 | 0.4968 | 0.5424 |
| 1.1766 | 30.49 | 2500 | 0.4185 | 0.4743 |
| 1.0017 | 36.59 | 3000 | 0.3303 | 0.3578 |
| 0.9358 | 42.68 | 3500 | 0.3003 | 0.3051 |
| 0.8358 | 48.78 | 4000 | 0.3045 | 0.2884 |
| 0.7647 | 54.88 | 4500 | 0.2866 | 0.2677 |
| 0.7482 | 60.98 | 5000 | 0.2829 | 0.2585 |
| 0.6943 | 67.07 | 5500 | 0.2782 | 0.2478 |
| 0.6586 | 73.17 | 6000 | 0.2911 | 0.2537 |
| 0.6425 | 79.27 | 6500 | 0.2817 | 0.2462 |
| 0.6067 | 85.37 | 7000 | 0.2910 | 0.2436 |
| 0.5974 | 91.46 | 7500 | 0.2875 | 0.2430 |
| 0.5812 | 97.56 | 8000 | 0.2852 | 0.2396 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|