repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
AdapterHub/roberta-base-pf-rte | AdapterHub | roberta | 6 | 7 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:nli/rte', 'adapter-transformers'] | false | true | true | 2,106 |
# Adapter `AdapterHub/roberta-base-pf-rte` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-rte", source="hf")
model.active_adapters = adapter_name
```
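Once the adapter is active, inference works like a regular sequence-classification forward pass. Below is a minimal sketch; the premise/hypothesis pair and the label handling are illustrative assumptions, not part of the original adapter card:
```python
from transformers import AutoModelWithHeads, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-rte", source="hf")
model.active_adapters = adapter_name

# RTE is a sentence-pair task: encode premise and hypothesis together
premise = "A man is playing a guitar on stage."
hypothesis = "A man is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index into the head's label set (entailment vs. not entailment)
```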
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-scicite | AdapterHub | roberta | 6 | 2 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['scicite'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapter-transformers'] | false | true | true | 2,116 |
# Adapter `AdapterHub/roberta-base-pf-scicite` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-scicite", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-scitail | AdapterHub | roberta | 6 | 2 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['scitail'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:nli/scitail', 'adapter-transformers'] | false | true | true | 2,122 |
# Adapter `AdapterHub/roberta-base-pf-scitail` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-scitail", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-sick | AdapterHub | roberta | 6 | 8 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['sick'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapter-transformers', 'adapterhub:nli/sick', 'text-classification'] | false | true | true | 2,110 |
# Adapter `AdapterHub/roberta-base-pf-sick` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-sick", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-snli | AdapterHub | roberta | 6 | 9 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['snli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapter-transformers'] | false | true | true | 2,104 |
# Adapter `AdapterHub/roberta-base-pf-snli` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-snli", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-social_i_qa | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['social_i_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapter-transformers'] | false | true | true | 2,069 |
# Adapter `AdapterHub/roberta-base-pf-social_i_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [social_i_qa](https://huggingface.co/datasets/social_i_qa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-social_i_qa", source="hf")
model.active_adapters = adapter_name
```
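The adapter ships a multiple-choice head, so each candidate answer is scored against the same context. The snippet below is a hedged sketch: the context, question, and answer choices are made up for illustration, and it assumes the flex-head model accepts the usual `(batch, num_choices, seq_len)` input layout used for multiple-choice models:
```python
from transformers import AutoModelWithHeads, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-social_i_qa", source="hf")
model.active_adapters = adapter_name

context = "Taylor helped a friend move into a new apartment over the weekend."
question = "How would the friend feel afterwards?"
choices = ["grateful", "angry", "indifferent"]  # Social IQa provides three answer options

# One (context + question, choice) pair per candidate, stacked into a single example
encoded = tokenizer([f"{context} {question}"] * len(choices), choices,
                    padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}  # shape: (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits
print(choices[logits.argmax(dim=-1).item()])
```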
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
```
|
AdapterHub/roberta-base-pf-squad | AdapterHub | roberta | 6 | 23 | adapter-transformers | 1 | question-answering | false | false | false | null | ['en'] | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapterhub:qa/squad1', 'adapter-transformers'] | false | true | true | 2,118 |
# Adapter `AdapterHub/roberta-base-pf-squad` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-squad", source="hf")
model.active_adapters = adapter_name
```
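After activating the adapter, extractive QA follows the usual start/end-logits decoding. A minimal sketch; the question and context below are illustrative assumptions:
```python
from transformers import AutoModelWithHeads, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-squad", source="hf")
model.active_adapters = adapter_name

question = "Who wrote the novel?"
context = "The novel was written by Jane Austen in 1813."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely answer span from the start/end logits
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```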
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-squad_v2 | AdapterHub | roberta | 6 | 27 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['squad_v2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapterhub:qa/squad2', 'adapter-transformers'] | false | true | true | 2,124 |
# Adapter `AdapterHub/roberta-base-pf-squad_v2` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-squad_v2", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-sst2 | AdapterHub | roberta | 6 | 8 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:sentiment/sst-2', 'adapter-transformers'] | false | true | true | 2,124 |
# Adapter `AdapterHub/roberta-base-pf-sst2` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-sst2", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-stsb | AdapterHub | roberta | 6 | 5 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:sts/sts-b', 'adapter-transformers'] | false | true | true | 2,112 |
# Adapter `AdapterHub/roberta-base-pf-stsb` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-stsb", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-swag | AdapterHub | roberta | 6 | 0 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['swag'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapter-transformers'] | false | true | true | 2,041 |
# Adapter `AdapterHub/roberta-base-pf-swag` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [swag](https://huggingface.co/datasets/swag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-swag", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
```
|
AdapterHub/roberta-base-pf-trec | AdapterHub | roberta | 6 | 7 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['trec'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapter-transformers'] | false | true | true | 2,104 |
# Adapter `AdapterHub/roberta-base-pf-trec` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [trec](https://huggingface.co/datasets/trec/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-trec", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-ud_deprel | AdapterHub | roberta | 6 | 2 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'roberta', 'adapterhub:deprel/ud_ewt', 'adapter-transformers'] | false | true | true | 2,123 |
# Adapter `AdapterHub/roberta-base-pf-ud_deprel` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [deprel/ud_ewt](https://adapterhub.ml/explore/deprel/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-ud_deprel", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-ud_en_ewt | AdapterHub | roberta | 6 | 2 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapterhub:dp/ud_ewt', 'adapter-transformers'] | false | true | true | 1,455 |
# Adapter `AdapterHub/roberta-base-pf-ud_en_ewt` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [dp/ud_ewt](https://adapterhub.ml/explore/dp/ud_ewt/) dataset and includes a prediction head for dependency parsing.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-ud_en_ewt", source="hf", set_active=True)
```
## Architecture & Training
This adapter was trained using the adapter-transformers example script for dependency parsing.
See https://github.com/Adapter-Hub/adapter-transformers/tree/master/examples/dependency-parsing.
## Evaluation results
Scores achieved by dependency parsing adapters on the test set of UD English EWT after training:
| Model | UAS | LAS |
| --- | --- | --- |
| `bert-base-uncased` | 91.74 | 89.15 |
| `roberta-base` | 91.43 | 88.43 |
## Citation
<!-- Add some description here -->
|
AdapterHub/roberta-base-pf-ud_pos | AdapterHub | roberta | 6 | 13 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'roberta', 'adapterhub:pos/ud_ewt', 'adapter-transformers'] | false | true | true | 2,111 |
# Adapter `AdapterHub/roberta-base-pf-ud_pos` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/ud_ewt](https://adapterhub.ml/explore/pos/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-ud_pos", source="hf")
model.active_adapters = adapter_name
```
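With the adapter active, the tagging head emits one label per (sub)token. A minimal sketch with an illustrative sentence; mapping label ids back to UPOS tag names is left to the head's label configuration:
```python
from transformers import AutoModelWithHeads, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-ud_pos", source="hf")
model.active_adapters = adapter_name

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # (1, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, pred_ids):
    print(token, label_id)                   # label_id indexes the head's UPOS tag set
```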
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-wic | AdapterHub | roberta | 6 | 2 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:wordsence/wic', 'adapter-transformers'] | false | true | true | 2,118 |
# Adapter `AdapterHub/roberta-base-pf-wic` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [wordsence/wic](https://adapterhub.ml/explore/wordsence/wic/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-wic", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-wikihop | AdapterHub | roberta | 6 | 5 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapterhub:qa/wikihop', 'adapter-transformers'] | false | true | true | 2,124 |
# Adapter `AdapterHub/roberta-base-pf-wikihop` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/wikihop](https://adapterhub.ml/explore/qa/wikihop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-wikihop", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-winogrande | AdapterHub | roberta | 6 | 0 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['winogrande'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapterhub:comsense/winogrande', 'adapter-transformers'] | false | true | true | 2,081 |
# Adapter `AdapterHub/roberta-base-pf-winogrande` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-winogrande", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
```
|
AdapterHub/roberta-base-pf-wnut_17 | AdapterHub | roberta | 6 | 4 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['wnut_17'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'roberta', 'adapter-transformers'] | false | true | true | 2,109 |
# Adapter `AdapterHub/roberta-base-pf-wnut_17` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-wnut_17", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-yelp_polarity | AdapterHub | roberta | 6 | 5 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['yelp_polarity'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapter-transformers'] | false | true | true | 2,140 |
# Adapter `AdapterHub/roberta-base-pf-yelp_polarity` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-yelp_polarity", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
Adi2K/Priv-Consent | Adi2K | bert | 9 | 2 | transformers | 0 | text-classification | true | false | false | null | ['eng'] | ['Adi2K/autonlp-data-Priv-Consent'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 936 |
# Model
- Problem type: Binary Classification
- Model ID: 12592372
## Validation Metrics
- Loss: 0.23033875226974487
- Accuracy: 0.9138655462184874
- Precision: 0.9087136929460581
- Recall: 0.9201680672268907
- AUC: 0.9690346726926065
- F1: 0.9144050104384133
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Adi2K/autonlp-Priv-Consent-12592372
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
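The forward pass above returns raw logits; the short follow-up below (a sketch, not part of the original card) turns them into a class label and probability:
```python
import torch

probs = torch.softmax(outputs.logits, dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[pred_id]))
```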
|
Adil617/wav2vec2-base-timit-demo-colab | Adil617 | wav2vec2 | 14 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,237 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9314
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
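For reference, a hedged sketch of how the hyperparameters above could be expressed as 🤗 `TrainingArguments`; the output directory is an assumption, and the listed Adam betas/epsilon are the library defaults:
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; not the original training script.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```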
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 8.686 | 0.16 | 20 | 13.6565 | 1.0 |
| 8.0711 | 0.32 | 40 | 12.5379 | 1.0 |
| 6.9967 | 0.48 | 60 | 9.7215 | 1.0 |
| 5.2368 | 0.64 | 80 | 5.8459 | 1.0 |
| 3.4499 | 0.8 | 100 | 3.3413 | 1.0 |
| 3.1261 | 0.96 | 120 | 3.2858 | 1.0 |
| 3.0654 | 1.12 | 140 | 3.1945 | 1.0 |
| 3.0421 | 1.28 | 160 | 3.1296 | 1.0 |
| 3.0035 | 1.44 | 180 | 3.1172 | 1.0 |
| 3.0067 | 1.6 | 200 | 3.1217 | 1.0 |
| 2.9867 | 1.76 | 220 | 3.0715 | 1.0 |
| 2.9653 | 1.92 | 240 | 3.0747 | 1.0 |
| 2.9629 | 2.08 | 260 | 2.9984 | 1.0 |
| 2.9462 | 2.24 | 280 | 2.9991 | 1.0 |
| 2.9391 | 2.4 | 300 | 3.0391 | 1.0 |
| 2.934 | 2.56 | 320 | 2.9682 | 1.0 |
| 2.9193 | 2.72 | 340 | 2.9701 | 1.0 |
| 2.8985 | 2.88 | 360 | 2.9314 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Aero/Tsubomi-Haruno | Aero | gpt2 | 9 | 3 | transformers | 0 | conversational | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['conversational'] | false | true | true | 1,252 |
# DialoGPT Trained on the Speech of a Game Character
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print the last output tokens from the bot
    print("Tsubomi: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
Aftabhussain/Tomato_Leaf_Classifier | Aftabhussain | vit | 8 | 6 | transformers | 0 | image-classification | true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'pytorch', 'huggingpics'] | false | true | true | 468 |
# Tomato_Leaf_Classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
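A minimal inference sketch with the `transformers` image-classification pipeline; the image filename is an illustrative assumption:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Aftabhussain/Tomato_Leaf_Classifier")
print(classifier("tomato_leaf.jpg"))  # e.g. [{'label': 'Bacterial_spot', 'score': ...}, ...]
```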
## Example Images
#### Bacterial_spot

#### Healthy

|
Ahmad/parsT5-base | Ahmad | t5 | 7 | 180 | transformers | 3 | text2text-generation | true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | false | true | 967 |
A monolingual T5 model for Persian, trained on the OSCAR 21.09 (https://oscar-corpus.com/) corpus with a self-supervised objective. A 35 GB deduplicated version of the Persian data was used for pre-training.
It is similar to the English T5 model, but for Persian. You may need to fine-tune it on your specific task.
Example code:
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
import torch

model_name = "Ahmad/parsT5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

input_ids = tokenizer.encode('دانش آموزان به <extra_id_0> میروند و <extra_id_1> میخوانند.', return_tensors='pt')
with torch.no_grad():
    hypotheses = model.generate(input_ids)

for h in hypotheses:
    print(tokenizer.decode(h))
```
Steps: 725000
Accuracy: 0.66
Training More?
========
To train the model further please refer to its github repository at:
https://github.com/puraminy/parsT5
|
Ahmad/parsT5 | Ahmad | t5 | 10 | 2 | transformers | 1 | text2text-generation | false | false | true | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | false | true | 256 |
A checkpoint for training a Persian T5 model. This repository can be cloned and pre-training resumed. The model uses Flax and is intended for training.
For more information and to obtain the training code, please refer to:
https://github.com/puraminy/parsT5
|
AhmedBou/TuniBert | AhmedBou | bert | 7 | 9 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['ar'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentiment analysis', 'classification', 'arabic dialect', 'tunisian dialect'] | false | true | true | 511 |
This is a fine-tuned BERT model on Tunisian dialect text (dataset used: AhmedBou/Tunisian-Dialect-Corpus), ready for sentiment analysis and classification tasks.
LABEL_1: Positive
LABEL_2: Negative
LABEL_0: Neutral
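A minimal usage sketch with the `transformers` text-classification pipeline, applying the label mapping above (the example sentence is an illustrative assumption):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AhmedBou/TuniBert")
result = classifier("برشا باهي")  # Tunisian dialect, roughly "very good"
print(result)  # e.g. [{'label': 'LABEL_1', 'score': ...}] -> Positive per the mapping above
```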
This work is an integral component of my Master's degree thesis and represents the culmination of extensive research and labor.
If you wish to use the Tunisian-Dialect-Corpus or the TuniBert model, kindly refer to the directories provided: huggingface.co/AhmedBou and github.com/BoulahiaAhmed
|
AhmedSSoliman/MarianCG-CoNaLa | AhmedSSoliman | marian | 15 | 13 | transformers | 0 | text2text-generation | true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,821 |
[Papers with Code: Code Generation on CoNaLa](https://paperswithcode.com/sota/code-generation-on-conala?p=mariancg-a-code-generation-transformer-model)
# MarianCG: a code generation transformer model inspired by machine translation
This model addresses the code generation problem with a transformer that produces highly accurate results. MarianCG is a code generation model that generates code from natural language, and this work demonstrates the impact of using the Marian machine translation model for the code generation problem. In our implementation, we show that a machine translation model can operate as a code generation model. Finally, we set a new state of the art on CoNaLa, reaching a BLEU score of 30.92 and an exact match accuracy of 6.2 on the code generation problem with the CoNaLa dataset.
The MarianCG model and its implementation, together with the training code and the generated output, are available at this repository:
https://github.com/AhmedSSoliman/MarianCG-NL-to-Code
The CoNaLa dataset for code generation is available at
https://huggingface.co/datasets/AhmedSSoliman/CoNaLa
The model is available on the Hugging Face Hub at https://huggingface.co/AhmedSSoliman/MarianCG-CoNaLa
```python
# Model and Tokenizer
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# model_name = "AhmedSSoliman/MarianCG-NL-to-Code"
model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa")
tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa")
# Input (Natural Language) and Output (Python Code)
NL_input = "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]`"
output = model.generate(**tokenizer(NL_input, padding="max_length", truncation=True, max_length=512, return_tensors="pt"))
output_code = tokenizer.decode(output[0], skip_special_tokens=True)
```
A demo of this model is available as a Gradio Space at: https://huggingface.co/spaces/AhmedSSoliman/MarianCG-CoNaLa
---
Tasks:
- Translation
- Code Generation
- Text2Text Generation
- Text Generation
---
# Citation
We now have a [paper](https://doi.org/10.1186/s44147-022-00159-4) for this work and you can cite:
```
@article{soliman2022mariancg,
title={MarianCG: a code generation transformer model inspired by machine translation},
author={Soliman, Ahmed S and Hadhoud, Mayada M and Shaheen, Samir I},
journal={Journal of Engineering and Applied Science},
volume={69},
number={1},
pages={1--23},
year={2022},
publisher={SpringerOpen},
url={https://doi.org/10.1186/s44147-022-00159-4}
}
```
|
AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt
|
AigizK
|
wav2vec2
| 16 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ba']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
| true | true | true | 1,748 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bashkir-cv7_opt
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.
It achieves the following results on the evaluation set:
- Training Loss: 0.268400
- Validation Loss: 0.088252
- WER without LM: 0.085588
- WER with LM: 0.04440795062008041
- CER with LM: 0.010491234992390509
## Model description
Trained with this [Jupyter notebook](https://drive.google.com/file/d/1KohDXZtKBWXVPZYlsLtqfxJGBzKmTtSh/view?usp=sharing)
## Intended uses & limitations
In order to reduce the number of characters, the following letters have been replaced or removed:
- 'я' -> 'йа'
- 'ю' -> 'йу'
- 'ё' -> 'йо'
- 'е' -> 'йэ' for first letter
- 'е' -> 'э' for other cases
- 'ъ' -> deleted
- 'ь' -> deleted
Therefore, in order to get the correct text, you need to do the reverse transformation and use the language model.
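As a rough illustration only (not part of the original card), the sketch below undoes the multi-character substitutions; word-initial 'йэ' is mapped back to 'е', while the deleted 'ъ'/'ь' and the remaining ambiguous 'э' cannot be restored without a language model.
```python
import re

def restore_bashkir(text: str) -> str:
    """Approximately invert the character substitutions listed above."""
    # Word-initial 'йэ' was produced from 'е'
    text = re.sub(r'\bйэ', 'е', text)
    # Multi-character sequences back to single letters
    for src, dst in (('йа', 'я'), ('йу', 'ю'), ('йо', 'ё')):
        text = text.replace(src, dst)
    # 'э' elsewhere and the deleted 'ъ'/'ь' are ambiguous and are left untouched
    return text

print(restore_bashkir("йэгет"))  # -> "егет"
```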
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu113
- Datasets 1.18.2
- Tokenizers 0.10.3
|
AimB/mT5-en-kr-natural
|
AimB
|
mt5
| 31 | 834 |
transformers
| 2 |
text2text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 271 |
You can use this model with the simpletransformers library.
```
!pip install simpletransformers
from simpletransformers.t5 import T5Model
model = T5Model("mt5", "AimB/mT5-en-kr-natural")
print(model.predict(["I feel good today"]))
print(model.predict(["우리집 고양이는 세상에서 제일 귀엽습니다"]))
```
|
Aimendo/autonlp-triage-35248482
|
Aimendo
|
bert
| 9 | 1 |
transformers
| 0 |
text-classification
| true | false | false | null |
['en']
|
['Aimendo/autonlp-data-triage']
|
7.989144645413398
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
autonlp
| false | true | true | 1,205 |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 35248482
- CO2 Emissions (in grams): 7.989144645413398
## Validation Metrics
- Loss: 0.13783401250839233
- Accuracy: 0.9728654124457308
- Macro F1: 0.949537871674076
- Micro F1: 0.9728654124457308
- Weighted F1: 0.9732422812610365
- Macro Precision: 0.9380372699332605
- Micro Precision: 0.9728654124457308
- Weighted Precision: 0.974548513256663
- Macro Recall: 0.9689346153591594
- Micro Recall: 0.9728654124457308
- Weighted Recall: 0.9728654124457308
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Aimendo/autonlp-triage-35248482
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
Ajay191191/autonlp-Test-530014983
|
Ajay191191
|
bert
| 9 | 2 |
transformers
| 0 |
text-classification
| true | false | false | null |
['en']
|
['Ajay191191/autonlp-data-Test']
|
55.10196329868386
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
autonlp
| false | true | true | 998 |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 530014983
- CO2 Emissions (in grams): 55.10196329868386
## Validation Metrics
- Loss: 0.23171618580818176
- Accuracy: 0.9298837645294338
- Precision: 0.9314414866901055
- Recall: 0.9279459594696022
- AUC: 0.979447403984557
- F1: 0.9296904373981703
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Ajay191191/autonlp-Test-530014983
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
Ajaykannan6/autonlp-manthan-16122692
|
Ajaykannan6
|
bart
| 10 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false | null |
['unk']
|
['Ajaykannan6/autonlp-data-manthan']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
autonlp
| false | true | true | 497 |
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 16122692
## Validation Metrics
- Loss: 1.1877621412277222
- Rouge1: 42.0713
- Rouge2: 23.3043
- RougeL: 37.3755
- RougeLsum: 37.8961
- Gen Len: 60.7117
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Ajaykannan6/autonlp-manthan-16122692
```
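A Python sketch analogous to the other AutoNLP cards (assumed usage, not part of the original card):
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("Ajaykannan6/autonlp-manthan-16122692", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ajaykannan6/autonlp-manthan-16122692", use_auth_token=True)

# Summarization model, so generate a summary and decode it
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
summary_ids = model.generate(**inputs)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```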
|
Akari/albert-base-v2-finetuned-squad
|
Akari
|
albert
| 51 | 11 |
transformers
| 1 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad_v2']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,254 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9492
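A minimal extractive question-answering sketch (assumed usage; not part of the original card):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Akari/albert-base-v2-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of albert-base-v2 on the squad_v2 dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```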
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8695 | 1.0 | 8248 | 0.8813 |
| 0.6333 | 2.0 | 16496 | 0.8042 |
| 0.4372 | 3.0 | 24744 | 0.9492 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.7.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Akash7897/bert-base-cased-wikitext2
|
Akash7897
|
bert
| 9 | 0 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,249 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8544
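A minimal masked-language-modelling sketch (assumed usage; not part of the original card):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Akash7897/bert-base-cased-wikitext2")
# Returns the top candidate tokens for the [MASK] position with their scores
print(fill("The capital of France is [MASK]."))
```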
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0915 | 1.0 | 2346 | 7.0517 |
| 6.905 | 2.0 | 4692 | 6.8735 |
| 6.8565 | 3.0 | 7038 | 6.8924 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Akash7897/distilbert-base-uncased-finetuned-cola
|
Akash7897
|
distilbert
| 18 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,572 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0789
- Matthews Correlation: 0.5222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1472 | 1.0 | 535 | 0.8407 | 0.4915 |
| 0.1365 | 2.0 | 1070 | 0.9236 | 0.4990 |
| 0.1194 | 3.0 | 1605 | 0.8753 | 0.4953 |
| 0.1313 | 4.0 | 2140 | 0.9684 | 0.5013 |
| 0.0895 | 5.0 | 2675 | 1.0789 | 0.5222 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Akash7897/distilbert-base-uncased-finetuned-sst2
|
Akash7897
|
distilbert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,228 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3010
- Accuracy: 0.9037
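A minimal inference sketch (assumed usage; the class order is taken to follow the GLUE SST-2 convention of negative/positive and should be checked against `model.config.id2label`):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Akash7897/distilbert-base-uncased-finetuned-sst2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I really enjoyed this movie.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs, model.config.id2label)
```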
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1793 | 1.0 | 4210 | 0.3010 | 0.9037 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Akash7897/gpt2-wikitext2
|
Akash7897
|
gpt2
| 14 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,216 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1079
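A minimal generation sketch (assumed usage; not part of the original card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Akash7897/gpt2-wikitext2")
# Sample a short continuation of the prompt
print(generator("The history of natural language processing", max_length=50)[0]["generated_text"])
```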
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.558 | 1.0 | 2249 | 6.4672 |
| 6.1918 | 2.0 | 4498 | 6.1970 |
| 6.0019 | 3.0 | 6747 | 6.1079 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Akashpb13/Central_kurdish_xlsr
|
Akashpb13
|
wav2vec2
| 12 | 6 |
transformers
| 2 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ckb']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 2 | 2 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ckb', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 3,678 |
# Akashpb13/Central_kurdish_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - CKB (Central Kurdish) dataset.
It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.348580
- Wer: 0.401147
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common voice Central Kurdish train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
## Training procedure
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000095637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 200
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|-------|---------------|-----------------|----------|
| 500 | 5.097800 | 2.190326 | 1.001207 |
| 1000 | 0.797500 | 0.331392 | 0.576819 |
| 1500 | 0.405100 | 0.262009 | 0.549049 |
| 2000 | 0.322100 | 0.248178 | 0.479626 |
| 2500 | 0.264600 | 0.258866 | 0.488983 |
| 3000 | 0.228300 | 0.261523 | 0.469665 |
| 3500 | 0.201000 | 0.270135 | 0.451856 |
| 4000 | 0.180900 | 0.279302 | 0.448536 |
| 4500 | 0.163800 | 0.280921 | 0.459704 |
| 5000 | 0.147300 | 0.319249 | 0.471778 |
| 5500 | 0.137600 | 0.289546 | 0.449140 |
| 6000 | 0.132000 | 0.311350 | 0.458195 |
| 6500 | 0.117100 | 0.316726 | 0.432840 |
| 7000 | 0.109200 | 0.302210 | 0.439481 |
| 7500 | 0.104900 | 0.325913 | 0.439481 |
| 8000 | 0.097500 | 0.329446 | 0.431935 |
| 8500 | 0.088600 | 0.345259 | 0.425898 |
| 9000 | 0.084900 | 0.342891 | 0.428313 |
| 9500 | 0.080900 | 0.353081 | 0.424389 |
| 10000 | 0.075600 | 0.347063 | 0.424992 |
| 10500 | 0.072800 | 0.330086 | 0.424691 |
| 11000 | 0.068100 | 0.350658 | 0.421974 |
| 11500 | 0.064700 | 0.342949 | 0.413522 |
| 12000 | 0.061500 | 0.341704 | 0.415334 |
| 12500 | 0.059500 | 0.346279 | 0.411410 |
| 13000 | 0.057400 | 0.349901 | 0.407184 |
| 13500 | 0.056400 | 0.347733 | 0.402656 |
| 14000 | 0.053300 | 0.344899 | 0.405976 |
| 14500 | 0.052900 | 0.346708 | 0.402656 |
| 15000 | 0.050600 | 0.344118 | 0.400845 |
| 15500 | 0.050200 | 0.348396 | 0.402958 |
| 16000 | 0.049800 | 0.348312 | 0.401751 |
| 16500 | 0.051900 | 0.348372 | 0.401147 |
| 17000 | 0.049800 | 0.348580 | 0.401147 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.1
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Central_kurdish_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ckb --split test
```
|
Akashpb13/Galician_xlsr
|
Akashpb13
|
wav2vec2
| 12 | 27 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['gl']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'gl', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,632 |
# Akashpb13/Galician_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GL (Galician) dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets):
- Loss: 0.137096
- Wer: 0.196230
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common voice Galician train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
## Training procedure
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 5.038100 | 3.035432 | 1.000000 |
| 1000 | 2.180000 | 0.406300 | 0.557964 |
| 1500 | 0.331700 | 0.153797 | 0.262394 |
| 2000 | 0.171600 | 0.145268 | 0.235627 |
| 2500 | 0.125900 | 0.136622 | 0.228087 |
| 3000 | 0.105400 | 0.131650 | 0.224128 |
| 3500 | 0.087600 | 0.141032 | 0.217531 |
| 4000 | 0.078300 | 0.143675 | 0.214515 |
| 4500 | 0.070000 | 0.144607 | 0.208106 |
| 5000 | 0.061500 | 0.135259 | 0.202828 |
| 5500 | 0.055600 | 0.130638 | 0.203959 |
| 6000 | 0.050500 | 0.137416 | 0.202451 |
| 6500 | 0.046600 | 0.140379 | 0.200000 |
| 7000 | 0.040800 | 0.140179 | 0.200377 |
| 7500 | 0.041000 | 0.138089 | 0.196795 |
| 8000 | 0.038400 | 0.136927 | 0.197172 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Galician_xlsr --dataset mozilla-foundation/common_voice_8_0 --config gl --split test
```
|
Akashpb13/Hausa_xlsr
|
Akashpb13
|
wav2vec2
| 12 | 8 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ha']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'ha', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
| true | true | true | 2,353 |
# Akashpb13/Hausa_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets):
- Loss: 0.275118
- Wer: 0.329955
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common voice Hausa train.tsv, dev.tsv, invalidated.tsv, reported.tsv and other.tsv
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
## Training procedure
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 5.175900 | 2.750914 | 1.000000 |
| 1000 | 1.028700 | 0.338649 | 0.497999 |
| 1500 | 0.332200 | 0.246896 | 0.402241 |
| 2000 | 0.227300 | 0.239640 | 0.395839 |
| 2500 | 0.175000 | 0.239577 | 0.373966 |
| 3000 | 0.140400 | 0.243272 | 0.356095 |
| 3500 | 0.119200 | 0.263761 | 0.365164 |
| 4000 | 0.099300 | 0.265954 | 0.353428 |
| 4500 | 0.084400 | 0.276367 | 0.349693 |
| 5000 | 0.073700 | 0.282631 | 0.343825 |
| 5500 | 0.068000 | 0.282344 | 0.341158 |
| 6000 | 0.064500 | 0.281591 | 0.342491 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Hausa_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
```
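2. For plain transcription of a local audio file, a hedged sketch (assuming a 16 kHz mono recording at a placeholder path `sample.wav` and that `ffmpeg` is available for decoding):
```python
from transformers import pipeline

# CTC model fine-tuned for Hausa; the ASR pipeline handles feature extraction and decoding
asr = pipeline("automatic-speech-recognition", model="Akashpb13/Hausa_xlsr")
print(asr("sample.wav"))  # {'text': '...'}
```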
|
Akashpb13/Kabyle_xlsr
|
Akashpb13
|
wav2vec2
| 12 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['kab']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'sw', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 6,020 |
# Akashpb13/Kabyle_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - KAB (Kabyle) dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev datasets):
- Loss: 0.159032
- Wer: 0.187934
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Kabyle train.tsv. Only 50,000 records were randomly sampled for training due to the huge size of the dataset.
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
## Training procedure
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 8
- seed: 13
- gradient_accumulation_steps: 4
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|-------|---------------|-----------------|----------|
| 500 | 7.199800 | 3.130564 | 1.000000 |
| 1000 | 1.570200 | 0.718097 | 0.734682 |
| 1500 | 0.850800 | 0.524227 | 0.640532 |
| 2000 | 0.712200 | 0.468694 | 0.603454 |
| 2500 | 0.651200 | 0.413833 | 0.573025 |
| 3000 | 0.603100 | 0.403680 | 0.552847 |
| 3500 | 0.553300 | 0.372638 | 0.541719 |
| 4000 | 0.537200 | 0.353759 | 0.531191 |
| 4500 | 0.506300 | 0.359109 | 0.519601 |
| 5000 | 0.479600 | 0.343937 | 0.511336 |
| 5500 | 0.479800 | 0.338214 | 0.503948 |
| 6000 | 0.449500 | 0.332600 | 0.495221 |
| 6500 | 0.439200 | 0.323905 | 0.492635 |
| 7000 | 0.434900 | 0.310417 | 0.484555 |
| 7500 | 0.403200 | 0.311247 | 0.483262 |
| 8000 | 0.401500 | 0.295637 | 0.476566 |
| 8500 | 0.397000 | 0.301321 | 0.471672 |
| 9000 | 0.371600 | 0.295639 | 0.468440 |
| 9500 | 0.370700 | 0.294039 | 0.468902 |
| 10000 | 0.364900 | 0.291195 | 0.468440 |
| 10500 | 0.348300 | 0.284898 | 0.461098 |
| 11000 | 0.350100 | 0.281764 | 0.459805 |
| 11500 | 0.336900 | 0.291022 | 0.461606 |
| 12000 | 0.330700 | 0.280467 | 0.455234 |
| 12500 | 0.322500 | 0.271714 | 0.452694 |
| 13000 | 0.307400 | 0.289519 | 0.455465 |
| 13500 | 0.309300 | 0.281922 | 0.451217 |
| 14000 | 0.304800 | 0.271514 | 0.452186 |
| 14500 | 0.288100 | 0.286801 | 0.446830 |
| 15000 | 0.293200 | 0.276309 | 0.445399 |
| 15500 | 0.289800 | 0.287188 | 0.446230 |
| 16000 | 0.274800 | 0.286406 | 0.441243 |
| 16500 | 0.271700 | 0.284754 | 0.441520 |
| 17000 | 0.262500 | 0.275431 | 0.442167 |
| 17500 | 0.255500 | 0.276575 | 0.439858 |
| 18000 | 0.260200 | 0.269911 | 0.435425 |
| 18500 | 0.250600 | 0.270519 | 0.434686 |
| 19000 | 0.243300 | 0.267655 | 0.437826 |
| 19500 | 0.240600 | 0.277109 | 0.431731 |
| 20000 | 0.237200 | 0.266622 | 0.433994 |
| 20500 | 0.231300 | 0.273015 | 0.428868 |
| 21000 | 0.227200 | 0.263024 | 0.430161 |
| 21500 | 0.220400 | 0.272880 | 0.429607 |
| 22000 | 0.218600 | 0.272340 | 0.426883 |
| 22500 | 0.213100 | 0.277066 | 0.428407 |
| 23000 | 0.205000 | 0.278404 | 0.424020 |
| 23500 | 0.200900 | 0.270877 | 0.418987 |
| 24000 | 0.199000 | 0.289120 | 0.425821 |
| 24500 | 0.196100 | 0.275831 | 0.424066 |
| 25000 | 0.191100 | 0.282822 | 0.421850 |
| 25500 | 0.190100 | 0.275820 | 0.418248 |
| 26000 | 0.178800 | 0.279208 | 0.419125 |
| 26500 | 0.183100 | 0.271464 | 0.419218 |
| 27000 | 0.177400 | 0.280869 | 0.419680 |
| 27500 | 0.171800 | 0.279593 | 0.414924 |
| 28000 | 0.172900 | 0.276949 | 0.417648 |
| 28500 | 0.164900 | 0.283491 | 0.417786 |
| 29000 | 0.164800 | 0.283122 | 0.416078 |
| 29500 | 0.165500 | 0.281969 | 0.415801 |
| 30000 | 0.163800 | 0.283319 | 0.412753 |
| 30500 | 0.153500 | 0.285702 | 0.414046 |
| 31000 | 0.156500 | 0.285041 | 0.412615 |
| 31500 | 0.150900 | 0.284336 | 0.413723 |
| 32000 | 0.151800 | 0.285922 | 0.412292 |
| 32500 | 0.149200 | 0.289461 | 0.412153 |
| 33000 | 0.145400 | 0.291322 | 0.409567 |
| 33500 | 0.145600 | 0.294361 | 0.409614 |
| 34000 | 0.144200 | 0.290686 | 0.409059 |
| 34500 | 0.143400 | 0.289474 | 0.409844 |
| 35000 | 0.143500 | 0.290340 | 0.408367 |
| 35500 | 0.143200 | 0.289581 | 0.407351 |
| 36000 | 0.138400 | 0.292782 | 0.408736 |
| 36500 | 0.137900 | 0.289108 | 0.408044 |
| 37000 | 0.138200 | 0.292127 | 0.407166 |
| 37500 | 0.134600 | 0.291797 | 0.408413 |
| 38000 | 0.139800 | 0.290056 | 0.408090 |
| 38500 | 0.136500 | 0.291198 | 0.408090 |
| 39000 | 0.137700 | 0.289696 | 0.408044 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Kabyle_xlsr --dataset mozilla-foundation/common_voice_8_0 --config kab --split test
```
|
Akashpb13/Swahili_xlsr
|
Akashpb13
|
wav2vec2
| 12 | 10 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sw']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'sw']
| true | true | true | 2,490 |
# Akashpb13/Swahili_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SW (Swahili) dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev datasets):
- Loss: 0.159032
- Wer: 0.187934
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Swahili train.tsv and dev.tsv
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
## Training procedure
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 80
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 4.810000 | 2.168847 | 0.995747 |
| 1000 | 0.564200 | 0.209411 | 0.303485 |
| 1500 | 0.217700 | 0.153959 | 0.239534 |
| 2000 | 0.150700 | 0.139901 | 0.216327 |
| 2500 | 0.119400 | 0.137543 | 0.208828 |
| 3000 | 0.099500 | 0.140921 | 0.203045 |
| 3500 | 0.087100 | 0.138835 | 0.199649 |
| 4000 | 0.074600 | 0.141297 | 0.195844 |
| 4500 | 0.066600 | 0.148560 | 0.194127 |
| 5000 | 0.060400 | 0.151214 | 0.194388 |
| 5500 | 0.054400 | 0.156072 | 0.192187 |
| 6000 | 0.051100 | 0.154726 | 0.190322 |
| 6500 | 0.048200 | 0.159847 | 0.189538 |
| 7000 | 0.046400 | 0.158727 | 0.188307 |
| 7500 | 0.046500 | 0.159032 | 0.187934 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Swahili_xlsr --dataset mozilla-foundation/common_voice_8_0 --config sw --split test
```
|
Akashpb13/xlsr_hungarian_new
|
Akashpb13
|
wav2vec2
| 12 | 4 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hu']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'hu', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
| true | true | true | 2,063 |
# Akashpb13/xlsr_hungarian_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.197464
- Wer: 0.330094
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common voice hungarian train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
## Training procedure
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000095637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 16
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 4.785300 | 0.952295 | 0.796236 |
| 1000 | 0.535800 | 0.217474 | 0.381613 |
| 1500 | 0.258400 | 0.205524 | 0.345056 |
| 2000 | 0.202800 | 0.198680 | 0.336264 |
| 2500 | 0.182700 | 0.197464 | 0.330094 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/xlsr_hungarian_new --dataset mozilla-foundation/common_voice_8_0 --config hu --split test
```
|
Akashpb13/xlsr_kurmanji_kurdish
|
Akashpb13
|
wav2vec2
| 12 | 3 |
transformers
| 4 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['kmr', 'ku']
|
['mozilla-foundation/common_voice_8_0']
| null | 3 | 2 | 1 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'kmr', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,118 |
# Akashpb13/xlsr_kurmanji_kurdish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - KMR (Kurmanji Kurdish) dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets):
- Loss: 0.292389
- Wer: 0.388585
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common voice Kurmanji Kurdish train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
## Training procedure
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 16
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 200
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 4.382500 | 3.183725 | 1.000000 |
| 400 | 2.870200 | 0.996664 | 0.781117 |
| 600 | 0.609900 | 0.333755 | 0.445052 |
| 800 | 0.326800 | 0.305729 | 0.403157 |
| 1000 | 0.255000 | 0.290734 | 0.391621 |
| 1200 | 0.226300 | 0.292389 | 0.388585 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.1
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/xlsr_kurmanji_kurdish --dataset mozilla-foundation/common_voice_8_0 --config kmr --split test
```
|
Akashpb13/xlsr_maltese_wav2vec2
|
Akashpb13
|
wav2vec2
| 9 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['mt']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 2,059 |
# Wav2Vec2-Large-XLSR-53-Maltese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Maltese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "Akashpb13/xlsr_maltese_wav2vec2"
device = "cuda"
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\)\\(\\*)]'
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "mt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Test Result**: 29.42 %
|
AkshatSurolia/BEiT-FaceMask-Finetuned
|
AkshatSurolia
|
beit
| 10 | 6 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['Face-Mask18K']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification']
| false | true | true | 2,495 |
# BEiT for Face Mask Detection
BEiT model pre-trained and fine-tuned on the self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
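A minimal classification sketch (assumed usage; it presumes the repository ships an image processor config, and the label names come from the model's `id2label` mapping; "person.jpg" is a placeholder path):
```python
from transformers import pipeline

# Binary face-mask classifier fine-tuned from BEiT
classifier = pipeline("image-classification", model="AkshatSurolia/BEiT-FaceMask-Finetuned")
print(classifier("person.jpg"))  # [{'label': ..., 'score': ...}, ...]
```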
## Training Metrics
epoch = 0.55
total_flos = 576468516GF
train_loss = 0.151
train_runtime = 0:58:16.56
train_samples_per_second = 16.505
train_steps_per_second = 1.032
---
## Evaluation Metrics
epoch = 0.55
eval_accuracy = 0.975
eval_loss = 0.0803
eval_runtime = 0:03:13.02
eval_samples_per_second = 18.629
eval_steps_per_second = 2.331
|
AkshatSurolia/ConvNeXt-FaceMask-Finetuned
|
AkshatSurolia
|
convnext
| 10 | 6 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['Face-Mask18K']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification']
| false | true | true | 840 |
# ConvNeXt for Face Mask Detection
ConvNeXt model pre-trained and fine-tuned on the self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.
## Training Metrics
epoch = 3.54
total_flos = 1195651761GF
train_loss = 0.0079
train_runtime = 1:08:20.25
train_samples_per_second = 14.075
train_steps_per_second = 0.22
---
## Evaluation Metrics
epoch = 3.54
eval_accuracy = 0.9961
eval_loss = 0.0151
eval_runtime = 0:01:23.47
eval_samples_per_second = 43.079
eval_steps_per_second = 5.391
|
AkshatSurolia/DeiT-FaceMask-Finetuned
|
AkshatSurolia
|
deit
| 11 | 6 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['Face-Mask18K']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification']
| false | true | true | 1,449 |
# Distilled Data-efficient Image Transformer for Face Mask Detection
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on the self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
## Training Metrics
epoch = 2.0
total_flos = 2078245655GF
train_loss = 0.0438
train_runtime = 1:37:16.87
train_samples_per_second = 9.887
train_steps_per_second = 0.309
---
## Evaluation Metrics
epoch = 2.0
eval_accuracy = 0.9922
eval_loss = 0.0271
eval_runtime = 0:03:17.36
eval_samples_per_second = 18.22
eval_steps_per_second = 2.28
|
AkshatSurolia/ICD-10-Code-Prediction
|
AkshatSurolia
|
bert
| 7 | 466 |
transformers
| 6 |
text-classification
| true | false | false |
apache-2.0
| null |
['Mimic III']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classification']
| false | true | true | 1,106 |
# Clinical BERT for ICD-10 Prediction
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.
---
## How to use the model
Load the model via the transformers library:
```python
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
model = BertForSequenceClassification.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
config = model.config
```
Run the model with clinical diagnosis text:
```python
text = "subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
Return the top-5 predicted ICD-10 codes:
```python
results = output.logits.detach().cpu().numpy()[0].argsort()[::-1][:5]
print([config.id2label[ids] for ids in results])
```
|
AkshatSurolia/ViT-FaceMask-Finetuned
|
AkshatSurolia
|
vit
| 10 | 5 |
transformers
| 1 |
image-classification
| true | false | false |
apache-2.0
| null |
['Face-Mask18K']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification']
| false | true | true | 2,423 |
# Vision Transformer (ViT) for Face Mask Detection
Vision Transformer (ViT) model pre-trained and fine-tuned on the self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Training Metrics
epoch = 0.89
total_flos = 923776502GF
train_loss = 0.057
train_runtime = 0:40:10.40
train_samples_per_second = 23.943
train_steps_per_second = 1.497
---
## Evaluation Metrics
epoch = 0.89
eval_accuracy = 0.9894
eval_loss = 0.0395
eval_runtime = 0:00:36.81
eval_samples_per_second = 97.685
eval_steps_per_second = 12.224
|
AkshaySg/LanguageIdentification
|
AkshaySg
| null | 7 | 1 | null | 0 | null | false | false | false |
apache-2.0
|
['multilingual']
|
['VoxLingua107']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['LID', 'spoken language recognition']
| false | true | true | 278 |
# Spoken Language Identification Model
## Model description
The model can classify a speech utterance according to the language spoken.
It covers the following languages (
English,
Indonesian,
Japanese,
Korean,
Thai,
Vietnamese,
Mandarin Chinese).
|
AkshaySg/langid
|
AkshaySg
| null | 7 | 0 |
speechbrain
| 1 |
audio-classification
| true | false | false |
apache-2.0
|
['multilingual']
|
['VoxLingua107']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107']
| false | true | true | 5,487 |
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn", savedir="tmp")
# Download a Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[0.3210, 0.3751, 0.3680, 0.3939, 0.4026, 0.3644, 0.3689, 0.3597, 0.3508,
0.3666, 0.3895, 0.3978, 0.3848, 0.3957, 0.3949, 0.3586, 0.4360, 0.3997,
0.4106, 0.3886, 0.4177, 0.3870, 0.3764, 0.3763, 0.3672, 0.4000, 0.4256,
0.4091, 0.3563, 0.3695, 0.3320, 0.3838, 0.3850, 0.3867, 0.3878, 0.3944,
0.3924, 0.4063, 0.3803, 0.3830, 0.2996, 0.4187, 0.3976, 0.3651, 0.3950,
0.3744, 0.4295, 0.3807, 0.3613, 0.4710, 0.3530, 0.4156, 0.3651, 0.3777,
0.3813, 0.6063, 0.3708, 0.3886, 0.3766, 0.4023, 0.3785, 0.3612, 0.4193,
0.3720, 0.4406, 0.3243, 0.3866, 0.3866, 0.4104, 0.4294, 0.4175, 0.3364,
0.3595, 0.3443, 0.3565, 0.3776, 0.3985, 0.3778, 0.2382, 0.4115, 0.4017,
0.4070, 0.3266, 0.3648, 0.3888, 0.3907, 0.3755, 0.3631, 0.4460, 0.3464,
0.3898, 0.3661, 0.3883, 0.3772, 0.9289, 0.3687, 0.4298, 0.4211, 0.3838,
0.3521, 0.3515, 0.3465, 0.4772, 0.4043, 0.3844, 0.3973, 0.4343]]), tensor([0.9289]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as cosine scores between
# the languages and the given utterance (i.e., the larger the better)
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Probably its accuracy on smaller languages is quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 7% on the development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
|
Aleksandar/bert-srb-base-cased-oscar
|
Aleksandar
|
bert
| 10 | 3 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 883 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-base-cased-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar/bert-srb-ner-setimes
|
Aleksandar
|
bert
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 3,014 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-ner-setimes
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1955
- Precision: 0.8229
- Recall: 0.8465
- F1: 0.8345
- Accuracy: 0.9645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 104 | 0.2281 | 0.6589 | 0.7001 | 0.6789 | 0.9350 |
| No log | 2.0 | 208 | 0.1833 | 0.7105 | 0.7694 | 0.7388 | 0.9470 |
| No log | 3.0 | 312 | 0.1573 | 0.7461 | 0.7778 | 0.7616 | 0.9525 |
| No log | 4.0 | 416 | 0.1489 | 0.7665 | 0.8091 | 0.7872 | 0.9557 |
| 0.1898 | 5.0 | 520 | 0.1445 | 0.7881 | 0.8327 | 0.8098 | 0.9587 |
| 0.1898 | 6.0 | 624 | 0.1473 | 0.7913 | 0.8316 | 0.8109 | 0.9601 |
| 0.1898 | 7.0 | 728 | 0.1558 | 0.8101 | 0.8347 | 0.8222 | 0.9620 |
| 0.1898 | 8.0 | 832 | 0.1616 | 0.8026 | 0.8302 | 0.8162 | 0.9612 |
| 0.1898 | 9.0 | 936 | 0.1716 | 0.8127 | 0.8409 | 0.8266 | 0.9631 |
| 0.0393 | 10.0 | 1040 | 0.1751 | 0.8140 | 0.8369 | 0.8253 | 0.9628 |
| 0.0393 | 11.0 | 1144 | 0.1775 | 0.8096 | 0.8420 | 0.8255 | 0.9626 |
| 0.0393 | 12.0 | 1248 | 0.1763 | 0.8161 | 0.8386 | 0.8272 | 0.9636 |
| 0.0393 | 13.0 | 1352 | 0.1949 | 0.8259 | 0.8400 | 0.8329 | 0.9634 |
| 0.0393 | 14.0 | 1456 | 0.1842 | 0.8205 | 0.8420 | 0.8311 | 0.9642 |
| 0.0111 | 15.0 | 1560 | 0.1862 | 0.8160 | 0.8493 | 0.8323 | 0.9646 |
| 0.0111 | 16.0 | 1664 | 0.1989 | 0.8176 | 0.8367 | 0.8270 | 0.9627 |
| 0.0111 | 17.0 | 1768 | 0.1945 | 0.8246 | 0.8409 | 0.8327 | 0.9638 |
| 0.0111 | 18.0 | 1872 | 0.1997 | 0.8270 | 0.8426 | 0.8347 | 0.9634 |
| 0.0111 | 19.0 | 1976 | 0.1917 | 0.8258 | 0.8491 | 0.8373 | 0.9651 |
| 0.0051 | 20.0 | 2080 | 0.1955 | 0.8229 | 0.8465 | 0.8345 | 0.9645 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar/bert-srb-ner
|
Aleksandar
|
bert
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false | null | null |
['wikiann']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 3,031 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3561
- Precision: 0.8909
- Recall: 0.9082
- F1: 0.8995
- Accuracy: 0.9547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3907 | 1.0 | 625 | 0.2316 | 0.8255 | 0.8314 | 0.8285 | 0.9259 |
| 0.2091 | 2.0 | 1250 | 0.1920 | 0.8598 | 0.8731 | 0.8664 | 0.9420 |
| 0.1562 | 3.0 | 1875 | 0.1833 | 0.8608 | 0.8820 | 0.8713 | 0.9441 |
| 0.0919 | 4.0 | 2500 | 0.1985 | 0.8712 | 0.8886 | 0.8798 | 0.9476 |
| 0.0625 | 5.0 | 3125 | 0.2195 | 0.8762 | 0.8923 | 0.8842 | 0.9485 |
| 0.0545 | 6.0 | 3750 | 0.2320 | 0.8706 | 0.9004 | 0.8852 | 0.9495 |
| 0.0403 | 7.0 | 4375 | 0.2459 | 0.8817 | 0.8957 | 0.8887 | 0.9505 |
| 0.0269 | 8.0 | 5000 | 0.2603 | 0.8813 | 0.9021 | 0.8916 | 0.9516 |
| 0.0193 | 9.0 | 5625 | 0.2916 | 0.8812 | 0.8949 | 0.8880 | 0.9500 |
| 0.0162 | 10.0 | 6250 | 0.2938 | 0.8814 | 0.9025 | 0.8918 | 0.9520 |
| 0.0134 | 11.0 | 6875 | 0.3330 | 0.8809 | 0.8961 | 0.8885 | 0.9497 |
| 0.0076 | 12.0 | 7500 | 0.3141 | 0.8840 | 0.9025 | 0.8932 | 0.9524 |
| 0.0069 | 13.0 | 8125 | 0.3292 | 0.8819 | 0.9065 | 0.8940 | 0.9535 |
| 0.0053 | 14.0 | 8750 | 0.3454 | 0.8844 | 0.9018 | 0.8930 | 0.9523 |
| 0.0038 | 15.0 | 9375 | 0.3519 | 0.8912 | 0.9061 | 0.8986 | 0.9539 |
| 0.0034 | 16.0 | 10000 | 0.3437 | 0.8894 | 0.9038 | 0.8965 | 0.9539 |
| 0.0024 | 17.0 | 10625 | 0.3518 | 0.8896 | 0.9072 | 0.8983 | 0.9543 |
| 0.0018 | 18.0 | 11250 | 0.3572 | 0.8877 | 0.9072 | 0.8973 | 0.9543 |
| 0.0015 | 19.0 | 11875 | 0.3554 | 0.8910 | 0.9081 | 0.8994 | 0.9549 |
| 0.0011 | 20.0 | 12500 | 0.3561 | 0.8909 | 0.9082 | 0.8995 | 0.9547 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar/distilbert-srb-base-cased-oscar
|
Aleksandar
|
distilbert
| 10 | 2 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 889 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-base-cased-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar/distilbert-srb-ner-setimes
|
Aleksandar
|
distilbert
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 3,020 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-ner-setimes
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1838
- Precision: 0.8370
- Recall: 0.8617
- F1: 0.8492
- Accuracy: 0.9665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 104 | 0.2319 | 0.6668 | 0.7029 | 0.6844 | 0.9358 |
| No log | 2.0 | 208 | 0.1850 | 0.7265 | 0.7508 | 0.7385 | 0.9469 |
| No log | 3.0 | 312 | 0.1584 | 0.7555 | 0.7937 | 0.7741 | 0.9538 |
| No log | 4.0 | 416 | 0.1484 | 0.7644 | 0.8128 | 0.7879 | 0.9571 |
| 0.1939 | 5.0 | 520 | 0.1383 | 0.7850 | 0.8131 | 0.7988 | 0.9604 |
| 0.1939 | 6.0 | 624 | 0.1409 | 0.7914 | 0.8359 | 0.8130 | 0.9632 |
| 0.1939 | 7.0 | 728 | 0.1526 | 0.8176 | 0.8392 | 0.8283 | 0.9637 |
| 0.1939 | 8.0 | 832 | 0.1536 | 0.8195 | 0.8409 | 0.8301 | 0.9641 |
| 0.1939 | 9.0 | 936 | 0.1538 | 0.8242 | 0.8523 | 0.8380 | 0.9661 |
| 0.0364 | 10.0 | 1040 | 0.1612 | 0.8228 | 0.8413 | 0.8319 | 0.9652 |
| 0.0364 | 11.0 | 1144 | 0.1721 | 0.8289 | 0.8503 | 0.8395 | 0.9656 |
| 0.0364 | 12.0 | 1248 | 0.1645 | 0.8301 | 0.8590 | 0.8443 | 0.9663 |
| 0.0364 | 13.0 | 1352 | 0.1747 | 0.8352 | 0.8540 | 0.8445 | 0.9665 |
| 0.0364 | 14.0 | 1456 | 0.1703 | 0.8277 | 0.8573 | 0.8422 | 0.9663 |
| 0.011 | 15.0 | 1560 | 0.1770 | 0.8314 | 0.8624 | 0.8466 | 0.9665 |
| 0.011 | 16.0 | 1664 | 0.1903 | 0.8399 | 0.8537 | 0.8467 | 0.9661 |
| 0.011 | 17.0 | 1768 | 0.1837 | 0.8363 | 0.8590 | 0.8475 | 0.9665 |
| 0.011 | 18.0 | 1872 | 0.1820 | 0.8338 | 0.8570 | 0.8453 | 0.9667 |
| 0.011 | 19.0 | 1976 | 0.1855 | 0.8382 | 0.8620 | 0.8499 | 0.9666 |
| 0.0053 | 20.0 | 2080 | 0.1838 | 0.8370 | 0.8617 | 0.8492 | 0.9665 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar/distilbert-srb-ner
|
Aleksandar
|
distilbert
| 10 | 11 |
transformers
| 0 |
token-classification
| true | false | false | null |
['sr']
|
['wikiann']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 3,037 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2972
- Precision: 0.8871
- Recall: 0.9100
- F1: 0.8984
- Accuracy: 0.9577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3818 | 1.0 | 625 | 0.2175 | 0.8175 | 0.8370 | 0.8272 | 0.9306 |
| 0.198 | 2.0 | 1250 | 0.1766 | 0.8551 | 0.8732 | 0.8640 | 0.9458 |
| 0.1423 | 3.0 | 1875 | 0.1702 | 0.8597 | 0.8763 | 0.8679 | 0.9473 |
| 0.079 | 4.0 | 2500 | 0.1774 | 0.8674 | 0.8875 | 0.8773 | 0.9515 |
| 0.0531 | 5.0 | 3125 | 0.2011 | 0.8688 | 0.8965 | 0.8825 | 0.9522 |
| 0.0429 | 6.0 | 3750 | 0.2082 | 0.8769 | 0.8970 | 0.8868 | 0.9538 |
| 0.032 | 7.0 | 4375 | 0.2268 | 0.8764 | 0.8916 | 0.8839 | 0.9528 |
| 0.0204 | 8.0 | 5000 | 0.2423 | 0.8726 | 0.8959 | 0.8841 | 0.9529 |
| 0.0148 | 9.0 | 5625 | 0.2522 | 0.8774 | 0.8991 | 0.8881 | 0.9538 |
| 0.0125 | 10.0 | 6250 | 0.2544 | 0.8823 | 0.9024 | 0.8922 | 0.9559 |
| 0.0108 | 11.0 | 6875 | 0.2592 | 0.8780 | 0.9041 | 0.8909 | 0.9553 |
| 0.007 | 12.0 | 7500 | 0.2672 | 0.8877 | 0.9056 | 0.8965 | 0.9571 |
| 0.0048 | 13.0 | 8125 | 0.2714 | 0.8879 | 0.9089 | 0.8982 | 0.9583 |
| 0.0049 | 14.0 | 8750 | 0.2872 | 0.8873 | 0.9068 | 0.8970 | 0.9573 |
| 0.0034 | 15.0 | 9375 | 0.2915 | 0.8883 | 0.9114 | 0.8997 | 0.9577 |
| 0.0027 | 16.0 | 10000 | 0.2890 | 0.8865 | 0.9103 | 0.8983 | 0.9581 |
| 0.0028 | 17.0 | 10625 | 0.2885 | 0.8877 | 0.9085 | 0.8980 | 0.9576 |
| 0.0014 | 18.0 | 11250 | 0.2928 | 0.8860 | 0.9073 | 0.8965 | 0.9577 |
| 0.0013 | 19.0 | 11875 | 0.2963 | 0.8856 | 0.9099 | 0.8976 | 0.9576 |
| 0.001 | 20.0 | 12500 | 0.2972 | 0.8871 | 0.9100 | 0.8984 | 0.9577 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar/electra-srb-ner-setimes
|
Aleksandar
|
electra
| 10 | 12 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 3,017 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-ner-setimes
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2804
- Precision: 0.8286
- Recall: 0.8081
- F1: 0.8182
- Accuracy: 0.9547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 104 | 0.2981 | 0.6737 | 0.6113 | 0.6410 | 0.9174 |
| No log | 2.0 | 208 | 0.2355 | 0.7279 | 0.6701 | 0.6978 | 0.9307 |
| No log | 3.0 | 312 | 0.2079 | 0.7707 | 0.7062 | 0.7371 | 0.9402 |
| No log | 4.0 | 416 | 0.2078 | 0.7689 | 0.7479 | 0.7582 | 0.9449 |
| 0.2391 | 5.0 | 520 | 0.2089 | 0.8083 | 0.7476 | 0.7767 | 0.9484 |
| 0.2391 | 6.0 | 624 | 0.2199 | 0.7981 | 0.7726 | 0.7851 | 0.9487 |
| 0.2391 | 7.0 | 728 | 0.2528 | 0.8205 | 0.7749 | 0.7971 | 0.9511 |
| 0.2391 | 8.0 | 832 | 0.2265 | 0.8074 | 0.8003 | 0.8038 | 0.9524 |
| 0.2391 | 9.0 | 936 | 0.2843 | 0.8265 | 0.7716 | 0.7981 | 0.9504 |
| 0.0378 | 10.0 | 1040 | 0.2450 | 0.8024 | 0.8019 | 0.8021 | 0.9520 |
| 0.0378 | 11.0 | 1144 | 0.2550 | 0.8116 | 0.7986 | 0.8051 | 0.9519 |
| 0.0378 | 12.0 | 1248 | 0.2706 | 0.8208 | 0.7957 | 0.8081 | 0.9532 |
| 0.0378 | 13.0 | 1352 | 0.2664 | 0.8040 | 0.8035 | 0.8038 | 0.9530 |
| 0.0378 | 14.0 | 1456 | 0.2571 | 0.8011 | 0.8110 | 0.8060 | 0.9529 |
| 0.0099 | 15.0 | 1560 | 0.2673 | 0.8051 | 0.8129 | 0.8090 | 0.9534 |
| 0.0099 | 16.0 | 1664 | 0.2733 | 0.8074 | 0.8087 | 0.8081 | 0.9529 |
| 0.0099 | 17.0 | 1768 | 0.2835 | 0.8254 | 0.8074 | 0.8163 | 0.9543 |
| 0.0099 | 18.0 | 1872 | 0.2771 | 0.8222 | 0.8081 | 0.8151 | 0.9545 |
| 0.0099 | 19.0 | 1976 | 0.2776 | 0.8237 | 0.8084 | 0.8160 | 0.9546 |
| 0.0044 | 20.0 | 2080 | 0.2804 | 0.8286 | 0.8081 | 0.8182 | 0.9547 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar/electra-srb-ner
|
Aleksandar
|
electra
| 10 | 6 |
transformers
| 0 |
token-classification
| true | false | false | null | null |
['wikiann']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 3,034 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3406
- Precision: 0.8934
- Recall: 0.9087
- F1: 0.9010
- Accuracy: 0.9568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3686 | 1.0 | 625 | 0.2108 | 0.8326 | 0.8494 | 0.8409 | 0.9335 |
| 0.1886 | 2.0 | 1250 | 0.1784 | 0.8737 | 0.8713 | 0.8725 | 0.9456 |
| 0.1323 | 3.0 | 1875 | 0.1805 | 0.8654 | 0.8870 | 0.8760 | 0.9468 |
| 0.0675 | 4.0 | 2500 | 0.2018 | 0.8736 | 0.8880 | 0.8807 | 0.9502 |
| 0.0425 | 5.0 | 3125 | 0.2162 | 0.8818 | 0.8945 | 0.8881 | 0.9512 |
| 0.0343 | 6.0 | 3750 | 0.2492 | 0.8790 | 0.8928 | 0.8859 | 0.9513 |
| 0.0253 | 7.0 | 4375 | 0.2562 | 0.8821 | 0.9006 | 0.8912 | 0.9525 |
| 0.0142 | 8.0 | 5000 | 0.2788 | 0.8807 | 0.9013 | 0.8909 | 0.9524 |
| 0.0114 | 9.0 | 5625 | 0.2793 | 0.8861 | 0.9002 | 0.8931 | 0.9534 |
| 0.0095 | 10.0 | 6250 | 0.2967 | 0.8887 | 0.9034 | 0.8960 | 0.9550 |
| 0.008 | 11.0 | 6875 | 0.2993 | 0.8899 | 0.9067 | 0.8982 | 0.9556 |
| 0.0048 | 12.0 | 7500 | 0.3215 | 0.8887 | 0.9038 | 0.8962 | 0.9545 |
| 0.0034 | 13.0 | 8125 | 0.3242 | 0.8897 | 0.9068 | 0.8982 | 0.9554 |
| 0.003 | 14.0 | 8750 | 0.3311 | 0.8884 | 0.9085 | 0.8983 | 0.9559 |
| 0.0025 | 15.0 | 9375 | 0.3383 | 0.8943 | 0.9062 | 0.9002 | 0.9562 |
| 0.0011 | 16.0 | 10000 | 0.3346 | 0.8941 | 0.9112 | 0.9026 | 0.9574 |
| 0.0015 | 17.0 | 10625 | 0.3362 | 0.8944 | 0.9081 | 0.9012 | 0.9567 |
| 0.001 | 18.0 | 11250 | 0.3464 | 0.8877 | 0.9100 | 0.8987 | 0.9559 |
| 0.0012 | 19.0 | 11875 | 0.3415 | 0.8944 | 0.9089 | 0.9016 | 0.9568 |
| 0.0005 | 20.0 | 12500 | 0.3406 | 0.8934 | 0.9087 | 0.9010 | 0.9568 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar/electra-srb-oscar
|
Aleksandar
|
electra
| 10 | 2 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 875 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandra/herbert-base-cased-finetuned-squad
|
Aleksandra
|
bert
| 13 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,280 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# herbert-base-cased-finetuned-squad
This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2071
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 233 | 1.2474 |
| No log | 2.0 | 466 | 1.1951 |
| 1.3459 | 3.0 | 699 | 1.2071 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
AlekseyKulnevich/Pegasus-HeaderGeneration
|
AlekseyKulnevich
|
pegasus
| 4 | 5 |
transformers
| 0 |
text2text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 2,989 |
**Usage of HuggingFace Transformers for the header generation task**
```
from transformers import AutoModelForSeq2SeqLM, PegasusTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-HeaderGeneration")
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your input document

input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']

headers = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=10)
headers = tokenizer.batch_decode(headers, skip_special_tokens=True)
```
**Decoder configuration examples:**
[**Input text you can see here**](https://www.bbc.com/news/science-environment-59775105)
```
headers = model.generate(input_ids=input_ids,
attention_mask=input_mask,
num_beams=32,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=20)
tokenizer.batch_decode(headers, skip_special_tokens=True)
```
output:
1. *the impact of climate change on tropical cyclones*
2. *the impact of human induced climate change on tropical cyclones*
3. *the impact of climate change on tropical cyclone formation in the midlatitudes*
4. *how climate change will expand the range of tropical cyclones?*
5. *the impact of climate change on tropical cyclones in the midlatitudes*
6. *global warming will expand the range of tropical cyclones*
7. *climate change will expand the range of tropical cyclones*
8. *the impact of climate change on tropical cyclone formation*
9. *the impact of human induced climate change on tropical cyclone formation*
10. *tropical cyclones in the mid-latitudes*
11. *climate change will expand the range of tropical cyclones in the middle latitudes*
12. *global warming will expand the range of tropical cyclones, a new study says*
13. *the impacts of climate change on tropical cyclones*
14. *the impact of global warming on tropical cyclones*
15. *climate change will expand the range of tropical cyclones, a new study says*
16. *global warming will expand the range of tropical cyclones in the middle latitudes*
17. *the effects of climate change on tropical cyclones*
18. *how climate change will expand the range of tropical cyclones*
19. *climate change will expand the range of tropical cyclones over the equator*
20. *the impact of human induced climate change on tropical cyclones.*
You can also experiment with the following parameters of the `generate` method:
- top_k
- top_p
[**The meaning of these text-generation parameters is explained here**](https://huggingface.co/blog/how-to-generate)
|
AlekseyKulnevich/Pegasus-QuestionGeneration
|
AlekseyKulnevich
|
pegasus
| 4 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 3,797 |
**Usage of HuggingFace Transformers for the question generation task**
```
from transformers import AutoModelForSeq2SeqLM, PegasusTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-QuestionGeneration")
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your input document

input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']

questions = model.generate(input_ids=input_ids,
                           attention_mask=input_mask,
                           num_beams=32,
                           no_repeat_ngram_size=2,
                           early_stopping=True,
                           num_return_sequences=10)
questions = tokenizer.batch_decode(questions, skip_special_tokens=True)
```
**Decoder configuration examples:**
[**Input text you can see here**](https://www.bbc.com/news/science-environment-59775105)
```
questions = model.generate(input_ids=input_ids,
attention_mask=input_mask,
num_beams=32,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=10)
tokenizer.batch_decode(questions, skip_special_tokens=True)
```
output:
1. *What is the impact of human induced climate change on tropical cyclones?*
2. *What is the impact of climate change on tropical cyclones?*
3. *What is the impact of human induced climate change on tropical cyclone formation?*
4. *How many tropical cyclones will occur in the mid-latitudes?*
5. *What is the impact of climate change on the formation of tropical cyclones?*
6. *Is it possible for a tropical cyclone to form in the middle latitudes?*
7. *How many tropical cyclones will be formed in the mid-latitudes?*
8. *How many tropical cyclones will there be in the mid-latitudes?*
9. *How many tropical cyclones will form in the mid-latitudes?*
10. *What is the impact of global warming on tropical cyclones?*
11. *How long does it take for a tropical cyclone to form?*
12. *What are the impacts of climate change on tropical cyclones?*
13. *What are the effects of climate change on tropical cyclones?*
14. *How many tropical cyclones will be able to form in the middle latitudes?*
15. *What is the impact of climate change on tropical cyclone formation?*
16. *What is the effect of climate change on tropical cyclones?*
17. *How long does it take for a tropical cyclone to form in the middle latitude?*
18. *How many tropical cyclones will occur in the middle latitudes?*
19. *How many tropical cyclones are likely to form in the midlatitudes?*
20. *How many tropical cyclones are likely to form in the middle latitudes?*
21. *How many tropical cyclones are expected to form in the midlatitudes?*
22. *How many tropical cyclones will be formed in the middle latitudes?*
23. *How many tropical cyclones will there be in the middle latitudes?*
24. *How long will it take for a tropical cyclone to form in the middle latitude?*
25. *What is the impact of global warming on tropical cyclone formation?*
26. *How many tropical cyclones will form in the middle latitudes?*
27. *How many tropical cyclones can we expect to form in the middle latitudes?*
28. *Is it possible for a tropical cyclone to form in the middle latitude?*
29. *What is the effect of climate change on tropical cyclone formation?*
30. *What are the effects of climate change on tropical cyclone formation?*
You can also experiment with the following parameters of the `generate` method:
- top_k
- top_p
[**The meaning of these text-generation parameters is explained here**](https://huggingface.co/blog/how-to-generate)
|
AlekseyKulnevich/Pegasus-Summarization
|
AlekseyKulnevich
|
pegasus
| 4 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 3,156 |
**Usage of HuggingFace Transformers for the summarization task**
```
from transformers import AutoModelForSeq2SeqLM, PegasusTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-Summarization")
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your input document

input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']

summary = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         min_length=100,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=10)
summary_text = tokenizer.batch_decode(summary, skip_special_tokens=True)
```
**Decoder configuration examples:**
[**Input text you can see here**](https://www.bbc.com/news/science-environment-59775105)
```
summary = model.generate(input_ids=input_ids,
attention_mask=input_mask,
num_beams=32,
min_length=100,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=1)
tokenizer.batch_decode(summary, skip_special_tokens=True)
```
output:
1. *global warming will expand the range of tropical cyclones in the mid-latitudes of the world, according to a new study published by the Intergovernmental Panel on Climate change (IPCC) and the US National Oceanic and Atmospheric Administration (NOAA) The study shows that a warming climate will allow more of these types of storms to form over a wider range than they have been able to do over the past three million years. "As the climate warms, it's likely that these storms will become more frequent and more intense," said the authors of this study.*
```
summary = model.generate(input_ids=input_ids,
attention_mask=input_mask,
top_k=30,
no_repeat_ngram_size=2,
early_stopping=True,
min_length=100,
num_return_sequences=1)
tokenizer.batch_decode(summary, skip_special_tokens=True)
```
output:
1. *tropical cyclones in the mid-latitudes of the world will likely form more of these types of storms, according to a new study published by the Intergovernmental Panel on Climate change (IPCC) on the impact of human induced climate change on these storms. The study shows that a warming climate will increase the likelihood of a subtropical cyclone forming over a wider range of latitudes, including the equator, than it has been for the past three million years, and that it will be more likely to form over the tropics.*
You can also experiment with the following parameters of the `generate` method:
- top_k
- top_p
[**The meaning of these text-generation parameters is explained here**](https://huggingface.co/blog/how-to-generate)
|
AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru
|
AlexKay
|
xlm-roberta
| 8 | 960 |
transformers
| 15 |
question-answering
| true | false | false |
apache-2.0
|
['en', 'ru', 'multilingual']
| null | null | 1 | 1 | 0 | 0 | 1 | 0 | 1 |
[]
| false | true | true | 416 |
# XLM-RoBERTa large model whole word masking finetuned on SQuAD
Pretrained model using a masked language modeling (MLM) objective.
Fine-tuned on English and Russian QA datasets.
## Used QA Datasets
SQuAD + SberQuAD
The [SberQuAD original paper](https://arxiv.org/pdf/1912.09723.pdf) is here; recommended reading!
## Evaluation results
The results obtained are the following (SberQuAD):
```
f1 = 84.3
exact_match = 65.3
```
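## Usage
A minimal sketch with the `question-answering` pipeline (the question and context below are made up for illustration, and it is assumed the repository ships the tokenizer files):
```
from transformers import pipeline

# Sketch: standard extractive-QA inference with this checkpoint.
qa = pipeline(
    "question-answering",
    model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru",
)

result = qa(
    question="Where does Sarah live?",
    context="My name is Sarah and I live in London.",
)
print(result["answer"], result["score"])
```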
|
AlexMaclean/sentence-compression-roberta
|
AlexMaclean
|
roberta
| 10 | 5 |
transformers
| 1 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,555 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-compression-roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3465
- Accuracy: 0.8473
- F1: 0.6835
- Precision: 0.6835
- Recall: 0.6835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5312 | 1.0 | 50 | 0.5251 | 0.7591 | 0.0040 | 0.75 | 0.0020 |
| 0.4 | 2.0 | 100 | 0.4003 | 0.8200 | 0.5341 | 0.7113 | 0.4275 |
| 0.3355 | 3.0 | 150 | 0.3465 | 0.8473 | 0.6835 | 0.6835 | 0.6835 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AlexMaclean/sentence-compression
|
AlexMaclean
|
distilbert
| 12 | 17 |
transformers
| 1 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,570 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-compression
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2973
- Accuracy: 0.8912
- F1: 0.8367
- Precision: 0.8495
- Recall: 0.8243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2686 | 1.0 | 10000 | 0.2667 | 0.8894 | 0.8283 | 0.8725 | 0.7884 |
| 0.2205 | 2.0 | 20000 | 0.2704 | 0.8925 | 0.8372 | 0.8579 | 0.8175 |
| 0.1476 | 3.0 | 30000 | 0.2973 | 0.8912 | 0.8367 | 0.8495 | 0.8243 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
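A minimal inference sketch (the keep/delete label names are an assumption, since the label mapping is not documented in this card):
```
from transformers import pipeline

# Sketch: tag each token with a keep/delete label for extractive sentence compression.
tagger = pipeline(
    "token-classification",
    model="AlexMaclean/sentence-compression",
    aggregation_strategy="simple",
)

sentence = "The quick brown fox, which was very hungry, jumped over the lazy dog."
for span in tagger(sentence):
    print(span["word"], span["entity_group"], round(span["score"], 3))
```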
|
AlexN/xls-r-300m-fr-0
|
AlexN
|
wav2vec2
| 38 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fr']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
| true | true | true | 2,900 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2388
- Wer: 0.3681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3748 | 0.07 | 500 | 3.8784 | 1.0 |
| 2.8068 | 0.14 | 1000 | 2.8289 | 0.9826 |
| 1.6698 | 0.22 | 1500 | 0.8811 | 0.7127 |
| 1.3488 | 0.29 | 2000 | 0.5166 | 0.5369 |
| 1.2239 | 0.36 | 2500 | 0.4105 | 0.4741 |
| 1.1537 | 0.43 | 3000 | 0.3585 | 0.4448 |
| 1.1184 | 0.51 | 3500 | 0.3336 | 0.4292 |
| 1.0968 | 0.58 | 4000 | 0.3195 | 0.4180 |
| 1.0737 | 0.65 | 4500 | 0.3075 | 0.4141 |
| 1.0677 | 0.72 | 5000 | 0.3015 | 0.4089 |
| 1.0462 | 0.8 | 5500 | 0.2971 | 0.4077 |
| 1.0392 | 0.87 | 6000 | 0.2870 | 0.3997 |
| 1.0178 | 0.94 | 6500 | 0.2805 | 0.3963 |
| 0.992 | 1.01 | 7000 | 0.2748 | 0.3935 |
| 1.0197 | 1.09 | 7500 | 0.2691 | 0.3884 |
| 1.0056 | 1.16 | 8000 | 0.2682 | 0.3889 |
| 0.9826 | 1.23 | 8500 | 0.2647 | 0.3868 |
| 0.9815 | 1.3 | 9000 | 0.2603 | 0.3832 |
| 0.9717 | 1.37 | 9500 | 0.2561 | 0.3807 |
| 0.9605 | 1.45 | 10000 | 0.2523 | 0.3783 |
| 0.96 | 1.52 | 10500 | 0.2494 | 0.3788 |
| 0.9442 | 1.59 | 11000 | 0.2478 | 0.3760 |
| 0.9564 | 1.66 | 11500 | 0.2454 | 0.3733 |
| 0.9436 | 1.74 | 12000 | 0.2439 | 0.3747 |
| 0.938 | 1.81 | 12500 | 0.2411 | 0.3716 |
| 0.9353 | 1.88 | 13000 | 0.2397 | 0.3698 |
| 0.9271 | 1.95 | 13500 | 0.2388 | 0.3681 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
AlexN/xls-r-300m-fr
|
AlexN
|
wav2vec2
| 134 | 6 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false | null |
['fr']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
| true | true | true | 1,021 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2700
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
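A minimal inference sketch (the audio file name is a placeholder; input must be mono 16 kHz audio, and it is assumed the repository ships a `Wav2Vec2Processor`):
```
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Sketch: transcribe a French audio clip with this checkpoint.
model_id = "AlexN/xls-r-300m-fr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

waveform, sample_rate = torchaudio.load("sample_fr.wav")  # placeholder file
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)[0]  # first channel, 16 kHz

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```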
|
AlexN/xls-r-300m-pt
|
AlexN
|
wav2vec2
| 53 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pt']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'robust-speech-event', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'hf-asr-leaderboard']
| true | true | true | 2,657 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2290
- Wer: 0.2382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0952 | 0.64 | 500 | 3.0982 | 1.0 |
| 1.7975 | 1.29 | 1000 | 0.7887 | 0.5651 |
| 1.4138 | 1.93 | 1500 | 0.5238 | 0.4389 |
| 1.344 | 2.57 | 2000 | 0.4775 | 0.4318 |
| 1.2737 | 3.21 | 2500 | 0.4648 | 0.4075 |
| 1.2554 | 3.86 | 3000 | 0.4069 | 0.3678 |
| 1.1996 | 4.5 | 3500 | 0.3914 | 0.3668 |
| 1.1427 | 5.14 | 4000 | 0.3694 | 0.3572 |
| 1.1372 | 5.78 | 4500 | 0.3568 | 0.3501 |
| 1.0831 | 6.43 | 5000 | 0.3331 | 0.3253 |
| 1.1074 | 7.07 | 5500 | 0.3332 | 0.3352 |
| 1.0536 | 7.71 | 6000 | 0.3131 | 0.3152 |
| 1.0248 | 8.35 | 6500 | 0.3024 | 0.3023 |
| 1.0075 | 9.0 | 7000 | 0.2948 | 0.3028 |
| 0.979 | 9.64 | 7500 | 0.2796 | 0.2853 |
| 0.9594 | 10.28 | 8000 | 0.2719 | 0.2789 |
| 0.9172 | 10.93 | 8500 | 0.2620 | 0.2695 |
| 0.9047 | 11.57 | 9000 | 0.2537 | 0.2596 |
| 0.8777 | 12.21 | 9500 | 0.2438 | 0.2525 |
| 0.8629 | 12.85 | 10000 | 0.2409 | 0.2493 |
| 0.8575 | 13.5 | 10500 | 0.2366 | 0.2440 |
| 0.8361 | 14.14 | 11000 | 0.2317 | 0.2385 |
| 0.8126 | 14.78 | 11500 | 0.2290 | 0.2382 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Alireza1044/albert-base-v2-cola
|
Alireza1044
|
albert
| 16 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,006 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cola
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7552
- Matthews Correlation: 0.5495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Alireza1044/albert-base-v2-mnli
|
Alireza1044
|
albert
| 14 | 10 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 994 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5383
- Accuracy: 0.8501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Alireza1044/albert-base-v2-mrpc
|
Alireza1044
|
albert
| 16 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,032 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4171
- Accuracy: 0.8627
- F1: 0.9011
- Combined Score: 0.8819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Alireza1044/albert-base-v2-qnli
|
Alireza1044
|
albert
| 20 | 20 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 994 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3608
- Accuracy: 0.9138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Alireza1044/albert-base-v2-qqp
|
Alireza1044
|
albert
| 14 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,030 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qqp
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3695
- Accuracy: 0.9050
- F1: 0.8723
- Combined Score: 0.8886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Alireza1044/albert-base-v2-rte
|
Alireza1044
|
albert
| 16 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 992 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rte
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7994
- Accuracy: 0.6859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Alireza1044/albert-base-v2-sst2
|
Alireza1044
|
albert
| 16 | 16 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 994 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sst2
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3808
- Accuracy: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Alireza1044/albert-base-v2-stsb
|
Alireza1044
|
albert
| 16 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,038 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stsb
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3978
- Pearson: 0.9090
- Spearmanr: 0.9051
- Combined Score: 0.9071
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Alireza1044/albert-base-v2-wnli
|
Alireza1044
|
albert
| 14 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 994 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6898
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Alireza1044/bert_classification_lm
|
Alireza1044
|
bert
| 8 | 1 |
transformers
| 0 |
text-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 990 |
A simple model trained on the dialogues of characters in the NBC series `The Office`. The model performs binary classification between `Michael Scott`'s and `Dwight Schrute`'s dialogues.
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-c3ow" colspan="2">Label Definitions</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-c3ow">Label 0</td>
<td class="tg-c3ow">Michael</td>
</tr>
<tr>
<td class="tg-c3ow">Label 1</td>
<td class="tg-c3ow">Dwight</td>
</tr>
</tbody>
</table>
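A minimal usage sketch (assuming the repository ships the tokenizer files; the `LABEL_0`/`LABEL_1` names follow the default config convention and correspond to the table above):
```
from transformers import pipeline

# Sketch: classify a line of dialogue as Michael (label 0) or Dwight (label 1).
classifier = pipeline("text-classification", model="Alireza1044/bert_classification_lm")

print(classifier("Bears. Beets. Battlestar Galactica."))
# expected shape: [{'label': 'LABEL_1', 'score': ...}] -- label 1 is Dwight per the table above
```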
|
Aloka/mbart50-ft-si-en
|
Aloka
|
mbart
| 12 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,633 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart50-ft-si-en
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.98 | 30 | 5.6367 |
| No log | 1.98 | 60 | 4.1221 |
| No log | 2.98 | 90 | 3.1880 |
| No log | 3.98 | 120 | 3.1175 |
| No log | 4.98 | 150 | 3.3575 |
| No log | 5.98 | 180 | 3.7855 |
| No log | 6.98 | 210 | 4.3530 |
| No log | 7.98 | 240 | 4.7216 |
| No log | 8.98 | 270 | 4.9202 |
| No log | 9.98 | 300 | 5.0476 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.6.0
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Alstractor/distilbert-base-uncased-finetuned-cola
|
Alstractor
|
distilbert
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,571 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7272
- Matthews Correlation: 0.5343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5219 | 1.0 | 535 | 0.5340 | 0.4215 |
| 0.3467 | 2.0 | 1070 | 0.5131 | 0.5181 |
| 0.2331 | 3.0 | 1605 | 0.6406 | 0.5040 |
| 0.1695 | 4.0 | 2140 | 0.7272 | 0.5343 |
| 0.1212 | 5.0 | 2675 | 0.8399 | 0.5230 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Alvenir/wav2vec2-base-da
|
Alvenir
|
wav2vec2
| 4 | 61 |
transformers
| 4 | null | true | false | false |
apache-2.0
|
['da']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['speech']
| false | true | true | 808 |
# Wav2vec2-base for Danish
This wav2vec2-base model has been pretrained on ~1300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we were allowed to distribute the pretrained model.
This model was pretrained on 16kHz sampled speech audio. When using the model, make sure to use speech audio sampled at 16kHz.
The pre-training was done using the fairseq library in January 2021.
It needs to be fine-tuned to perform speech recognition.
# Finetuning
In order to fine-tune the model for speech recognition, you can draw inspiration from this [notebook tutorial](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F) or [this blog post tutorial](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2).
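As a concrete starting point, here is a minimal sketch of wrapping the pretrained encoder with a fresh CTC head (the vocabulary size is a placeholder and must match the character tokenizer you build from your own labelled Danish data):
```
from transformers import Wav2Vec2ForCTC

# Sketch: load the pretrained Danish encoder and attach a randomly initialised CTC head.
model = Wav2Vec2ForCTC.from_pretrained(
    "Alvenir/wav2vec2-base-da",
    ctc_loss_reduction="mean",
    vocab_size=32,  # placeholder: size of your own character vocabulary
)
model.freeze_feature_extractor()  # common choice when fine-tuning on limited data
```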
|
Amalq/distilbert-base-uncased-finetuned-cola
|
Amalq
|
distilbert
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,572 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7570
- Matthews Correlation: 0.5335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5315 | 1.0 | 535 | 0.5214 | 0.4009 |
| 0.354 | 2.0 | 1070 | 0.5275 | 0.4857 |
| 0.2396 | 3.0 | 1605 | 0.6610 | 0.4901 |
| 0.1825 | 4.0 | 2140 | 0.7570 | 0.5335 |
| 0.1271 | 5.0 | 2675 | 0.8923 | 0.5074 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Amalq/roberta-base-finetuned-schizophreniaReddit2
|
Amalq
|
roberta
| 9 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,361 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-schizophreniaReddit2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7785
## Model description
More information needed
## Intended uses & limitations
More information needed
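A minimal fill-mask sketch, assuming the standard pipeline API and RoBERTa's `<mask>` token; the example sentence is purely illustrative:
```python
from transformers import pipeline

# RoBERTa models use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model="Amalq/roberta-base-finetuned-schizophreniaReddit2")
for prediction in fill_mask("Today I finally talked to my <mask> about how I have been feeling."):
    print(prediction["token_str"], round(prediction["score"], 3))
```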
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 490 | 1.8093 |
| 1.9343 | 2.0 | 980 | 1.7996 |
| 1.8856 | 3.0 | 1470 | 1.7966 |
| 1.8552 | 4.0 | 1960 | 1.7844 |
| 1.8267 | 5.0 | 2450 | 1.7839 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AmazonScience/qanlu
|
AmazonScience
|
roberta
| 9 | 242 |
transformers
| 3 |
question-answering
| true | false | false |
cc-by-4.0
|
['en']
|
['atis']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,748 |
# Question Answering NLU
Question Answering NLU (QANLU) is an approach that maps the NLU task into question answering,
leveraging pre-trained question-answering models to perform well on few-shot settings. Instead of
training an intent classifier or a slot tagger, for example, we can ask the model intent- and
slot-related questions in natural language:
```
Context : Yes. No. I'm looking for a cheap flight to Boston.
Question: Is the user looking to book a flight?
Answer : Yes
Question: Is the user asking about departure time?
Answer : No
Question: What price is the user looking for?
Answer : cheap
Question: Where is the user flying from?
Answer : (empty)
```
Note the "Yes. No. " prepended in the context. Those are to allow the model to answer intent-related questions (e.g. "Is the user looking for a restaurant?").
Thus, by asking questions for each intent and slot in natural language, we can effectively construct an NLU hypothesis. For more details, please read the paper: [Language model is all you need: Natural language understanding as question answering](https://assets.amazon.science/33/ea/800419b24a09876601d8ab99bfb9/language-model-is-all-you-need-natural-language-understanding-as-question-answering.pdf).
## Model training
Instructions for how to train and evaluate a QANLU model, as well as the necessary code for ATIS are in the [Amazon Science repository](https://github.com/amazon-research/question-answering-nlu).
## Intended use and limitations
This model has been fine-tuned on ATIS (English) and is intended to demonstrate the power of this approach. For other domains or tasks, it should be further fine-tuned
on relevant data.
## Use in transformers:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
tokenizer = AutoTokenizer.from_pretrained("AmazonScience/qanlu", use_auth_token=True)
model = AutoModelForQuestionAnswering.from_pretrained("AmazonScience/qanlu", use_auth_token=True)
qa_pipeline = pipeline('question-answering', model=model, tokenizer=tokenizer)
qa_input = {
'context': 'Yes. No. I want a cheap flight to Boston.',
'question': 'What is the destination?'
}
answer = qa_pipeline(qa_input)
```
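For intent-style questions, the same `qa_pipeline` from the snippet above can be pointed at the `Yes. No. ` prefix; a small sketch:
```python
# Intent-style question: the answer should land inside the "Yes. No. " prefix.
intent_input = {
    'context': 'Yes. No. I want a cheap flight to Boston.',
    'question': 'Is the user looking to book a flight?'
}
print(qa_pipeline(intent_input)['answer'])
```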
## Citation
If you use this work, please cite:
```
@inproceedings{namazifar2021language,
title={Language model is all you need: Natural language understanding as question answering},
author={Namazifar, Mahdi and Papangelis, Alexandros and Tur, Gokhan and Hakkani-T{\"u}r, Dilek},
booktitle={ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7803--7807},
year={2021},
organization={IEEE}
}
```
## License
This library is licensed under the CC BY NC License.
|
Amrrs/indian-foods
|
Amrrs
|
vit
| 11 | 7 |
transformers
| 2 |
image-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'pytorch', 'huggingpics']
| false | true | true | 572 |
# indian-foods
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
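A minimal inference sketch, assuming the standard `image-classification` pipeline; `dish.jpg` is a placeholder path to your own photo:
```python
from transformers import pipeline

# "dish.jpg" is a placeholder path to your own image.
classifier = pipeline("image-classification", model="Amrrs/indian-foods")
for prediction in classifier("dish.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```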
## Example Images
#### idli

#### kachori

#### pani puri

#### samosa

#### vada pav

|
Amrrs/south-indian-foods
|
Amrrs
|
vit
| 11 | 13 |
transformers
| 0 |
image-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'pytorch', 'huggingpics']
| false | true | true | 560 |
# south-indian-foods
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### dosai

#### idiyappam

#### idli

#### puttu

#### vadai

|
Amrrs/wav2vec2-large-xlsr-53-tamil
|
Amrrs
|
wav2vec2
| 10 | 43 |
transformers
| 1 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['ta']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 3,368 |
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model over the preprocessed audio and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 82.94 %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing)
|
Anamika/autonlp-Feedback1-479512837
|
Anamika
|
xlm-roberta
| 9 | 3 |
transformers
| 0 |
text-classification
| true | false | false | null |
['unk']
|
['Anamika/autonlp-data-Feedback1']
|
123.88023112815048
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
autonlp
| false | true | true | 1,218 |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 479512837
- CO2 Emissions (in grams): 123.88023112815048
## Validation Metrics
- Loss: 0.6220805048942566
- Accuracy: 0.7961119332705503
- Macro F1: 0.7616345204219084
- Micro F1: 0.7961119332705503
- Weighted F1: 0.795387503907883
- Macro Precision: 0.782839455262034
- Micro Precision: 0.7961119332705503
- Weighted Precision: 0.7992606754484262
- Macro Recall: 0.7451485972167191
- Micro Recall: 0.7961119332705503
- Weighted Recall: 0.7961119332705503
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-Feedback1-479512837
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
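Continuing the snippet above, a sketch of turning the raw logits into a predicted label (the label names come from the model's own `id2label` mapping):
```
import torch

probabilities = torch.softmax(outputs.logits, dim=-1)
predicted_class_id = int(probabilities.argmax(dim=-1))
print(model.config.id2label[predicted_class_id], float(probabilities.max()))
```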
|
Anamika/autonlp-fa-473312409
|
Anamika
|
roberta
| 10 | 2 |
transformers
| 0 |
text-classification
| true | false | false | null |
['en']
|
['Anamika/autonlp-data-fa']
|
25.128735714898614
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
autonlp
| false | true | true | 1,198 |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 473312409
- CO2 Emissions (in grams): 25.128735714898614
## Validation Metrics
- Loss: 0.6010786890983582
- Accuracy: 0.7990650945370823
- Macro F1: 0.7429662929144928
- Micro F1: 0.7990650945370823
- Weighted F1: 0.7977660363770382
- Macro Precision: 0.7744390888231261
- Micro Precision: 0.7990650945370823
- Weighted Precision: 0.800444194278352
- Macro Recall: 0.7198278524814119
- Micro Recall: 0.7990650945370823
- Weighted Recall: 0.7990650945370823
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-fa-473312409
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
Andranik/TestQA2
|
Andranik
|
electra
| 7 | 3 |
transformers
| 0 |
question-answering
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 998 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra_large_discriminator_squad2_512
This model is a fine-tuned version of [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
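A minimal usage sketch, assuming the standard `question-answering` pipeline; the question and context are placeholders:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Andranik/TestQA2")
result = qa(
    question="Who wrote the report?",
    context="The report was written by the compliance team in March.",
)
print(result["answer"], result["score"])
```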
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
AndrewChar/model-QA-5-epoch-RU
|
AndrewChar
|
distilbert
| 8 | 217 |
transformers
| 5 |
question-answering
| false | true | false | null |
['ru']
|
['sberquad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,442 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# model-QA-5-epoch-RU
This model is a fine-tuned version of [AndrewChar/diplom-prod-epoch-4-datast-sber-QA](https://huggingface.co/AndrewChar/diplom-prod-epoch-4-datast-sber-QA) on the SberQuAD dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1991
- Validation Loss: 0.0
- Epoch: 5
## Model description
A model that answers questions based on a given context.
This is a graduation (diploma) project.
## Intended uses & limitations
The context must contain no more than 512 tokens.
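A minimal usage sketch with the TensorFlow weights; the Russian question and context below are placeholders:
```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering, pipeline

tokenizer = AutoTokenizer.from_pretrained("AndrewChar/model-QA-5-epoch-RU")
model = TFAutoModelForQuestionAnswering.from_pretrained("AndrewChar/model-QA-5-epoch-RU")
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Keep the context under 512 tokens, as noted above.
result = qa(
    question="Где проходила конференция?",
    context="Конференция проходила в Москве в декабре 2021 года.",
)
print(result["answer"])
```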
## Training and evaluation data
Dataset: SberQuAD
{'exact_match': 54.586, 'f1': 73.644}
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 2986, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1991 | | 5 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
AndrewMcDowell/wav2vec2-xls-r-1B-german
|
AndrewMcDowell
|
wav2vec2
| 38 | 1 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['de']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'de', 'hf-asr-leaderboard']
| true | true | true | 3,887 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1355
- Wer: 0.1532
## Model description
More information needed
## Intended uses & limitations
More information needed
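A minimal transcription sketch, assuming the `automatic-speech-recognition` pipeline and a 16 kHz German recording; `sample_de.wav` is a placeholder path:
```python
from transformers import pipeline

# "sample_de.wav" is a placeholder; the audio should be sampled at 16 kHz.
asr = pipeline("automatic-speech-recognition", model="AndrewMcDowell/wav2vec2-xls-r-1B-german")
print(asr("sample_de.wav")["text"])
```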
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 2.5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0826 | 0.07 | 1000 | 0.4637 | 0.4654 |
| 1.118 | 0.15 | 2000 | 0.2595 | 0.2687 |
| 1.1268 | 0.22 | 3000 | 0.2635 | 0.2661 |
| 1.0919 | 0.29 | 4000 | 0.2417 | 0.2566 |
| 1.1013 | 0.37 | 5000 | 0.2414 | 0.2567 |
| 1.0898 | 0.44 | 6000 | 0.2546 | 0.2731 |
| 1.0808 | 0.51 | 7000 | 0.2399 | 0.2535 |
| 1.0719 | 0.59 | 8000 | 0.2353 | 0.2528 |
| 1.0446 | 0.66 | 9000 | 0.2427 | 0.2545 |
| 1.0347 | 0.73 | 10000 | 0.2266 | 0.2402 |
| 1.0457 | 0.81 | 11000 | 0.2290 | 0.2448 |
| 1.0124 | 0.88 | 12000 | 0.2295 | 0.2448 |
| 1.025 | 0.95 | 13000 | 0.2138 | 0.2345 |
| 1.0107 | 1.03 | 14000 | 0.2108 | 0.2294 |
| 0.9758 | 1.1 | 15000 | 0.2019 | 0.2204 |
| 0.9547 | 1.17 | 16000 | 0.2000 | 0.2178 |
| 0.986 | 1.25 | 17000 | 0.2018 | 0.2200 |
| 0.9588 | 1.32 | 18000 | 0.1992 | 0.2138 |
| 0.9413 | 1.39 | 19000 | 0.1898 | 0.2049 |
| 0.9339 | 1.47 | 20000 | 0.1874 | 0.2056 |
| 0.9268 | 1.54 | 21000 | 0.1797 | 0.1976 |
| 0.9194 | 1.61 | 22000 | 0.1743 | 0.1905 |
| 0.8987 | 1.69 | 23000 | 0.1738 | 0.1932 |
| 0.8884 | 1.76 | 24000 | 0.1703 | 0.1873 |
| 0.8939 | 1.83 | 25000 | 0.1633 | 0.1831 |
| 0.8629 | 1.91 | 26000 | 0.1549 | 0.1750 |
| 0.8607 | 1.98 | 27000 | 0.1550 | 0.1738 |
| 0.8316 | 2.05 | 28000 | 0.1512 | 0.1709 |
| 0.8321 | 2.13 | 29000 | 0.1481 | 0.1657 |
| 0.825 | 2.2 | 30000 | 0.1446 | 0.1627 |
| 0.8115 | 2.27 | 31000 | 0.1396 | 0.1583 |
| 0.7959 | 2.35 | 32000 | 0.1389 | 0.1569 |
| 0.7835 | 2.42 | 33000 | 0.1362 | 0.1545 |
| 0.7959 | 2.49 | 34000 | 0.1355 | 0.1531 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1B-german --dataset mozilla-foundation/common_voice_8_0 --config de --split test --log_outputs
```
2. To evaluate on test dev data
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1B-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
|
AndrewMcDowell/wav2vec2-xls-r-1b-arabic
|
AndrewMcDowell
|
wav2vec2
| 23 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer']
| true | true | true | 3,447 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1373
- Wer: 0.8607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.2416 | 0.84 | 500 | 1.2867 | 0.8875 |
| 2.3089 | 1.67 | 1000 | 1.8336 | 0.9548 |
| 2.3614 | 2.51 | 1500 | 1.5937 | 0.9469 |
| 2.5234 | 3.35 | 2000 | 1.9765 | 0.9867 |
| 2.5373 | 4.19 | 2500 | 1.9062 | 0.9916 |
| 2.5703 | 5.03 | 3000 | 1.9772 | 0.9915 |
| 2.4656 | 5.86 | 3500 | 1.8083 | 0.9829 |
| 2.4339 | 6.7 | 4000 | 1.7548 | 0.9752 |
| 2.344 | 7.54 | 4500 | 1.6146 | 0.9638 |
| 2.2677 | 8.38 | 5000 | 1.5105 | 0.9499 |
| 2.2074 | 9.21 | 5500 | 1.4191 | 0.9357 |
| 2.3768 | 10.05 | 6000 | 1.6663 | 0.9665 |
| 2.3804 | 10.89 | 6500 | 1.6571 | 0.9720 |
| 2.3237 | 11.72 | 7000 | 1.6049 | 0.9637 |
| 2.317 | 12.56 | 7500 | 1.5875 | 0.9655 |
| 2.2988 | 13.4 | 8000 | 1.5357 | 0.9603 |
| 2.2906 | 14.24 | 8500 | 1.5637 | 0.9592 |
| 2.2848 | 15.08 | 9000 | 1.5326 | 0.9537 |
| 2.2381 | 15.91 | 9500 | 1.5631 | 0.9508 |
| 2.2072 | 16.75 | 10000 | 1.4565 | 0.9395 |
| 2.197 | 17.59 | 10500 | 1.4304 | 0.9406 |
| 2.198 | 18.43 | 11000 | 1.4230 | 0.9382 |
| 2.1668 | 19.26 | 11500 | 1.3998 | 0.9315 |
| 2.1498 | 20.1 | 12000 | 1.3920 | 0.9258 |
| 2.1244 | 20.94 | 12500 | 1.3584 | 0.9153 |
| 2.0953 | 21.78 | 13000 | 1.3274 | 0.9054 |
| 2.0762 | 22.61 | 13500 | 1.2933 | 0.9073 |
| 2.0587 | 23.45 | 14000 | 1.2516 | 0.8944 |
| 2.0363 | 24.29 | 14500 | 1.2214 | 0.8902 |
| 2.0302 | 25.13 | 15000 | 1.2087 | 0.8871 |
| 2.0071 | 25.96 | 15500 | 1.1953 | 0.8786 |
| 1.9882 | 26.8 | 16000 | 1.1738 | 0.8712 |
| 1.9772 | 27.64 | 16500 | 1.1647 | 0.8672 |
| 1.9585 | 28.48 | 17000 | 1.1459 | 0.8635 |
| 1.944 | 29.31 | 17500 | 1.1414 | 0.8616 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana
|
AndrewMcDowell
|
wav2vec2
| 35 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ja']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'ja', 'hf-asr-leaderboard']
| true | true | true | 2,139 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5500
- Wer: 1.0132
- Cer: 0.1609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.7019 | 12.65 | 1000 | 1.0510 | 0.9832 | 0.2589 |
| 1.6385 | 25.31 | 2000 | 0.6670 | 0.9915 | 0.1851 |
| 1.4344 | 37.97 | 3000 | 0.6183 | 1.0213 | 0.1797 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
2. To evaluate on test dev data
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset speech-recognition-community-v2/dev_data --config ja --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
|
AndrewMcDowell/wav2vec2-xls-r-300m-arabic
|
AndrewMcDowell
|
wav2vec2
| 25 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['ar', 'automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
| true | true | true | 2,717 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4502
- Wer: 0.4783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.7972 | 0.43 | 500 | 5.1401 | 1.0 |
| 3.3241 | 0.86 | 1000 | 3.3220 | 1.0 |
| 3.1432 | 1.29 | 1500 | 3.0806 | 0.9999 |
| 2.9297 | 1.72 | 2000 | 2.5678 | 1.0057 |
| 2.2593 | 2.14 | 2500 | 1.1068 | 0.8218 |
| 2.0504 | 2.57 | 3000 | 0.7878 | 0.7114 |
| 1.937 | 3.0 | 3500 | 0.6955 | 0.6450 |
| 1.8491 | 3.43 | 4000 | 0.6452 | 0.6304 |
| 1.803 | 3.86 | 4500 | 0.5961 | 0.6042 |
| 1.7545 | 4.29 | 5000 | 0.5550 | 0.5748 |
| 1.7045 | 4.72 | 5500 | 0.5374 | 0.5743 |
| 1.6733 | 5.15 | 6000 | 0.5337 | 0.5404 |
| 1.6761 | 5.57 | 6500 | 0.5054 | 0.5266 |
| 1.655 | 6.0 | 7000 | 0.4926 | 0.5243 |
| 1.6252 | 6.43 | 7500 | 0.4946 | 0.5183 |
| 1.6209 | 6.86 | 8000 | 0.4915 | 0.5194 |
| 1.5772 | 7.29 | 8500 | 0.4725 | 0.5104 |
| 1.5602 | 7.72 | 9000 | 0.4726 | 0.5097 |
| 1.5783 | 8.15 | 9500 | 0.4667 | 0.4956 |
| 1.5442 | 8.58 | 10000 | 0.4685 | 0.4937 |
| 1.5597 | 9.01 | 10500 | 0.4708 | 0.4957 |
| 1.5406 | 9.43 | 11000 | 0.4539 | 0.4810 |
| 1.5274 | 9.86 | 11500 | 0.4502 | 0.4783 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
AndrewMcDowell/wav2vec2-xls-r-300m-german-de
|
AndrewMcDowell
|
wav2vec2
| 28 | 8 |
transformers
| 2 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['de']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'de', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
| true | true | true | 6,772 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.
eval results:
WER: 0.20161578657865786
CER: 0.05062357805269733
-->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1768
- Wer: 0.2016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.7531 | 0.04 | 500 | 5.4564 | 1.0 |
| 2.9882 | 0.08 | 1000 | 3.0041 | 1.0 |
| 2.1953 | 0.13 | 1500 | 1.1723 | 0.7121 |
| 1.2406 | 0.17 | 2000 | 0.3656 | 0.3623 |
| 1.1294 | 0.21 | 2500 | 0.2843 | 0.2926 |
| 1.0731 | 0.25 | 3000 | 0.2554 | 0.2664 |
| 1.051 | 0.3 | 3500 | 0.2387 | 0.2535 |
| 1.0479 | 0.34 | 4000 | 0.2345 | 0.2512 |
| 1.0026 | 0.38 | 4500 | 0.2270 | 0.2452 |
| 0.9921 | 0.42 | 5000 | 0.2212 | 0.2353 |
| 0.9839 | 0.47 | 5500 | 0.2141 | 0.2330 |
| 0.9907 | 0.51 | 6000 | 0.2122 | 0.2334 |
| 0.9788 | 0.55 | 6500 | 0.2114 | 0.2270 |
| 0.9687 | 0.59 | 7000 | 0.2066 | 0.2323 |
| 0.9777 | 0.64 | 7500 | 0.2033 | 0.2237 |
| 0.9476 | 0.68 | 8000 | 0.2020 | 0.2194 |
| 0.9625 | 0.72 | 8500 | 0.1977 | 0.2191 |
| 0.9497 | 0.76 | 9000 | 0.1976 | 0.2175 |
| 0.9781 | 0.81 | 9500 | 0.1956 | 0.2159 |
| 0.9552 | 0.85 | 10000 | 0.1958 | 0.2191 |
| 0.9345 | 0.89 | 10500 | 0.1964 | 0.2158 |
| 0.9528 | 0.93 | 11000 | 0.1926 | 0.2154 |
| 0.9502 | 0.98 | 11500 | 0.1953 | 0.2149 |
| 0.9358 | 1.02 | 12000 | 0.1927 | 0.2167 |
| 0.941 | 1.06 | 12500 | 0.1901 | 0.2115 |
| 0.9287 | 1.1 | 13000 | 0.1936 | 0.2090 |
| 0.9491 | 1.15 | 13500 | 0.1900 | 0.2104 |
| 0.9478 | 1.19 | 14000 | 0.1931 | 0.2120 |
| 0.946 | 1.23 | 14500 | 0.1914 | 0.2134 |
| 0.9499 | 1.27 | 15000 | 0.1931 | 0.2173 |
| 0.9346 | 1.32 | 15500 | 0.1913 | 0.2105 |
| 0.9509 | 1.36 | 16000 | 0.1902 | 0.2137 |
| 0.9294 | 1.4 | 16500 | 0.1895 | 0.2086 |
| 0.9418 | 1.44 | 17000 | 0.1913 | 0.2183 |
| 0.9302 | 1.49 | 17500 | 0.1884 | 0.2114 |
| 0.9418 | 1.53 | 18000 | 0.1894 | 0.2108 |
| 0.9363 | 1.57 | 18500 | 0.1886 | 0.2132 |
| 0.9338 | 1.61 | 19000 | 0.1856 | 0.2078 |
| 0.9185 | 1.66 | 19500 | 0.1852 | 0.2056 |
| 0.9216 | 1.7 | 20000 | 0.1874 | 0.2095 |
| 0.9176 | 1.74 | 20500 | 0.1873 | 0.2078 |
| 0.9288 | 1.78 | 21000 | 0.1865 | 0.2097 |
| 0.9278 | 1.83 | 21500 | 0.1869 | 0.2100 |
| 0.9295 | 1.87 | 22000 | 0.1878 | 0.2095 |
| 0.9221 | 1.91 | 22500 | 0.1852 | 0.2121 |
| 0.924 | 1.95 | 23000 | 0.1855 | 0.2042 |
| 0.9104 | 2.0 | 23500 | 0.1858 | 0.2105 |
| 0.9284 | 2.04 | 24000 | 0.1850 | 0.2080 |
| 0.9162 | 2.08 | 24500 | 0.1839 | 0.2045 |
| 0.9111 | 2.12 | 25000 | 0.1838 | 0.2080 |
| 0.91 | 2.17 | 25500 | 0.1889 | 0.2106 |
| 0.9152 | 2.21 | 26000 | 0.1856 | 0.2026 |
| 0.9209 | 2.25 | 26500 | 0.1891 | 0.2133 |
| 0.9094 | 2.29 | 27000 | 0.1857 | 0.2089 |
| 0.9065 | 2.34 | 27500 | 0.1840 | 0.2052 |
| 0.9156 | 2.38 | 28000 | 0.1833 | 0.2062 |
| 0.8986 | 2.42 | 28500 | 0.1789 | 0.2001 |
| 0.9045 | 2.46 | 29000 | 0.1769 | 0.2022 |
| 0.9039 | 2.51 | 29500 | 0.1819 | 0.2073 |
| 0.9145 | 2.55 | 30000 | 0.1828 | 0.2063 |
| 0.9081 | 2.59 | 30500 | 0.1811 | 0.2049 |
| 0.9252 | 2.63 | 31000 | 0.1833 | 0.2086 |
| 0.8957 | 2.68 | 31500 | 0.1795 | 0.2083 |
| 0.891 | 2.72 | 32000 | 0.1809 | 0.2058 |
| 0.9023 | 2.76 | 32500 | 0.1812 | 0.2061 |
| 0.8918 | 2.8 | 33000 | 0.1775 | 0.1997 |
| 0.8852 | 2.85 | 33500 | 0.1790 | 0.1997 |
| 0.8928 | 2.89 | 34000 | 0.1767 | 0.2013 |
| 0.9079 | 2.93 | 34500 | 0.1735 | 0.1986 |
| 0.9032 | 2.97 | 35000 | 0.1793 | 0.2024 |
| 0.9018 | 3.02 | 35500 | 0.1778 | 0.2027 |
| 0.8846 | 3.06 | 36000 | 0.1776 | 0.2046 |
| 0.8848 | 3.1 | 36500 | 0.1812 | 0.2064 |
| 0.9062 | 3.14 | 37000 | 0.1800 | 0.2018 |
| 0.9011 | 3.19 | 37500 | 0.1783 | 0.2049 |
| 0.8996 | 3.23 | 38000 | 0.1810 | 0.2036 |
| 0.893 | 3.27 | 38500 | 0.1805 | 0.2056 |
| 0.897 | 3.31 | 39000 | 0.1773 | 0.2035 |
| 0.8992 | 3.36 | 39500 | 0.1804 | 0.2054 |
| 0.8987 | 3.4 | 40000 | 0.1768 | 0.2016 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset mozilla-foundation/common_voice_7_0 --config de --split test --log_outputs
```
2. To evaluate on test dev data
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
|