## Model and data descriptions

This is a wav2vec 2.0 base model pre-trained on 243 hours of Tamasheq speech from the corpus presented in [Boito et al., 2022](https://arxiv.org/abs/2201.05051).

**This is not an ASR fine-tuned model. There is no vocabulary file.**
## Intended uses & limitations

Pretrained wav2vec 2.0 models are distributed under the Apache 2.0 license, so they can be reused broadly, subject only to the terms of that license. Since this checkpoint is not fine-tuned for ASR, it is intended for extracting speech representations or as a starting point for downstream fine-tuning; a minimal usage sketch follows below.
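
As a rough illustration, the snippet below loads the checkpoint with the Hugging Face `transformers` library and extracts frame-level representations. It is a minimal sketch: the repository id is a placeholder (not necessarily this model's actual Hub name) and the feature-extractor settings are library defaults, so both may need adjusting.

```python
# A rough usage sketch; the repository id below is a placeholder, not
# necessarily the official name of this checkpoint on the Hugging Face Hub.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "your-org/wav2vec2-base-tamasheq"  # placeholder, replace with the real id

# Library-default wav2vec 2.0 front end (expects 16 kHz mono audio); if the
# repository ships a preprocessor_config.json, prefer
# Wav2Vec2FeatureExtractor.from_pretrained(model_id) instead.
feature_extractor = Wav2Vec2FeatureExtractor()
model = Wav2Vec2Model.from_pretrained(model_id)
model.eval()

# One second of 16 kHz audio as a stand-in for real Tamasheq speech.
waveform = np.zeros(16000, dtype=np.float32)

inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Frame-level contextual representations, shape (batch, frames, hidden_size).
print(outputs.last_hidden_state.shape)
```

Producing transcriptions would still require adding a vocabulary and fine-tuning with a CTC head (e.g. `Wav2Vec2ForCTC`), since this checkpoint ships without either.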
## Referencing our IWSLT models

```
@article{boito2022trac,
  title={ON-TRAC Consortium Systems for the IWSLT 2022 Dialect and Low-resource Speech Translation Tasks},
  author={Boito, Marcely Zanon and Ortega, John and Riguidel, Hugo and Laurent, Antoine and Barrault, Lo{\"\i}c and Bougares, Fethi and Chaabani, Firas and Nguyen, Ha and Barbier, Florentin and Gahbiche, Souhir and others},
  journal={IWSLT},
  year={2022}
}
```