|
--- |
|
language: es |
|
tags: |
|
- T5 |
|
- Seq2Seq |
|
- EncoderDecoder
|
- Spanish |
|
datasets: |
|
- large_spanish_corpus |
|
widget:

- text: "Érase una vez un"
|
|
|
license: mit |
|
|
|
--- |
|
# Spanish T5 (small) trained on [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus)
|
|
|
This is a Spanish **T5** model (small architecture) trained from scratch on the [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus), also known as BETO's corpus, with [Flax](https://github.com/google/flax).
|
|
|
This model is part of the [Flax/JAX Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) with TPU usage sponsored by Google.
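T5 models like this one are pretrained with a span-corruption objective: contiguous spans of input tokens are replaced by sentinel tokens, and the decoder learns to reconstruct the dropped spans. A minimal sketch of how the input/target pairs are formed (illustrative only, not the actual training code; `span_corrupt` and the hard-coded spans are hypothetical):

```python
# T5 sentinel tokens used to mark corrupted spans
SENTINELS = ["<extra_id_0>", "<extra_id_1>", "<extra_id_2>"]

def span_corrupt(tokens, spans):
    """Replace each (start, length) span with a sentinel token.

    Returns (input_tokens, target_tokens): the input keeps the
    unmasked context, the target lists each sentinel followed by
    the tokens it replaced.
    """
    inp, tgt = [], []
    prev = 0
    for i, (start, length) in enumerate(spans):
        inp.extend(tokens[prev:start])   # context before the span
        inp.append(SENTINELS[i])         # sentinel stands in for the span
        tgt.append(SENTINELS[i])
        tgt.extend(tokens[start:start + length])  # span goes to the target
        prev = start + length
    inp.extend(tokens[prev:])            # trailing context
    return inp, tgt

tokens = "érase una vez un gato con botas".split()
inp, tgt = span_corrupt(tokens, [(1, 2), (4, 1)])
print(" ".join(inp))  # érase <extra_id_0> un <extra_id_1> con botas
print(" ".join(tgt))  # <extra_id_0> una vez <extra_id_1> gato
```

During pretraining the encoder sees the corrupted input and the decoder is trained to emit the target sequence; the reported evaluation accuracy below is measured on this objective.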
|
## Dataset |
|
The dataset is about 20 GB. 95% of the data was used for training and the remaining 5% for validation.
|
|
|
## [Metrics](https://huggingface.co/flax-community/spanish-t5-small/tensorboard) (on evaluation dataset) |
|
|
|
- Accuracy: 0.675 |
|
|
|
|
|
## Team members |
|
- Manuel Romero ([mrm8488](https://huggingface.co/mrm8488)) |
|
- María Grandury ([mariagrandury](https://huggingface.co/mariagrandury)) |
|
|
|
## Citation |
|
If you want to cite this model, you can use the following:
|
|
|
```bibtex |
|
@misc{mromero2021spanish-t5-small, |
|
title={Spanish T5 (small) by Manuel Romero}, |
|
author={Romero, Manuel}, |
|
publisher={Hugging Face}, |
|
journal={Hugging Face Hub}, |
|
howpublished={\url{https://huggingface.co/flax-community/spanish-t5-small}}, |
|
year={2021} |
|
} |
|
``` |