This is a smaller version of the google/mt5-base model, with only the Spanish embeddings retained.
- The original model has 582M parameters, 237M of which are input and output embeddings.
- After shrinking the sentencepiece vocabulary from 250K to 25K (the top 25K Spanish tokens), the model was reduced to 237M parameters, and its size dropped from 2.2GB to 0.9GB, 42% of the original.
Citing & Authors
- Datasets : cleaned corpora
- Model : google/mt5-base
- Reference: cointegrated/rut5-base