---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-base-spanish-yoremnokki
  results: []
---

# t5-base-spanish-yoremnokki

This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7231
- Bleu: 13.837
- Gen Len: 14.1189

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|
| 3.5123        | 0.9994 | 846  | 2.3345          | 0.2307  | 14.7029 |
| 2.4145        | 2.0    | 1693 | 2.0424          | 1.9545  | 14.144  |
| 2.1669        | 2.9994 | 2539 | 1.8778          | 9.3899  | 14.1364 |
| 2.0852        | 4.0    | 3386 | 1.7938          | 13.1303 | 14.0983 |
| 1.9892        | 4.9994 | 4232 | 1.7520          | 13.5863 | 14.1249 |
| 1.9364        | 6.0    | 5079 | 1.7295          | 13.7623 | 14.1375 |
| 1.9286        | 6.9959 | 5922 | 1.7231          | 13.837  | 14.1189 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
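The model name suggests a Spanish-to-Yoremnokki translation model, though the card does not confirm this. A minimal inference sketch using the standard `transformers` seq2seq APIs follows; the repo id and the raw-text input format (no T5 task prefix) are assumptions, so check the published checkpoint for the expected prompt format. Running it requires downloading the actual model weights.

```python
# Hedged inference sketch (assumes the checkpoint is published under this
# repo id and accepts raw Spanish text without a task prefix).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "t5-base-spanish-yoremnokki"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Tokenize a Spanish sentence and generate the translation.
inputs = tokenizer("Buenos días", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Generation length here is capped near the reported mean generation length (~14 tokens) with headroom; tune `max_new_tokens` for longer inputs.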
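The reported `total_train_batch_size` of 32 follows from the per-device batch size multiplied by the gradient accumulation steps; a minimal check of that arithmetic, assuming single-device training (the card does not state the device count):

```python
# Effective (total) train batch size under gradient accumulation:
# per-device batch size x accumulation steps x number of devices.
per_device_batch_size = 16
gradient_accumulation_steps = 2
num_devices = 1  # assumption: single-device training

total_train_batch_size = (
    per_device_batch_size * gradient_accumulation_steps * num_devices
)
print(total_train_batch_size)  # 32, matching the reported total_train_batch_size
```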