---
library_name: transformers
license: apache-2.0
base_model: google/mt5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
  results: []
---

# mt5-small-finetuned-amazon-en-es

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2517
- Rouge1: 14.7495
- Rouge2: 7.2226
- Rougel: 14.0109
- Rougelsum: 14.1583

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal inference sketch is included under "Example usage" at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` reconstruction of these values appears at the end of this card):
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 0.3426        | 1.0   | 838  | 0.2712          | 12.6317 | 6.3377 | 12.1288 | 12.2498   |
| 0.3091        | 2.0   | 1676 | 0.2616          | 12.6631 | 6.1716 | 12.439  | 12.5201   |
| 0.2878        | 3.0   | 2514 | 0.2570          | 14.477  | 7.5074 | 13.7856 | 13.9647   |
| 0.2766        | 4.0   | 3352 | 0.2551          | 14.132  | 7.2241 | 13.6744 | 13.831    |
| 0.2686        | 5.0   | 4190 | 0.2523          | 14.324  | 6.4759 | 13.7568 | 13.8855   |
| 0.2631        | 6.0   | 5028 | 0.2522          | 15.5031 | 7.5588 | 14.7522 | 14.9207   |
| 0.2594        | 7.0   | 5866 | 0.2513          | 15.0739 | 7.4589 | 14.2674 | 14.4301   |
| 0.2561        | 8.0   | 6704 | 0.2517          | 14.7495 | 7.2226 | 14.0109 | 14.1583   |

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.21.0
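
## Example usage

Because the usage sections above are still placeholders, the following is a minimal inference sketch rather than an official guide. It assumes the checkpoint is available on the Hub under a repo id like `your-username/mt5-small-finetuned-amazon-en-es` (a placeholder; substitute the actual id) and uses only the standard `transformers` summarization pipeline:

```python
from transformers import pipeline

# Hypothetical Hub id; replace with the actual repository id of this checkpoint.
model_id = "your-username/mt5-small-finetuned-amazon-en-es"

summarizer = pipeline("summarization", model=model_id)

# Example input text (illustrative only, not from the training data).
text = (
    "I bought this notebook for journaling. The paper quality is decent, "
    "but the binding started to fall apart after a week of daily use."
)

# max_length / min_length are illustrative values, not taken from this card.
print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])
```

Since the base model is mT5, the same pipeline call should work for both English and Spanish inputs, as the model name suggests.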
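
## Reproducing the training setup

The hyperparameters listed above map onto `Seq2SeqTrainingArguments` roughly as follows. This is a reconstruction from the values reported in this card, not the original training script; the `output_dir`, evaluation strategy, and `predict_with_generate` settings are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned-amazon-en-es",  # assumed output directory
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    # AdamW defaults already match the card: betas=(0.9, 0.999), eps=1e-8.
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=8,
    eval_strategy="epoch",       # assumed: the results table reports per-epoch eval
    predict_with_generate=True,  # assumed: needed to compute ROUGE during eval
)
```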
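
The Rouge1/Rouge2/Rougel/Rougelsum columns correspond to the `rouge1`/`rouge2`/`rougeL`/`rougeLsum` F-measures scaled to 0-100. A sketch of how such numbers are typically computed with the `evaluate` library (assumed here, not confirmed by this card; requires the `rouge_score` package):

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy prediction/reference pair for illustration only.
predictions = ["the binding fell apart after a week"]
references = ["binding falls apart quickly"]

# compute() returns fractions in [0, 1]; multiply by 100 to match the table.
scores = rouge.compute(predictions=predictions, references=references)
print({k: round(v * 100, 4) for k, v in scores.items()})
```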