---
base_model: UBC-NLP/AraT5v2-base-1024
library_name: peft
metrics:
- bleu
- rouge
tags:
- generated_from_trainer
model-index:
- name: finetune-t5-base-on-opus100-Ar2En-with-lora
  results: []
---

[Visualize in Weights & Biases](https://wandb.ai/FinalProject_/T5/runs/s0b8or22)

# finetune-t5-base-on-opus100-Ar2En-with-lora

This model is a LoRA fine-tuned version (via PEFT) of [UBC-NLP/AraT5v2-base-1024](https://huggingface.co/UBC-NLP/AraT5v2-base-1024) for Arabic-to-English translation. The model name indicates the OPUS-100 Ar→En data, though the card does not record the dataset explicitly.
It achieves the following results on the evaluation set:
- Loss: 3.6399
- Bleu: 4.8897
- Rouge: 0.2479
- Gen Len: 10.622

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Rouge  | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|
| 15.1403       | 1.0   | 700  | 5.2590          | 1.1467 | 0.1085 | 11.17   |
| 6.6309        | 2.0   | 1400 | 4.6547          | 3.1579 | 0.2258 | 10.297  |
| 5.9949        | 3.0   | 2100 | 4.0112          | 3.4311 | 0.2226 | 10.095  |
| 5.2808        | 4.0   | 2800 | 3.8936          | 3.7676 | 0.2331 | 10.5535 |
| 5.0299        | 5.0   | 3500 | 3.8082          | 3.8261 | 0.2272 | 10.4065 |
| 4.893         | 6.0   | 4200 | 3.7392          | 4.2894 | 0.235  | 10.426  |
| 4.7854        | 7.0   | 4900 | 3.6929          | 4.8132 | 0.2422 | 10.4765 |
| 4.7114        | 8.0   | 5600 | 3.6592          | 4.9749 | 0.2477 | 10.6395 |
| 4.6835        | 9.0   | 6300 | 3.6455          | 4.9315 | 0.2475 | 10.584  |
| 4.6606        | 10.0  | 7000 | 3.6399          | 4.8897 | 0.2479 | 10.622  |

### Framework versions

- PEFT 0.12.0
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
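
## How to use

A minimal inference sketch: the LoRA adapter is loaded on top of the base model with PEFT. The adapter repo id below is inferred from the model name and is an assumption; adjust it to the actual Hub path.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "UBC-NLP/AraT5v2-base-1024"
adapter_id = "finetune-t5-base-on-opus100-Ar2En-with-lora"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter
model.eval()

# Translate an Arabic sentence ("Hello, world") to English.
inputs = tokenizer("مرحبا بالعالم", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```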
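
## Reproducing the training setup

A sketch of a training configuration matching the hyperparameters listed above. The LoRA configuration (rank, alpha, dropout, target modules) and the data pipeline are not recorded in this card, so those values are illustrative assumptions.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments

base_model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/AraT5v2-base-1024")

# Assumed LoRA settings; the actual values used are not documented in this card.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=16, lora_dropout=0.05
)
model = get_peft_model(base_model, lora_config)

# These fields mirror the "Training hyperparameters" section above.
args = Seq2SeqTrainingArguments(
    output_dir="finetune-t5-base-on-opus100-Ar2En-with-lora",
    learning_rate=2e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed precision
    predict_with_generate=True,
)
# A Seq2SeqTrainer would then be constructed with `model`, `args`, the tokenized
# train/eval datasets, and a BLEU/ROUGE compute_metrics function (all omitted here).
```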