# results

This model is a fine-tuned version of facebook/bart-large-cnn on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how ROUGE scores like these are typically computed follows the list):
- Loss: 7.6783
- Rouge1: 22.6282
- Rouge2: 2.7613
- RougeL: 15.7792
- Gen Len: 66.1839
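ROUGE numbers of this kind are typically produced with the Hugging Face `evaluate` library. A minimal, self-contained sketch; the prediction and reference strings below are placeholders, not outputs of this model:

```python
# Minimal sketch: computing ROUGE-1/2/L with the `evaluate` library.
# The strings below are placeholders, not actual model outputs.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the cat sat on the mat"],      # hypothetical generated summary
    references=["a cat was sitting on the mat"], # hypothetical reference summary
)
print(scores)  # e.g. {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```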
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
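For readers who want to reproduce the run, these settings map onto `transformers.Seq2SeqTrainingArguments` roughly as follows. This is a sketch under stated assumptions: `output_dir` is invented, and details such as evaluation and saving cadence are not documented in this card.

```python
# Sketch only: the hyperparameters above expressed as
# transformers.Seq2SeqTrainingArguments. `output_dir` is an assumption.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="results",         # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",          # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                    # Native AMP mixed precision
    label_smoothing_factor=0.1,
)
```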
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rouge1  | Rouge2 | RougeL  | Gen Len  |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:-------:|:--------:|
| 7.8707        | 0.5754 | 500  | 7.7020          | 22.5587 | 2.5824 | 15.5739 | 64.7494  |
| 7.7352        | 1.1507 | 1000 | 7.4834          | 22.4958 | 2.3149 | 15.6863 | 69.8138  |
| 7.2829        | 1.7261 | 1500 | 6.9061          | 20.4987 | 2.3443 | 15.3680 | 98.8828  |
| 7.0328        | 2.3015 | 2000 | 6.9069          | 20.6740 | 2.2923 | 15.6237 | 108.1402 |
| 6.9828        | 2.8769 | 2500 | 6.8401          | 21.0086 | 2.2733 | 15.7925 | 106.2667 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
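Because PEFT appears in the framework list, this repository presumably hosts a PEFT adapter (e.g. LoRA) on top of facebook/bart-large-cnn rather than full model weights. A minimal loading sketch under that assumption, using the repo id pendar02/results from this card:

```python
# Minimal sketch, assuming this repo is a PEFT adapter for
# facebook/bart-large-cnn; adjust the repo id if the adapter lives elsewhere.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
model = PeftModel.from_pretrained(base, "pendar02/results")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer("Text to summarize ...", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```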