---
license: mit
tags:
  - generated_from_trainer
metrics:
  - rouge
model-index:
  - name: mbart-large-50-pluska-token-sum
    results: []
---

# mbart-large-50-pluska-token-sum

This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 4.2944
- Rouge1: 16.8143
- Rouge2: 5.2102
- Rougel: 14.3454
- Rougelsum: 14.7355
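The snippet below is a minimal inference sketch, not part of the original card: the hub id `kiviki/mbart-large-50-pluska-token-sum` is inferred from the model name and commit author and may differ, and the generation settings are illustrative defaults rather than tuned values.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed hub id; replace with the actual repository id if it differs.
model_id = "kiviki/mbart-large-50-pluska-token-sum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # text to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```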

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
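A minimal sketch reconstructing these values as `Seq2SeqTrainingArguments`; the output directory, evaluation strategy, and `predict_with_generate` flag are assumptions, since the card lists only the hyperparameters. The Adam betas and epsilon above are the Transformers defaults, so they need not be set explicitly.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-50-pluska-token-sum",  # assumed
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    lr_scheduler_type="linear",
    num_train_epochs=8,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
    evaluation_strategy="epoch",    # assumption: matches the per-epoch results table
    predict_with_generate=True,     # assumption: needed to compute ROUGE at eval time
)
```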

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| No log        | 1.0   | 144  | 3.7149          | 17.1082 | 5.3773 | 14.6384 | 14.9023   |
| No log        | 2.0   | 288  | 3.4301          | 17.6442 | 5.7235 | 14.9728 | 15.3384   |
| No log        | 3.0   | 432  | 3.4951          | 17.491  | 5.5463 | 14.8284 | 15.2035   |
| 2.9135        | 4.0   | 576  | 3.6706          | 17.1471 | 5.3419 | 14.6302 | 14.9667   |
| 2.9135        | 5.0   | 720  | 3.9122          | 16.6979 | 5.1895 | 14.2909 | 14.6367   |
| 2.9135        | 6.0   | 864  | 4.0815          | 16.5961 | 5.1915 | 14.2459 | 14.5916   |
| 2.9135        | 7.0   | 1008 | 4.2448          | 16.7793 | 5.2088 | 14.3515 | 14.7136   |
| 1.0097        | 8.0   | 1152 | 4.2944          | 16.8143 | 5.2102 | 14.3454 | 14.7355   |
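The ROUGE values above appear to be on the 0-100 scale used by the Transformers example scripts. A hedged sketch of how such scores are typically computed with the `evaluate` library; the prediction and reference strings are placeholders, not data from this run:

```python
import evaluate

# Placeholder strings; real evaluation would use model outputs and gold summaries.
predictions = ["a generated summary"]
references = ["a reference summary"]

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Scale to 0-100 to match the card's reporting convention.
print({k: round(v * 100, 4) for k, v in scores.items()})
```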

### Framework versions

- Transformers 4.27.0
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.13.2