indosum-seq_bn-rf64-0

This model is a fine-tuned version of LazarusNLP/IndoNanoT5-base on an unknown dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the list):

  • Loss: 0.5046
  • Rouge1: 72.7451
  • Rouge2: 65.6426
  • Rougel: 69.7965
  • Rougelsum: 71.8443
  • Gen Len: 103.1187
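
A minimal inference sketch, assuming the checkpoint is published under the repo id apwic/indosum-seq_bn-rf64-0 and loads through the standard Transformers seq2seq interface (the card does not confirm its library, so this is an assumption); the input article is a placeholder:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed repo id; adjust if the checkpoint is hosted elsewhere or is an
# adapter-only artifact that requires a different loading path.
model_id = "apwic/indosum-seq_bn-rf64-0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
model.eval()

article = "..."  # placeholder: an Indonesian news article to summarize

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```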

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 16
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5.0
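
A sketch of how these settings map onto Seq2SeqTrainingArguments; the numeric values are taken from the list above, while output_dir and predict_with_generate are illustrative assumptions not stated on the card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="indosum-seq_bn-rf64-0",  # placeholder output directory
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    # Adam with the listed betas and epsilon.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    predict_with_generate=True,  # assumption: needed to compute ROUGE during eval
)
```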

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 0.8386        | 1.0   | 892  | 0.5658          | 68.0586 | 60.6185 | 65.0879 | 67.0846   | 102.556  |
| 0.646         | 2.0   | 1784 | 0.5346          | 69.6096 | 62.3885 | 66.6327 | 68.7343   | 107.088  |
| 0.6031        | 3.0   | 2676 | 0.5019          | 70.498  | 63.0668 | 67.3204 | 69.5075   | 101.6693 |
| 0.5753        | 4.0   | 3568 | 0.5093          | 71.6759 | 64.4776 | 68.7095 | 70.7692   | 104.52   |
| 0.5551        | 5.0   | 4460 | 0.5046          | 72.0617 | 64.9137 | 69.0991 | 71.1205   | 102.5733 |
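
Per-epoch ROUGE and Gen Len values like these are typically produced by a compute_metrics callback during evaluation. A hedged sketch of such a callback, assuming the evaluate library and the base model's tokenizer (the card does not include the actual evaluation code):

```python
import evaluate
import numpy as np
from transformers import AutoTokenizer

# Assumption: the base model's tokenizer is used for decoding.
tokenizer = AutoTokenizer.from_pretrained("LazarusNLP/IndoNanoT5-base")
rouge = evaluate.load("rouge")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Labels use -100 for ignored positions; restore pad tokens before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    scores = rouge.compute(predictions=decoded_preds, references=decoded_labels)
    result = {k: v * 100 for k, v in scores.items()}  # report as percentages
    # Gen Len: mean number of non-pad tokens in the generated sequences.
    result["gen_len"] = np.mean(
        [np.count_nonzero(p != tokenizer.pad_token_id) for p in preds]
    )
    return {k: round(v, 4) for k, v in result.items()}
```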

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1