
liputan6-seq_bn-rf64

This model is a fine-tuned version of LazarusNLP/IndoNanoT5-base on the id_liputan6 canonical dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2748
  • ROUGE-1: 43.4905
  • ROUGE-2: 34.6942
  • ROUGE-L: 39.8342
  • ROUGE-Lsum: 42.0699
  • Gen Len: 62.292

Model description

This checkpoint adapts LazarusNLP/IndoNanoT5-base to abstractive summarization of Indonesian news. The name suggests a sequential bottleneck (seq_bn) adapter with reduction factor 64, but the card does not document the training setup in detail.

Intended uses & limitations

The model is intended for generating abstractive summaries of Indonesian news articles like those in Liputan6. Its limitations (behavior on other domains, factual consistency of generated summaries) are not documented. A hedged usage sketch follows.
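This is a minimal inference sketch, assuming the checkpoint loads as a standard seq2seq model through transformers; since the hosting page could not determine the library, and the seq_bn name hints at an adapter-style checkpoint, loading may instead require the adapters library on top of the base model.

```python
# Hedged usage sketch: assumes the checkpoint loads as a plain seq2seq model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "apwic/liputan6-seq_bn-rf64"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # an Indonesian news article, e.g. from Liputan6
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```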

Training and evaluation data

The model was trained and evaluated on the canonical subset of id_liputan6, an Indonesian news summarization corpus built from Liputan6.com articles; a loading sketch follows.
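This is a sketch of loading the dataset with the datasets library; id_liputan6 is a manual-download dataset, so the data_dir path below is a placeholder, not a value from the card.

```python
# Hedged sketch: id_liputan6 requires the raw Liputan6 data to be obtained
# separately; "path/to/liputan6_data" is a placeholder for that local copy.
from datasets import load_dataset

dataset = load_dataset(
    "id_liputan6",
    "canonical",
    data_dir="path/to/liputan6_data",
    trust_remote_code=True,  # required for script-based datasets in recent versions
)
print(dataset["train"][0]["clean_article"][:200])  # article text
print(dataset["train"][0]["clean_summary"])        # reference summary
```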

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto Seq2SeqTrainingArguments follows the list):

  • learning_rate: 0.001
  • train_batch_size: 16
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5.0
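A hedged sketch of how the list above maps onto transformers.Seq2SeqTrainingArguments; the output directory, evaluation strategy, and generation flag are assumptions, since the actual training script is not provided.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="liputan6-seq_bn-rf64",  # assumed name
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=5.0,
    lr_scheduler_type="linear",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults
    evaluation_strategy="epoch",    # assumed from the per-epoch results below
    predict_with_generate=True,     # assumed, since ROUGE/Gen Len are reported
)
```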

Training results

Training Loss  Epoch  Step  Validation Loss  ROUGE-1  ROUGE-2  ROUGE-L  ROUGE-Lsum  Gen Len
1.1116         1.0    63    0.3953           39.3374  31.1269  36.447   37.9075     49.406
0.5774         2.0    126   0.3309           43.6615  34.9524  40.0101  42.2478     55.7
0.4729         3.0    189   0.2864           41.4092  32.5453  37.9684  39.9099     52.999
0.4209         4.0    252   0.2845           43.3834  34.6436  39.8104  41.9569     56.612
0.3931         5.0    315   0.2748           43.7351  34.936   40.0408  42.3437     58.611
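For reference, a minimal sketch of computing scores on this scale with the evaluate library (requires the rouge_score package); the x100 scaling and the single example pair are illustrative assumptions, not the card's actual evaluation code.

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["ringkasan yang dihasilkan model"],  # hypothetical model output
    references=["ringkasan acuan dari Liputan6"],     # hypothetical gold summary
)
# evaluate returns fractions in [0, 1]; the card reports them scaled by 100
print({name: round(value * 100, 4) for name, value in scores.items()})
```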

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1