---
language:
  - id
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
  - generated_from_trainer
datasets:
  - id_liputan6
metrics:
  - rouge
model-index:
  - name: liputan6-unipelt
    results:
      - task:
          name: Summarization
          type: summarization
        dataset:
          name: id_liputan6 canonical
          type: id_liputan6
          config: canonical
          split: validation
          args: canonical
        metrics:
          - name: Rouge1
            type: rouge
            value: 1.8031
---

# liputan6-unipelt

This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on the id_liputan6 canonical dataset. It achieves the following results on the evaluation set:

- Loss: 2.5645
- Rouge1: 1.8031
- Rouge2: 0.4028
- Rougel: 1.5585
- Rougelsum: 1.6132
- Gen Len: 127.0
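
For quick experimentation, a minimal inference sketch is shown below. It assumes the published weights load as a standard seq2seq checkpoint under the repo id `apwic/liputan6-unipelt` (an assumption, not confirmed by this card); since UniPELT is a parameter-efficient fine-tuning method, the checkpoint may instead need the adapter/PEFT tooling used during training.

```python
# Hedged sketch: assumes the checkpoint loads as a plain seq2seq model.
# The repo id "apwic/liputan6-unipelt" is an assumption, not confirmed here.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "apwic/liputan6-unipelt"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Liputan6.com, Jakarta: ..."  # an Indonesian news article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```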

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
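
As a starting point, the evaluation split referenced in the metadata can be loaded with the `datasets` library. Note that some versions of the `id_liputan6` loading script require the raw data to be downloaded manually and passed via `data_dir`; the path below is a hypothetical placeholder.

```python
# Sketch: loading the canonical config of id_liputan6.
# Depending on the dataset version, a manual download plus data_dir may be required.
from datasets import load_dataset

dataset = load_dataset(
    "id_liputan6",
    "canonical",
    data_dir="path/to/liputan6_data",  # hypothetical local path
)
print(dataset["validation"][0]["clean_article"][:200])
```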

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
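
These settings map directly onto `Seq2SeqTrainingArguments` from `transformers`; a minimal sketch is below. The output directory and evaluation strategy are illustrative assumptions, and the Adam betas/epsilon match the library defaults, so they need no explicit flags.

```python
# Sketch: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="liputan6-unipelt",   # illustrative output path
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    predict_with_generate=True,      # assumption: needed to compute ROUGE during eval
    evaluation_strategy="epoch",     # assumption: per-epoch eval, matching the table below
)
```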

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.9747        | 1.0   | 63   | 3.1043          | 3.9543 | 1.0191 | 3.7375 | 3.7922    | 127.0   |
| 3.0262        | 2.0   | 126  | 2.7314          | 5.0276 | 1.3105 | 4.1292 | 4.3574    | 127.0   |
| 2.6214        | 3.0   | 189  | 2.5645          | 5.2587 | 1.2673 | 3.8487 | 4.3728    | 127.0   |
| 2.3496        | 4.0   | 252  | 2.4158          | 4.4309 | 0.9142 | 3.2152 | 3.5296    | 127.0   |
| 2.1749        | 5.0   | 315  | 2.3672          | 5.0669 | 1.0704 | 3.6335 | 4.1011    | 127.0   |
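
For reference, ROUGE scores like those above can be computed with the `evaluate` library; the sketch below assumes lists of generated and reference summaries are already available, and the summary strings are hypothetical.

```python
# Sketch: computing ROUGE with the evaluate library. Scores are returned as
# fractions in [0, 1]; multiply by 100 when comparing against percentage-style reports.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["ringkasan yang dihasilkan model"]  # hypothetical generated summaries
references = ["ringkasan rujukan dari dataset"]    # hypothetical gold summaries
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```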

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1