---
license: mit
base_model: gpt2
tags:
  - generated_from_trainer
model-index:
  - name: xsum-gpt2-long
    results: []
---

# xsum-gpt2-long

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 3.0751
- Ppl: 22.3764
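
This card does not document an inference setup, so here is a minimal usage sketch. The repo id `qqplot23/xsum-gpt2-long` is inferred from the card title and is an assumption, as is the sample prompt.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "qqplot23/xsum-gpt2-long"  # assumed repo id, inferred from this card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.eval()

prompt = "The government announced today that"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```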

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction in code follows the list):

- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 22554
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15
- mixed_precision_training: Native AMP
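
The original training script is not included with this card. The following is a hedged reconstruction of the configuration above using the standard `transformers` `TrainingArguments`; `output_dir` is a placeholder, and the per-device batch sizes combine with 4 GPUs and 4 accumulation steps to give the effective sizes listed (2 × 4 × 4 = 32 for training, 2 × 4 = 8 for eval).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xsum-gpt2-long",     # placeholder, not stated in the card
    learning_rate=5e-4,
    per_device_train_batch_size=2,   # x 4 GPUs x 4 accum steps = 32 effective
    per_device_eval_batch_size=2,    # x 4 GPUs = 8 effective
    gradient_accumulation_steps=4,
    seed=22554,
    lr_scheduler_type="cosine",
    warmup_steps=2000,
    num_train_epochs=15,
    fp16=True,                       # "Native AMP" mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```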

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Ppl     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 3.662         | 2.5   | 4000  | 3.5005          | 34.2753 |
| 3.3365        | 4.99  | 8000  | 3.2677          | 27.1321 |
| 3.2005        | 7.49  | 12000 | 3.1715          | 24.6352 |
| 3.1102        | 9.98  | 16000 | 3.1144          | 23.2719 |
| 3.0517        | 12.48 | 20000 | 3.0830          | 22.5529 |
| 3.0267        | 14.97 | 24000 | 3.0751          | 22.3764 |
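
The reported Ppl tracks exp(validation loss) closely but not exactly (e.g. exp(3.0751) ≈ 21.7 vs. the reported 22.3764), so the card's perplexity was likely averaged with a slightly different token weighting. A hedged sketch of the conventional computation, with an assumed repo id and placeholder text:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "qqplot23/xsum-gpt2-long"  # assumed repo id, inferred from this card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.eval()

text = "A placeholder evaluation passage."  # the eval dataset is not documented
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])  # mean cross-entropy over tokens
print(f"loss={out.loss.item():.4f}  ppl={math.exp(out.loss.item()):.4f}")
```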

### Framework versions

- Transformers 4.35.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1