DialoGPT-large-faqs-block-size-128-bs-16-lr-5e-5

This model is a fine-tuned version of microsoft/DialoGPT-large on an unspecified dataset (the repository name suggests an FAQ dataset). It achieves the following results on the evaluation set:

  • Loss: 3.3741

Model description

More information needed

Intended uses & limitations

More information needed
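
Although the card does not document usage, DialoGPT checkpoints are conventionally queried by appending the EOS token to each user turn and generating a continuation. The following is a minimal sketch under that assumption; the prompt is an invented example and the generation settings are not from the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DrishtiSharma/DialoGPT-large-faqs-block-size-128-bs-16-lr-5e-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# DialoGPT convention: each turn ends with the EOS token.
prompt = "How do I reset my password?"  # hypothetical FAQ-style question
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

output_ids = model.generate(
    input_ids,
    max_length=128,  # matches the block size in the model name
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens.
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```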

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
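
These values map directly onto the standard transformers Trainer API. Below is a minimal, hedged reconstruction of such a run; the dataset is a toy placeholder (the real FAQ data is undocumented), the per-epoch evaluation strategy is inferred from the results table, and the Adam betas/epsilon match the Trainer defaults:

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "microsoft/DialoGPT-large"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Toy stand-in data: the actual FAQ dataset behind this checkpoint is not documented.
texts = ["Q: What does this model do? A: It answers FAQs."] * 16

def tokenize(batch):
    # block size 128, per the model name
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = Dataset.from_dict({"text": texts}).map(
    tokenize, batched=True, remove_columns=["text"]
)

args = TrainingArguments(
    output_dir="DialoGPT-large-faqs-block-size-128-bs-16-lr-5e-5",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumption: the results table shows one eval per epoch
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    eval_dataset=dataset,  # placeholder: the real train/eval split is not documented
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```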

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 40   | 2.6793          |
| No log        | 2.0   | 80   | 2.3038          |
| No log        | 3.0   | 120  | 2.2566          |
| No log        | 4.0   | 160  | 2.3382          |
| No log        | 5.0   | 200  | 2.5499          |
| No log        | 6.0   | 240  | 2.6927          |
| No log        | 7.0   | 280  | 2.8513          |
| No log        | 8.0   | 320  | 2.9774          |
| No log        | 9.0   | 360  | 3.0255          |
| No log        | 10.0  | 400  | 3.1119          |
| No log        | 11.0  | 440  | 3.1643          |
| No log        | 12.0  | 480  | 3.2005          |
| 0.9696        | 13.0  | 520  | 3.2673          |
| 0.9696        | 14.0  | 560  | 3.2855          |
| 0.9696        | 15.0  | 600  | 3.3351          |
| 0.9696        | 16.0  | 640  | 3.3462          |
| 0.9696        | 17.0  | 680  | 3.3375          |
| 0.9696        | 18.0  | 720  | 3.3614          |
| 0.9696        | 19.0  | 760  | 3.3648          |
| 0.9696        | 20.0  | 800  | 3.3741          |
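
Validation loss bottoms out at 2.2566 after epoch 3 and rises steadily thereafter, so the final checkpoint (3.3741 at epoch 20) is well past the point of overfitting. Since the reported loss is token-level cross-entropy, perplexity is its exponential; a quick check using the table values:

```python
import math

# Perplexity = exp(cross-entropy loss), using values from the table above.
print(f"best (epoch 3): {math.exp(2.2566):.1f}")    # ~9.6
print(f"final (epoch 20): {math.exp(3.3741):.1f}")  # ~29.2
```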

Framework versions

  • Transformers 4.33.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.4.dev0
  • Tokenizers 0.13.3