random_transcript_conv

This model is a fine-tuned version of gpt2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.2220
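
Since this is a gpt2-style causal language model, it can be loaded and sampled with the standard transformers API. A minimal sketch, assuming the checkpoint is published on the Hugging Face Hub under this card's repository id, fpadovani/random_transcript_conv:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this card; adjust if the checkpoint lives elsewhere.
model_id = "fpadovani/random_transcript_conv"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation from a prompt.
inputs = tokenizer("Okay, so what happened was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```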

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent configuration follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: reduce_lr_on_plateau
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 1
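
As a reference, these values map directly onto transformers.TrainingArguments. This is a minimal sketch, not the author's actual training script; output_dir and anything not listed above are placeholders:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="random_transcript_conv",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="reduce_lr_on_plateau",
    warmup_steps=500,
    num_train_epochs=1,
)
```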

Training results

Training Loss   Epoch     Step   Validation Loss
4.7264          0.0254    1000   4.4961
4.2766          0.0508    2000   4.2143
4.1231          0.0762    3000   4.0323
4.0307          0.1016    4000   3.9221
3.8887          0.1270    5000   3.8578
3.8689          0.1524    6000   3.7800
3.7808          0.1778    7000   3.7245
3.7420          0.2032    8000   3.6854
3.7303          0.2285    9000   3.6259
3.5985          0.2539   10000   3.6000
3.6448          0.2793   11000   3.5646
3.6531          0.3047   12000   3.5310
3.4630          0.3301   13000   3.5120
3.5609          0.3555   14000   3.4827
3.5348          0.3809   15000   3.4513
3.4552          0.4063   16000   3.4491
3.4829          0.4317   17000   3.4177
3.4333          0.4571   18000   3.3998
3.4369          0.4825   19000   3.3927
3.4465          0.5079   20000   3.3694
3.2959          0.5333   21000   3.3755
3.3914          0.5587   22000   3.3508
3.4190          0.5841   23000   3.3296
3.2619          0.6095   24000   3.3346
3.3485          0.6349   25000   3.3173
3.3355          0.6603   26000   3.3090
3.3004          0.6856   27000   3.3027
3.3105          0.7110   28000   3.2894
3.2625          0.7364   29000   3.2808
3.3031          0.7618   30000   3.2878
3.3047          0.7872   31000   3.2691
3.1521          0.8126   32000   3.2749
3.2836          0.8380   33000   3.2561
3.2872          0.8634   34000   3.2511
3.1762          0.8888   35000   3.2519
3.2412          0.9142   36000   3.2455
3.2428          0.9396   37000   3.2323
3.2216          0.9650   38000   3.2419
3.2271          0.9904   39000   3.2220
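
The final validation loss matches the evaluation loss reported at the top of this card. Assuming the logged values are mean per-token cross-entropy in nats (the usual Trainer convention), they convert directly to perplexity as a quick sanity check:

```python
import math

# Perplexity = exp(cross-entropy loss in nats); convention assumed above.
final_val_loss = 3.2220
print(f"perplexity ≈ {math.exp(final_val_loss):.1f}")  # ≈ 25.1
```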

Framework versions

  • Transformers 4.45.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.20.1
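
To reproduce the environment, the pinned versions above can be verified after installation. A small sketch using the packages' standard version attributes:

```python
import transformers, torch, datasets, tokenizers

# Expected values, per this card: 4.45.2 / 2.4.1+cu121 / 3.0.1 / 0.20.1
for pkg in (transformers, torch, datasets, tokenizers):
    print(pkg.__name__, pkg.__version__)
```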