reverse_transcript_conv

This model is a fine-tuned version of gpt2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.2532

Model description

More information needed. (Repository metadata lists the checkpoint at 1.86M parameters, stored as F32 safetensors.)

Intended uses & limitations

More information needed
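
While the intended use is undocumented, the checkpoint is a standard GPT-2 causal LM and can be loaded with the usual transformers classes. A minimal sketch (the prompt is only a placeholder, since the expected input format is not described):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fpadovani/reverse_transcript_conv"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder prompt: the appropriate input format depends on the
# (undocumented) training data.
inputs = tokenizer("hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```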

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: reduce_lr_on_plateau
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 1
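
A sketch of how these settings map onto TrainingArguments, assuming single-device training (so the batch sizes above are per device); the 1000-step evaluation cadence is read off the results table below, and output_dir is hypothetical:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="reverse_transcript_conv",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,             # the Adam settings listed above are the
    adam_beta2=0.999,           # library defaults
    adam_epsilon=1e-8,
    lr_scheduler_type="reduce_lr_on_plateau",
    warmup_steps=500,
    num_train_epochs=1,
    eval_strategy="steps",      # validation every 1000 steps (see table below)
    eval_steps=1000,
    logging_steps=1000,
)
```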

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 4.6781        | 0.0254 | 1000  | 4.4922          |
| 4.3077        | 0.0508 | 2000  | 4.1938          |
| 4.1120        | 0.0762 | 3000  | 4.0437          |
| 4.0246        | 0.1016 | 4000  | 3.9360          |
| 3.9453        | 0.1270 | 5000  | 3.8491          |
| 3.8408        | 0.1524 | 6000  | 3.8078          |
| 3.8155        | 0.1778 | 7000  | 3.7247          |
| 3.7213        | 0.2032 | 8000  | 3.6968          |
| 3.7151        | 0.2286 | 9000  | 3.6513          |
| 3.7075        | 0.2540 | 10000 | 3.6007          |
| 3.5585        | 0.2794 | 11000 | 3.5847          |
| 3.6149        | 0.3047 | 12000 | 3.5467          |
| 3.5912        | 0.3301 | 13000 | 3.5183          |
| 3.4807        | 0.3555 | 14000 | 3.4998          |
| 3.5226        | 0.3809 | 15000 | 3.4750          |
| 3.4980        | 0.4063 | 16000 | 3.4569          |
| 3.4416        | 0.4317 | 17000 | 3.4453          |
| 3.4828        | 0.4571 | 18000 | 3.4140          |
| 3.3674        | 0.4825 | 19000 | 3.4138          |
| 3.4523        | 0.5079 | 20000 | 3.3858          |
| 3.4875        | 0.5333 | 21000 | 3.3705          |
| 3.2789        | 0.5587 | 22000 | 3.3777          |
| 3.3742        | 0.5841 | 23000 | 3.3513          |
| 3.3978        | 0.6095 | 24000 | 3.3461          |
| 3.2839        | 0.6349 | 25000 | 3.3452          |
| 3.3467        | 0.6603 | 26000 | 3.3287          |
| 3.3192        | 0.6857 | 27000 | 3.3149          |
| 3.3158        | 0.7111 | 28000 | 3.3185          |
| 3.3437        | 0.7365 | 29000 | 3.2969          |
| 3.2170        | 0.7619 | 30000 | 3.3135          |
| 3.2955        | 0.7873 | 31000 | 3.2879          |
| 3.3673        | 0.8127 | 32000 | 3.2781          |
| 3.1660        | 0.8381 | 33000 | 3.2869          |
| 3.2655        | 0.8634 | 34000 | 3.2728          |
| 3.3123        | 0.8888 | 35000 | 3.2662          |
| 3.1935        | 0.9142 | 36000 | 3.2696          |
| 3.2581        | 0.9396 | 37000 | 3.2558          |
| 3.2193        | 0.9650 | 38000 | 3.2571          |
| 3.2243        | 0.9904 | 39000 | 3.2532          |
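
Assuming the usual causal-LM objective (per-token cross-entropy in nats), the final validation loss of 3.2532 corresponds to a perplexity of about 25.9:

```python
import math

# Perplexity implied by the final validation cross-entropy.
print(math.exp(3.2532))  # ≈ 25.9
```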

Framework versions

  • Transformers 4.45.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.20.1