age_sentence

This model is a fine-tuned version of gpt2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 4.0672
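
The card does not include example code, but the model uses the standard gpt2 causal-LM architecture, so it should load with the usual transformers API. A minimal sketch, assuming the Hub repo ID fpadovani/age_sentence and an arbitrary prompt:

```python
# Minimal usage sketch; the repo ID is taken from the model page, while the
# prompt and generation settings are purely illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("fpadovani/age_sentence")
model = AutoModelForCausalLM.from_pretrained("fpadovani/age_sentence")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```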

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: reduce_lr_on_plateau
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 1
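
The original training script is not part of this card; the sketch below only shows how the listed values could map onto transformers.TrainingArguments. Anything not listed above (output directory, dataset setup, evaluation and logging cadence) is omitted or assumed.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments;
# this is not the author's original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="age_sentence",            # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="reduce_lr_on_plateau",
    warmup_steps=500,
    num_train_epochs=1,
)
```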

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 7.0408        | 0.0635 | 200  | 5.8463          |
| 5.2673        | 0.1270 | 400  | 5.0020          |
| 4.7671        | 0.1905 | 600  | 4.7375          |
| 4.559         | 0.2540 | 800  | 4.6046          |
| 4.485         | 0.3176 | 1000 | 4.4890          |
| 4.501         | 0.3811 | 1200 | 4.4029          |
| 4.4143        | 0.4446 | 1400 | 4.3288          |
| 4.3405        | 0.5081 | 1600 | 4.2652          |
| 4.2983        | 0.5716 | 1800 | 4.2153          |
| 4.2299        | 0.6351 | 2000 | 4.1774          |
| 4.2005        | 0.6986 | 2200 | 4.1477          |
| 4.1776        | 0.7621 | 2400 | 4.1182          |
| 4.1379        | 0.8257 | 2600 | 4.0924          |
| 4.1083        | 0.8892 | 2800 | 4.0728          |
| 4.1399        | 0.9527 | 3000 | 4.0672          |
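
The card reports only the loss; assuming it is the mean per-token cross-entropy in nats (the Trainer's default for causal language modeling), the final validation loss corresponds to a perplexity of roughly 58.4:

```python
# Perplexity from the final validation loss, assuming the reported loss is
# mean per-token cross-entropy in nats.
import math

final_val_loss = 4.0672
print(round(math.exp(final_val_loss), 1))  # ≈ 58.4
```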

Framework versions

  • Transformers 4.45.2
  • PyTorch 2.4.1
  • Datasets 3.0.1
  • Tokenizers 0.20.1