MATH_training_response_gpt-4o-mini_only_right

This model is a fine-tuned version of Qwen/Qwen2.5-14B-Instruct on the MATH_training_response_gpt-4o-mini_only_right dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0833
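
Assuming this is a mean token-level cross-entropy loss, it corresponds to a perplexity of exp(0.0833) ≈ 1.087 on the evaluation set.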

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 4
  • total_eval_batch_size: 4
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 2.0
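
As a point of reference, here is a minimal sketch of how these settings might map onto a Hugging Face `TrainingArguments` plus PEFT LoRA setup. Only the values from the list above are taken from this card; the LoRA rank, alpha, and target modules are assumptions, since the card does not record them.

```python
# Hypothetical reconstruction of the training configuration. The LoRA
# settings are assumptions; the TrainingArguments values mirror the
# hyperparameter list above.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-14B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)

# Assumed LoRA settings -- not recorded in this card.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# With 4 GPUs and a per-device batch size of 1, the effective
# train batch size is 4, matching total_train_batch_size above.
args = TrainingArguments(
    output_dir="qwen2.5-14b-lora-math",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2.0,
    eval_strategy="steps",
    eval_steps=200,
)
```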

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1197        | 0.1455 | 200  | 0.1125          |
| 0.0971        | 0.2909 | 400  | 0.1020          |
| 0.0967        | 0.4364 | 600  | 0.0961          |
| 0.0991        | 0.5818 | 800  | 0.0912          |
| 0.0801        | 0.7273 | 1000 | 0.0895          |
| 0.1174        | 0.8727 | 1200 | 0.0874          |
| 0.067         | 1.0182 | 1400 | 0.0862          |
| 0.09          | 1.1636 | 1600 | 0.0863          |
| 0.0777        | 1.3091 | 1800 | 0.0850          |
| 0.051         | 1.4545 | 2000 | 0.0841          |
| 0.074         | 1.6    | 2200 | 0.0845          |
| 0.0667        | 1.7455 | 2400 | 0.0839          |
| 0.0456        | 1.8909 | 2600 | 0.0835          |

Framework versions

  • PEFT 0.12.0
  • Transformers 4.46.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3
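
Since this repository is a PEFT LoRA adapter, a minimal loading sketch follows. The repo id comes from the model tree below; the prompt and generation settings are illustrative, not taken from this card.

```python
# Minimal inference sketch: load the LoRA adapter on top of the base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "Qwen/Qwen2.5-14B-Instruct"
adapter = "cutelemonlili/Qwen2.5-14B-Instruct_lora_MATH_training_response_gpt-4o-mini_only_right"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)

# Qwen2.5-Instruct models expect the chat template.
messages = [{"role": "user", "content": "Solve: If 3x + 5 = 20, what is x?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```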

Model tree for cutelemonlili/Qwen2.5-14B-Instruct_lora_MATH_training_response_gpt-4o-mini_only_right

  • Base model: Qwen/Qwen2.5-14B
  • This model: LoRA adapter on the base