MATH_training_response_Qwen2.5_14B_only_right

This model is a fine-tuned version of Qwen/Qwen2.5-14B-Instruct on the MATH_training_response_Qwen2.5_14B_only_right dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0381
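
For context, if the reported loss is a mean token-level cross-entropy (the usual convention for causal-LM fine-tuning), it corresponds to an evaluation perplexity of exp(0.0381) ≈ 1.039.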

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after the list):

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 4
  • total_eval_batch_size: 4
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 2.0
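
For reference, these settings map onto a transformers TrainingArguments object roughly as sketched below. This is a hedged reconstruction, not the original launch script: output_dir is a placeholder, and anything not listed above (precision, gradient accumulation, logging cadence) is left at its default or omitted.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters.
args = TrainingArguments(
    output_dir="qwen2.5-14b-lora-math",  # placeholder, not from the card
    learning_rate=1e-4,                  # learning_rate: 0.0001
    per_device_train_batch_size=1,       # train_batch_size: 1
    per_device_eval_batch_size=1,        # eval_batch_size: 1
    seed=42,
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,                    # lr_scheduler_warmup_ratio: 0.1
    optim="adamw_torch",
    adam_beta1=0.9,                      # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# With 4 GPUs (distributed_type: multi-GPU, num_devices: 4), the effective
# batch size is 1 per device x 4 devices = 4, matching
# total_train_batch_size: 4 and total_eval_batch_size: 4.
```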

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 0.0514        | 0.1404 | 200  | 0.0501          |
| 0.0505        | 0.2807 | 400  | 0.0442          |
| 0.0339        | 0.4211 | 600  | 0.0449          |
| 0.0418        | 0.5614 | 800  | 0.0442          |
| 0.0518        | 0.7018 | 1000 | 0.0431          |
| 0.0376        | 0.8421 | 1200 | 0.0409          |
| 0.0311        | 0.9825 | 1400 | 0.0377          |
| 0.0251        | 1.1228 | 1600 | 0.0393          |
| 0.0386        | 1.2632 | 1800 | 0.0397          |
| 0.034         | 1.4035 | 2000 | 0.0392          |
| 0.023         | 1.5439 | 2200 | 0.0391          |
| 0.0204        | 1.6842 | 2400 | 0.0382          |
| 0.0079        | 1.8246 | 2600 | 0.0381          |
| 0.0192        | 1.9649 | 2800 | 0.0378          |

Framework versions

  • PEFT 0.12.0
  • Transformers 4.46.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3
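
Since this repository is a LoRA adapter (PEFT 0.12.0) rather than a full model, it is typically loaded on top of the base instruct model. A minimal sketch, assuming standard Transformers and PEFT usage; the dtype and device_map choices are assumptions, not settings from the card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base instruct model the adapter was trained from.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B-Instruct",
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")

# Apply the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(
    base,
    "cutelemonlili/Qwen2.5-14B-Instruct_lora_MATH_training_response_Qwen2.5_14B_only_right",
)
```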

Model tree for cutelemonlili/Qwen2.5-14B-Instruct_lora_MATH_training_response_Qwen2.5_14B_only_right

  • Base model: Qwen/Qwen2.5-14B
  • Adapter: this model (LoRA on Qwen/Qwen2.5-14B-Instruct)