---
library_name: peft
base_model: peiyi9979/math-shepherd-mistral-7b-prm
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - precision
  - recall
  - f1
model-index:
  - name: v3_mistral_balance1_base_lora
    results: []
---

v3_mistral_balance1_base_lora

This model is a LoRA adapter (trained with PEFT) fine-tuned from peiyi9979/math-shepherd-mistral-7b-prm on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0134
  • Accuracy: 0.9975
  • Precision: 0.9643
  • Recall: 0.9474
  • F1: 0.9558
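
Because this repository holds a PEFT (LoRA) adapter rather than a full checkpoint, it would typically be loaded on top of the base model. A minimal loading sketch follows; the adapter repo id mtzig/v3_mistral_balance1_base_lora is an assumption inferred from the model name, not confirmed by this card:

```python
# A minimal loading sketch. The adapter id below is hypothetical
# (inferred from the model name); substitute the actual adapter path.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "peiyi9979/math-shepherd-mistral-7b-prm"
adapter_id = "mtzig/v3_mistral_balance1_base_lora"  # assumption

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Optionally fold the adapter into the base weights for standalone use:
# model = model.merge_and_unload()
```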

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 8569382
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64 (= 8 per device × 4 GPUs × 2 gradient accumulation steps)
  • total_eval_batch_size: 32 (= 8 per device × 4 GPUs)
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
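
For reference, these hyperparameters map onto transformers TrainingArguments roughly as sketched below. The output_dir is a placeholder, and the 4-GPU setup comes from the launcher (e.g. torchrun or accelerate) rather than from these arguments:

```python
# A sketch mirroring the hyperparameters listed above; output_dir is an
# assumption. With 4 GPUs, a per-device batch of 8 and 2 accumulation
# steps give the reported total train batch size of 64.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="v3_mistral_balance1_base_lora",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=8569382,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```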

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 0      | 0    | 0.3256          | 0.9369   | 0.1429    | 0.2456 | 0.1806 |
| 0.4498        | 0.0258 | 20   | 0.2868          | 0.9474   | 0.1449    | 0.1754 | 0.1587 |
| 0.242         | 0.0515 | 40   | 0.1434          | 0.9672   | 0.2       | 0.0526 | 0.0833 |
| 0.1628        | 0.0773 | 60   | 0.1080          | 0.9692   | 0.3810    | 0.1404 | 0.2051 |
| 0.1241        | 0.1031 | 80   | 0.0874          | 0.9707   | 0.475     | 0.3333 | 0.3918 |
| 0.0676        | 0.1289 | 100  | 0.0690          | 0.9692   | 0.4713    | 0.7193 | 0.5694 |
| 0.03          | 0.1546 | 120  | 0.0472          | 0.9821   | 0.6296    | 0.8947 | 0.7391 |
| 0.0109        | 0.1804 | 140  | 0.0341          | 0.9911   | 0.8305    | 0.8596 | 0.8448 |
| 0.043         | 0.2062 | 160  | 0.0337          | 0.9916   | 0.8333    | 0.8772 | 0.8547 |
| 0.0233        | 0.2320 | 180  | 0.0272          | 0.9926   | 0.8281    | 0.9298 | 0.8760 |
| 0.029         | 0.2577 | 200  | 0.0233          | 0.9921   | 0.8254    | 0.9123 | 0.8667 |
| 0.0138        | 0.2835 | 220  | 0.0210          | 0.9930   | 0.8413    | 0.9298 | 0.8833 |
| 0.0141        | 0.3093 | 240  | 0.0175          | 0.9955   | 0.9286    | 0.9123 | 0.9204 |
| 0.0037        | 0.3351 | 260  | 0.0170          | 0.9940   | 0.8814    | 0.9123 | 0.8966 |
| 0.0076        | 0.3608 | 280  | 0.0186          | 0.9955   | 0.9       | 0.9474 | 0.9231 |
| 0.0133        | 0.3866 | 300  | 0.0152          | 0.9975   | 0.9643    | 0.9474 | 0.9558 |
| 0.0084        | 0.4124 | 320  | 0.0164          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.0039        | 0.4381 | 340  | 0.0141          | 0.9975   | 0.9643    | 0.9474 | 0.9558 |
| 0.0102        | 0.4639 | 360  | 0.0138          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.0019        | 0.4897 | 380  | 0.0152          | 0.9965   | 0.9310    | 0.9474 | 0.9391 |
| 0.0007        | 0.5155 | 400  | 0.0145          | 0.9970   | 0.9636    | 0.9298 | 0.9464 |
| 0.0028        | 0.5412 | 420  | 0.0141          | 0.9965   | 0.9310    | 0.9474 | 0.9391 |
| 0.0035        | 0.5670 | 440  | 0.0147          | 0.9960   | 0.9623    | 0.8947 | 0.9273 |
| 0.0016        | 0.5928 | 460  | 0.0159          | 0.9965   | 0.9808    | 0.8947 | 0.9358 |
| 0.0262        | 0.6186 | 480  | 0.0141          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.0294        | 0.6443 | 500  | 0.0165          | 0.9970   | 0.9811    | 0.9123 | 0.9455 |
| 0.0054        | 0.6701 | 520  | 0.0145          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.0293        | 0.6959 | 540  | 0.0148          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.0133        | 0.7216 | 560  | 0.0137          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.0028        | 0.7474 | 580  | 0.0141          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0012        | 0.7732 | 600  | 0.0142          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0018        | 0.7990 | 620  | 0.0136          | 0.9975   | 0.9643    | 0.9474 | 0.9558 |
| 0.0164        | 0.8247 | 640  | 0.0140          | 0.9975   | 0.9643    | 0.9474 | 0.9558 |
| 0.0359        | 0.8505 | 660  | 0.0143          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0038        | 0.8763 | 680  | 0.0137          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.011         | 0.9021 | 700  | 0.0134          | 0.9975   | 0.9643    | 0.9474 | 0.9558 |
| 0.0144        | 0.9278 | 720  | 0.0134          | 0.9975   | 0.9643    | 0.9474 | 0.9558 |
| 0.0284        | 0.9536 | 740  | 0.0134          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0066        | 0.9794 | 760  | 0.0134          | 0.9975   | 0.9643    | 0.9474 | 0.9558 |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3
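
A quick runtime check, a generic sketch rather than anything card-specific, to confirm that an environment matches these versions:

```python
# Print installed versions to compare against the list above.
import datasets
import peft
import tokenizers
import torch
import transformers

for name, mod in [
    ("PEFT", peft),           # expect 0.13.2
    ("Transformers", transformers),  # expect 4.46.0
    ("PyTorch", torch),       # expect 2.5.1+cu124
    ("Datasets", datasets),   # expect 3.1.0
    ("Tokenizers", tokenizers),      # expect 0.20.3
]:
    print(f"{name}: {mod.__version__}")
```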