# UTI2_M2_750steps_1e5rate_05beta_CSFTDPO
This model is a DPO fine-tuned version of [tsavage68/UTI_M2_1000steps_1e7rate_SFT](https://huggingface.co/tsavage68/UTI_M2_1000steps_1e7rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set (standard DPO trainer metrics; see the note after the list):
- Loss: 2.8757
- Rewards/chosen: -6.0873
- Rewards/rejected: -4.3997
- Rewards/accuracies: 0.0700
- Rewards/margins: -1.6876
- Logps/rejected: -18.1734
- Logps/chosen: -16.7171
- Logits/rejected: -2.6754
- Logits/chosen: -2.6755
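The reward metrics above follow the standard DPO formulation: each reward is the policy-vs-reference log-probability ratio scaled by β (0.5 here, per the "05beta" in the model name), and the margin is the chosen-minus-rejected reward gap. A negative margin, as reported above, means the policy ends up assigning relatively higher probability to the rejected completions. For reference, the standard DPO definitions (not taken from the training script):

$$
r_\theta(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right),
\qquad
\text{margin} = r_\theta(x, y_{\text{chosen}}) - r_\theta(x, y_{\text{rejected}})
$$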
## Model description

This model continues tsavage68/UTI_M2_1000steps_1e7rate_SFT (itself a fine-tune of mistralai/Mistral-7B-Instruct-v0.2) with 750 steps of DPO training at a 1e-5 learning rate and β = 0.5, as the model name indicates.
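A minimal loading and generation sketch with `transformers` (illustrative only; the prompt and generation settings below are assumptions, not taken from the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/UTI2_M2_750steps_1e5rate_05beta_CSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-Instruct checkpoints use the [INST] chat format; the tokenizer's
# chat template applies it for us.
messages = [{"role": "user", "content": "Example prompt goes here."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```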
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of an equivalent TRL configuration follows the list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 750
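The card does not state the training framework, but the metric names match TRL's `DPOTrainer`. A minimal sketch under that assumption (the preference dataset is a placeholder, and exact argument names vary across TRL versions):

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Start from the SFT checkpoint named in the card.
sft_id = "tsavage68/UTI_M2_1000steps_1e7rate_SFT"
model = AutoModelForCausalLM.from_pretrained(sft_id)
tokenizer = AutoTokenizer.from_pretrained(sft_id)

# Placeholder preference data; the actual dataset is not specified in the card.
train_dataset = Dataset.from_dict({
    "prompt": ["..."],
    "chosen": ["..."],
    "rejected": ["..."],
})

# Hyperparameters copied from the list above; beta=0.5 per "05beta" in the
# model name. The Adam betas/epsilon in the card are the defaults.
args = DPOConfig(
    output_dir="UTI2_M2_750steps_1e5rate_05beta_CSFTDPO",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # 2 x 2 = total train batch size of 4
    max_steps=750,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    seed=42,
    beta=0.5,
)

trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```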
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:--------------:|
| 2.0624 | 0.3333 | 25 | 2.5226 | -1.8449 | -0.4090 | 0.0700 | -1.4358 | -10.1920 | -8.2322 | -2.5575 | -2.5577 |
| 3.5044 | 0.6667 | 50 | 4.5890 | -5.8971 | -3.0050 | 0.0800 | -2.8921 | -15.3840 | -16.3367 | -1.8026 | -1.8027 |
| 3.7907 | 1.0 | 75 | 2.9383 | -6.0760 | -3.7525 | 0.0300 | -2.3235 | -16.8790 | -16.6946 | -3.0195 | -3.0194 |
| 2.0779 | 1.3333 | 100 | 3.1415 | -5.1412 | -2.7057 | 0.0400 | -2.4355 | -14.7854 | -14.8249 | -2.3865 | -2.3865 |
| 2.1307 | 1.6667 | 125 | 4.6202 | -7.0602 | -3.8899 | 0.0700 | -3.1703 | -17.1538 | -18.6629 | -2.7484 | -2.7482 |
| 2.7273 | 2.0 | 150 | 3.0326 | -5.1418 | -2.7562 | 0.0300 | -2.3856 | -14.8864 | -14.8261 | -2.0909 | -2.0909 |
| 1.5353 | 2.3333 | 175 | 3.1462 | -5.0136 | -2.5758 | 0.0400 | -2.4378 | -14.5256 | -14.5696 | 0.3331 | 0.3331 |
| 4.0743 | 2.6667 | 200 | 2.9971 | -4.9664 | -2.5983 | 0.0300 | -2.3681 | -14.5706 | -14.4753 | -0.0159 | -0.0159 |
| 0.9986 | 3.0 | 225 | 3.0856 | -4.9856 | -2.5747 | 0.0300 | -2.4109 | -14.5234 | -14.5137 | 0.3048 | 0.3048 |
| 1.6525 | 3.3333 | 250 | 3.1592 | -5.0282 | -2.5848 | 0.0400 | -2.4433 | -14.5436 | -14.5988 | 0.4805 | 0.4805 |
| 1.9899 | 3.6667 | 275 | 3.1493 | -5.0549 | -2.6157 | 0.0400 | -2.4392 | -14.6055 | -14.6523 | 0.4901 | 0.4901 |
| 1.4214 | 4.0 | 300 | 3.0891 | -5.0107 | -2.5984 | 0.0300 | -2.4123 | -14.5708 | -14.5639 | 0.2933 | 0.2933 |
| 2.4027 | 4.3333 | 325 | 3.1322 | -5.0026 | -2.5711 | 0.0400 | -2.4315 | -14.5162 | -14.5476 | 0.2898 | 0.2898 |
| 3.0815 | 4.6667 | 350 | 3.1138 | -5.0100 | -2.5863 | 0.0300 | -2.4236 | -14.5466 | -14.5624 | 0.2376 | 0.2375 |
| 3.9086 | 5.0 | 375 | 3.0901 | -5.0259 | -2.6131 | 0.0300 | -2.4128 | -14.6001 | -14.5942 | 0.2565 | 0.2565 |
| 1.9593 | 5.3333 | 400 | 3.1018 | -5.0272 | -2.6090 | 0.0300 | -2.4182 | -14.5919 | -14.5968 | 0.2684 | 0.2684 |
| 2.4838 | 5.6667 | 425 | 3.0355 | -4.9905 | -2.6034 | 0.0300 | -2.3871 | -14.5807 | -14.5234 | 0.1890 | 0.1890 |
| 3.7035 | 6.0 | 450 | 3.0602 | -5.0100 | -2.6110 | 0.0300 | -2.3990 | -14.5959 | -14.5624 | 0.2694 | 0.2694 |
| 1.9548 | 6.3333 | 475 | 3.1280 | -4.9982 | -2.5683 | 0.0300 | -2.4299 | -14.5106 | -14.5389 | 0.2830 | 0.2830 |
| 2.401 | 6.6667 | 500 | 3.0907 | -4.9860 | -2.5730 | 0.0300 | -2.4129 | -14.5200 | -14.5144 | 0.2371 | 0.2371 |
| 3.021 | 7.0 | 525 | 3.0511 | -4.8229 | -2.4291 | 0.0300 | -2.3938 | -14.2321 | -14.1882 | -0.3978 | -0.3978 |
| 3.0732 | 7.3333 | 550 | 3.0204 | -4.5795 | -2.2045 | 0.0300 | -2.3750 | -13.7831 | -13.7015 | -1.2656 | -1.2657 |
| 1.1973 | 7.6667 | 575 | 3.0684 | -4.5390 | -2.1704 | 0.0300 | -2.3686 | -13.7148 | -13.6205 | -1.1867 | -1.1870 |
| 2.8529 | 8.0 | 600 | 3.0303 | -4.5060 | -2.1298 | 0.0300 | -2.3762 | -13.6335 | -13.5544 | -1.7173 | -1.7173 |
| 2.8072 | 8.3333 | 625 | 3.0587 | -4.5594 | -2.1832 | 0.0300 | -2.3762 | -13.7404 | -13.6613 | -1.2299 | -1.2300 |
| 3.3903 | 8.6667 | 650 | 2.8325 | -5.9445 | -4.0748 | 0.0600 | -1.8697 | -17.5236 | -16.4315 | -2.8838 | -2.8840 |
| 2.174 | 9.0 | 675 | 3.1512 | -7.3312 | -5.4478 | 0.0700 | -1.8834 | -20.2696 | -19.2049 | -2.8619 | -2.8620 |
| 1.6297 | 9.3333 | 700 | 3.0978 | -6.3597 | -4.4688 | 0.0800 | -1.8909 | -18.3115 | -17.2618 | -2.6659 | -2.6658 |
| 2.3962 | 9.6667 | 725 | 2.8899 | -6.0876 | -4.3898 | 0.0700 | -1.6978 | -18.1535 | -16.7177 | -2.6732 | -2.6734 |
| 1.6021 | 10.0 | 750 | 2.8757 | -6.0873 | -4.3997 | 0.0700 | -1.6876 | -18.1734 | -16.7171 | -2.6754 | -2.6755 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1