# UTI2_M2_1000steps_1e6rate_03beta_CSFTDPO

This model is a fine-tuned version of [tsavage68/UTI_M2_1000steps_1e7rate_SFT](https://huggingface.co/tsavage68/UTI_M2_1000steps_1e7rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5476
- Rewards/chosen: 0.6814
- Rewards/rejected: -4.6544
- Rewards/accuracies: 0.2100
- Rewards/margins: 5.3358
- Logps/rejected: -24.8887
- Logps/chosen: -2.2712
- Logits/rejected: -2.9633
- Logits/chosen: -2.9624
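These are the standard DPO metrics that TRL logs: each completion's implicit reward is the β-scaled log-probability ratio between the policy and the frozen reference model (here, the SFT checkpoint), and the margin is simply chosen reward minus rejected reward (0.6814 − (−4.6544) = 5.3358 above). Assuming β = 0.3, as the `03beta` suffix in the model name suggests:

$$
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\left[ \log \sigma\big( r_\theta(x, y_w) - r_\theta(x, y_l) \big) \right]
$$

Rewards/accuracies is the fraction of evaluation pairs for which the chosen completion's reward exceeds the rejected one's.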
## Model description
More information needed
## Intended uses & limitations
More information needed
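No usage guidance is documented yet. As a minimal, untested sketch, the checkpoint should load with the standard transformers API; the chat template is assumed to be inherited from mistralai/Mistral-7B-Instruct-v0.2 (per the model tree at the end of this card), and the example prompt is purely illustrative:

```python
# Minimal inference sketch. Assumptions: the repo id matches this card's title,
# the Mistral chat template is inherited from the base model, and an fp16-capable
# GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/UTI2_M2_1000steps_1e6rate_03beta_CSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative prompt only; the intended task is not documented.
messages = [{"role": "user", "content": "Summarize first-line treatment considerations for an uncomplicated UTI."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```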
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
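Putting these together, a hypothetical reconstruction of the run with TRL's `DPOTrainer` might look like the sketch below. The preference dataset is undocumented ("unknown dataset"), so a placeholder file name is used; `beta=0.3` is inferred from the model name, not stated in the card; and the `tokenizer=` keyword assumes a TRL release contemporary with the Transformers version listed under framework versions.

```python
# Hypothetical reconstruction of the training setup with TRL's DPOTrainer.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tsavage68/UTI_M2_1000steps_1e7rate_SFT"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: DPOTrainer expects "prompt"/"chosen"/"rejected" columns.
train_dataset = load_dataset("json", data_files="preferences.json")["train"]

args = DPOConfig(
    output_dir="UTI2_M2_1000steps_1e6rate_03beta_CSFTDPO",
    beta=0.3,                       # inferred from "03beta" in the model name
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # effective train batch size of 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

# ref_model=None lets TRL clone the SFT model as the frozen reference.
trainer = DPOTrainer(
    model=model, ref_model=None, args=args,
    train_dataset=train_dataset, tokenizer=tokenizer,
)
trainer.train()
```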
### Training results
| Training Loss | Epoch   | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.57          | 0.3333  | 25   | 0.5476          | 0.1996         | -1.9739          | 0.2100             | 2.1735          | -15.9537       | -3.8773      | -2.6278         | -2.6270       |
| 0.5545        | 0.6667  | 50   | 0.8328          | -0.3043        | -2.4867          | 0.1900             | 2.1824          | -17.6630       | -5.5567      | -2.6282         | -2.6278       |
| 0.5718        | 1.0     | 75   | 0.6837          | -0.0010        | -4.4519          | 0.2000             | 4.4509          | -24.2136       | -4.5457      | -2.5876         | -2.5866       |
| 1.5715        | 1.3333  | 100  | 0.5532          | 0.1299         | -1.4214          | 0.2100             | 1.5514          | -14.1121       | -4.1094      | -2.7051         | -2.7007       |
| 0.5545        | 1.6667  | 125  | 0.5476          | -0.3852        | -5.4539          | 0.2100             | 5.0687          | -27.5536       | -5.8266      | -2.8901         | -2.8865       |
| 0.5545        | 2.0     | 150  | 0.5476          | 0.2611         | -4.3525          | 0.2100             | 4.6136          | -23.8823       | -3.6721      | -2.7385         | -2.7355       |
| 0.5545        | 2.3333  | 175  | 1.0027          | 0.5350         | -2.3392          | 0.1900             | 2.8741          | -17.1712       | -2.7592      | -2.9130         | -2.9119       |
| 0.4868        | 2.6667  | 200  | 0.5476          | 0.7465         | -4.5835          | 0.2100             | 5.3300          | -24.6522       | -2.0542      | -2.9266         | -2.9263       |
| 0.6412        | 3.0     | 225  | 0.5476          | 0.7465         | -4.5835          | 0.2100             | 5.3300          | -24.6522       | -2.0542      | -2.9266         | -2.9263       |
| 0.5545        | 3.3333  | 250  | 0.5476          | 0.7465         | -4.5835          | 0.2100             | 5.3300          | -24.6522       | -2.0542      | -2.9266         | -2.9263       |
| 0.5372        | 3.6667  | 275  | 0.5476          | 0.7465         | -4.5835          | 0.2100             | 5.3300          | -24.6522       | -2.0542      | -2.9266         | -2.9263       |
| 1.1074        | 4.0     | 300  | 0.5766          | 0.6299         | -3.3690          | 0.2000             | 3.9989          | -20.6038       | -2.4428      | -2.9646         | -2.9642       |
| 0.4679        | 4.3333  | 325  | 0.5476          | 0.6791         | -4.2066          | 0.2100             | 4.8858          | -23.3961       | -2.2788      | -2.9606         | -2.9595       |
| 0.5718        | 4.6667  | 350  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5199        | 5.0     | 375  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5025        | 5.3333  | 400  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5199        | 5.6667  | 425  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5025        | 6.0     | 450  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5718        | 6.3333  | 475  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5718        | 6.6667  | 500  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5025        | 7.0     | 525  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5199        | 7.3333  | 550  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5372        | 7.6667  | 575  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5718        | 8.0     | 600  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5372        | 8.3333  | 625  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.4332        | 8.6667  | 650  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5372        | 9.0     | 675  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5025        | 9.3333  | 700  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5025        | 9.6667  | 725  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5892        | 10.0    | 750  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5199        | 10.3333 | 775  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5199        | 10.6667 | 800  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5372        | 11.0    | 825  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5199        | 11.3333 | 850  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.6065        | 11.6667 | 875  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5718        | 12.0    | 900  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.4159        | 12.3333 | 925  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.6238        | 12.6667 | 950  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.6065        | 13.0    | 975  | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
| 0.5025        | 13.3333 | 1000 | 0.5476          | 0.6814         | -4.6544          | 0.2100             | 5.3358          | -24.8887       | -2.2712      | -2.9633         | -2.9624       |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
## Model tree for tsavage68/UTI2_M2_1000steps_1e6rate_03beta_CSFTDPO

- Base model: mistralai/Mistral-7B-Instruct-v0.2
- Fine-tuned from: tsavage68/UTI_M2_1000steps_1e7rate_SFT