# UTI2_M2_1000steps_1e7rate_01beta_CSFTDPO
This model is a fine-tuned version of tsavage68/UTI_M2_1000steps_1e7rate_SFT on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1464
- Rewards/chosen: 0.3665
- Rewards/rejected: -10.3164
- Rewards/accuracies: 0.8900
- Rewards/margins: 10.6829
- Logps/rejected: -142.5202
- Logps/chosen: -16.2565
- Logits/rejected: -2.4997
- Logits/chosen: -2.5003
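
For reference, the reward columns follow the DPO convention: the implicit reward of a completion is the β-scaled log-probability ratio between the policy and the frozen reference (SFT) model, and the margin is the chosen-minus-rejected difference. Taking β = 0.1 (an assumption based on the `01beta` suffix in the model name), the final evaluation numbers are self-consistent:

$$
\hat r_\theta(x, y) = \beta\left[\log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x)\right],
\qquad
\text{margins} = \hat r_\theta(x, y_w) - \hat r_\theta(x, y_l) = 0.3665 - (-10.3164) = 10.6829.
$$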
## Model description

This model is a DPO fine-tune of tsavage68/UTI_M2_1000steps_1e7rate_SFT, which is itself a supervised fine-tune of mistralai/Mistral-7B-Instruct-v0.2.
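
As a quick-start illustration (not part of the original card), a minimal loading sketch using the standard transformers API; the prompt is hypothetical and the generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/UTI2_M2_1000steps_1e7rate_01beta_CSFTDPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate`
)

# The base model is Mistral-7B-Instruct-v0.2, so the tokenizer ships a chat template
# that handles the [INST] formatting.
messages = [
    {"role": "user", "content": "What are common symptoms of a urinary tract infection?"}  # hypothetical prompt
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```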
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
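
For context, a minimal sketch of how these hyperparameters could map onto a trl `DPOTrainer` run. This is an assumption about the training setup (the card does not include the code): β = 0.1 is inferred from the `01beta` model-name suffix, the preference dataset is unspecified, and exact keyword names vary across trl versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tsavage68/UTI_M2_1000steps_1e7rate_SFT"  # SFT checkpoint used as policy init (and cloned as reference)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Preference dataset with "prompt"/"chosen"/"rejected" columns; the actual dataset is not documented,
# so this path is a placeholder.
train_dataset = load_dataset("json", data_files="preferences.json", split="train")

config = DPOConfig(
    output_dir="UTI2_M2_1000steps_1e7rate_01beta_CSFTDPO",
    beta=0.1,                       # inferred from the "01beta" suffix
    learning_rate=1e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # effective train batch size of 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,       # with None, trl keeps a frozen copy of the policy as the reference model
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # newer trl versions take `processing_class` instead
)
trainer.train()
```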
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:--------------:|
| 0.6904 | 0.3333 | 25 | 0.6532 | 0.0086 | -0.0741 | 0.8500 | 0.0827 | -40.0972 | -19.8359 | -2.6814 | -2.6788 |
| 0.4057 | 0.6667 | 50 | 0.3414 | 0.0792 | -0.9298 | 0.8800 | 1.0089 | -48.6537 | -19.1297 | -2.6601 | -2.6575 |
| 0.0995 | 1.0 | 75 | 0.1276 | 0.0336 | -4.2997 | 0.8800 | 4.3333 | -82.3535 | -19.5858 | -2.5678 | -2.5670 |
| 0.1747 | 1.3333 | 100 | 0.1735 | 0.3821 | -6.8905 | 0.8800 | 7.2726 | -108.2612 | -16.1009 | -2.4982 | -2.4982 |
| 0.0539 | 1.6667 | 125 | 0.1584 | 0.3819 | -8.0583 | 0.8800 | 8.4403 | -119.9394 | -16.1023 | -2.4903 | -2.4915 |
| 0.1214 | 2.0 | 150 | 0.1677 | 0.2553 | -8.8344 | 0.8800 | 9.0898 | -127.7004 | -17.3683 | -2.4883 | -2.4901 |
| 0.3016 | 2.3333 | 175 | 0.1533 | 0.3699 | -8.9219 | 0.8800 | 9.2918 | -128.5751 | -16.2228 | -2.4871 | -2.4884 |
| 0.0526 | 2.6667 | 200 | 0.1804 | 0.3731 | -9.1872 | 0.8800 | 9.5603 | -131.2281 | -16.1902 | -2.4868 | -2.4874 |
| 0.0347 | 3.0 | 225 | 0.1761 | 0.3899 | -9.3041 | 0.8700 | 9.6940 | -132.3968 | -16.0226 | -2.4868 | -2.4876 |
| 0.0347 | 3.3333 | 250 | 0.1726 | 0.3934 | -9.4015 | 0.8800 | 9.7949 | -133.3708 | -15.9876 | -2.4871 | -2.4881 |
| 0.2261 | 3.6667 | 275 | 0.1550 | 0.4175 | -9.3669 | 0.8800 | 9.7845 | -133.0253 | -15.7460 | -2.4852 | -2.4858 |
| 0.0867 | 4.0 | 300 | 0.1583 | 0.3970 | -9.5586 | 0.8800 | 9.9557 | -134.9424 | -15.9512 | -2.4917 | -2.4922 |
| 0.0867 | 4.3333 | 325 | 0.1524 | 0.3834 | -9.6529 | 0.8800 | 10.0364 | -135.8856 | -16.0871 | -2.4922 | -2.4927 |
| 0.0347 | 4.6667 | 350 | 0.1536 | 0.3887 | -9.7552 | 0.8800 | 10.1439 | -136.9079 | -16.0348 | -2.4909 | -2.4916 |
| 0.0693 | 5.0 | 375 | 0.1602 | 0.4007 | -9.8418 | 0.8800 | 10.2425 | -137.7743 | -15.9146 | -2.4931 | -2.4935 |
| 0.0693 | 5.3333 | 400 | 0.1628 | 0.3993 | -9.8672 | 0.8800 | 10.2665 | -138.0285 | -15.9287 | -2.4941 | -2.4944 |
| 0.052 | 5.6667 | 425 | 0.1563 | 0.3714 | -10.0405 | 0.8800 | 10.4119 | -139.7611 | -16.2072 | -2.4935 | -2.4944 |
| 0.0867 | 6.0 | 450 | 0.1575 | 0.3660 | -10.0923 | 0.8800 | 10.4583 | -140.2792 | -16.2615 | -2.4951 | -2.4959 |
| 0.0173 | 6.3333 | 475 | 0.1587 | 0.3680 | -10.1220 | 0.8800 | 10.4899 | -140.5760 | -16.2420 | -2.4957 | -2.4964 |
| 0.1386 | 6.6667 | 500 | 0.1472 | 0.3655 | -10.1866 | 0.8800 | 10.5521 | -141.2223 | -16.2666 | -2.4968 | -2.4976 |
| 0.0173 | 7.0 | 525 | 0.1535 | 0.3586 | -10.2252 | 0.8800 | 10.5838 | -141.6080 | -16.3355 | -2.4973 | -2.4982 |
| 0.0866 | 7.3333 | 550 | 0.1477 | 0.3652 | -10.2285 | 0.8800 | 10.5937 | -141.6415 | -16.2696 | -2.4973 | -2.4980 |
| 0.0347 | 7.6667 | 575 | 0.1505 | 0.3709 | -10.2496 | 0.8800 | 10.6205 | -141.8521 | -16.2123 | -2.4975 | -2.4982 |
| 0.0867 | 8.0 | 600 | 0.1523 | 0.3585 | -10.2741 | 0.8800 | 10.6326 | -142.0974 | -16.3365 | -2.4983 | -2.4990 |
| 0.104 | 8.3333 | 625 | 0.1527 | 0.3626 | -10.2951 | 0.8800 | 10.6578 | -142.3074 | -16.2951 | -2.4997 | -2.5004 |
| 0.0173 | 8.6667 | 650 | 0.1528 | 0.3582 | -10.3070 | 0.8800 | 10.6652 | -142.4260 | -16.3392 | -2.4996 | -2.5002 |
| 0.0693 | 9.0 | 675 | 0.1439 | 0.3656 | -10.3169 | 0.8900 | 10.6825 | -142.5250 | -16.2654 | -2.5006 | -2.5011 |
| 0.0693 | 9.3333 | 700 | 0.1481 | 0.3630 | -10.3099 | 0.8800 | 10.6729 | -142.4547 | -16.2912 | -2.5012 | -2.5017 |
| 0.0693 | 9.6667 | 725 | 0.1516 | 0.3579 | -10.3228 | 0.8800 | 10.6807 | -142.5841 | -16.3428 | -2.5002 | -2.5007 |
| 0.0867 | 10.0 | 750 | 0.1503 | 0.3607 | -10.3196 | 0.8800 | 10.6803 | -142.5521 | -16.3144 | -2.5003 | -2.5008 |
| 0.0866 | 10.3333 | 775 | 0.1518 | 0.3621 | -10.3166 | 0.8800 | 10.6787 | -142.5222 | -16.3006 | -2.5007 | -2.5013 |
| 0.0867 | 10.6667 | 800 | 0.1454 | 0.3657 | -10.3170 | 0.8900 | 10.6827 | -142.5259 | -16.2647 | -2.5001 | -2.5006 |
| 0.0693 | 11.0 | 825 | 0.1519 | 0.3644 | -10.3131 | 0.8800 | 10.6775 | -142.4875 | -16.2780 | -2.4996 | -2.5000 |
| 0.052 | 11.3333 | 850 | 0.1574 | 0.3597 | -10.3183 | 0.8800 | 10.6780 | -142.5394 | -16.3250 | -2.4999 | -2.5004 |
| 0.0693 | 11.6667 | 875 | 0.1462 | 0.3684 | -10.3191 | 0.8900 | 10.6875 | -142.5471 | -16.2377 | -2.4995 | -2.5001 |
| 0.0693 | 12.0 | 900 | 0.1497 | 0.3630 | -10.3167 | 0.8900 | 10.6796 | -142.5228 | -16.2920 | -2.4997 | -2.5003 |
| 0.0347 | 12.3333 | 925 | 0.1464 | 0.3663 | -10.3149 | 0.8900 | 10.6812 | -142.5056 | -16.2589 | -2.4997 | -2.5003 |
| 0.0693 | 12.6667 | 950 | 0.1464 | 0.3665 | -10.3164 | 0.8900 | 10.6829 | -142.5202 | -16.2565 | -2.4997 | -2.5003 |
| 0.052 | 13.0 | 975 | 0.1464 | 0.3665 | -10.3164 | 0.8900 | 10.6829 | -142.5202 | -16.2565 | -2.4997 | -2.5003 |
| 0.0693 | 13.3333 | 1000 | 0.1464 | 0.3665 | -10.3164 | 0.8900 | 10.6829 | -142.5202 | -16.2565 | -2.4997 | -2.5003 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1