# AA_preference_random_0_90
This model is a fine-tuned version of llava-hf/llava-v1.6-mistral-7b-hf on the AA_preference_random_0_90 dataset. It achieves the following results on the evaluation set:
- Loss: 0.5266
- Rewards/chosen: 0.6411
- Rewards/rejected: -1.9030
- Rewards/accuracies: 0.7986
- Rewards/margins: 2.5441
- Logps/rejected: -230.4333
- Logps/chosen: -239.3183
- Logits/rejected: -2.0706
- Logits/chosen: -2.1025
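The snippet below is a minimal inference sketch using the Transformers LLaVA-NeXT classes documented for the base model. It assumes the fine-tuned checkpoint is published as htlou/mm-interp-AA_preference_random_0_90 and uses the base model's chat format; the image URL and question are placeholders.

```python
import torch
import requests
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

# Assumed repository id for the fine-tuned checkpoint.
model_id = "htlou/mm-interp-AA_preference_random_0_90"

processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Placeholder image and question.
url = "https://github.com/haotian-liu/LLaVA/blob/main/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)

# Mistral-style prompt format used by the base llava-v1.6-mistral-7b-hf model.
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```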
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3.0
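As a point of reference, the list above maps directly onto `transformers.TrainingArguments`. This is a sketch, not the original training script: the multi-GPU launch across 8 devices is handled by the launcher (e.g. `torchrun` or `accelerate`), and `output_dir` and `bf16` are assumptions not stated in this card.

```python
from transformers import TrainingArguments

# Per-device batch size 8 on 8 GPUs with 4 gradient-accumulation steps
# gives the effective train batch size of 256 reported above.
training_args = TrainingArguments(
    output_dir="AA_preference_random_0_90",  # hypothetical output path
    learning_rate=1e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=3.0,
    bf16=True,  # assumption: precision is not stated in this card
    # adam_beta1/adam_beta2/adam_epsilon defaults (0.9, 0.999, 1e-08)
    # already match the optimizer settings listed above.
)
```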
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---------------|-------|------|-----------------|----------------|------------------|--------------------|-----------------|----------------|--------------|-----------------|---------------|
| 0.635  | 0.4158 | 50  | 0.5987 | 0.8451  | -0.0388 | 0.7014 | 0.8840 | -211.7915 | -237.2780 | -2.3908 | -2.3899 |
| 0.4933 | 0.8316 | 100 | 0.5285 | -0.2263 | -1.8151 | 0.7523 | 1.5888 | -229.5545 | -247.9923 | -1.9128 | -1.9530 |
| 0.2495 | 1.2474 | 150 | 0.5427 | 0.5572  | -1.4201 | 0.7593 | 1.9773 | -225.6041 | -240.1570 | -2.0983 | -2.1232 |
| 0.2753 | 1.6632 | 200 | 0.5260 | 0.5776  | -1.6735 | 0.7870 | 2.2511 | -228.1382 | -239.9529 | -1.9752 | -2.0068 |
| 0.1584 | 2.0790 | 250 | 0.5118 | 0.5255  | -1.9057 | 0.7940 | 2.4312 | -230.4605 | -240.4746 | -2.0354 | -2.0689 |
| 0.1572 | 2.4948 | 300 | 0.5261 | 0.7582  | -1.7260 | 0.7986 | 2.4842 | -228.6629 | -238.1469 | -2.0616 | -2.0941 |
| 0.1557 | 2.9106 | 350 | 0.5265 | 0.6414  | -1.9061 | 0.7986 | 2.5475 | -230.4645 | -239.3154 | -2.0706 | -2.1026 |
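The reward columns obey a simple consistency check: the margin is the chosen reward minus the rejected reward (bookkeeping typical of DPO-style preference training, which the metric names suggest but the card does not state). Using the final row:

$$
\text{Rewards/margins} = \text{Rewards/chosen} - \text{Rewards/rejected} = 0.6414 - (-1.9061) = 2.5475.
$$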
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.3
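A quick way to verify a local environment against these pins is to compare installed versions at runtime (a sketch; note that the torch version string includes the CUDA tag, e.g. 2.4.0+cu121):

```python
import transformers, torch, datasets, tokenizers

# Versions pinned in the "Framework versions" list above.
expected = {
    "transformers": "4.45.2",
    "torch": "2.4.0+cu121",
    "datasets": "2.21.0",
    "tokenizers": "0.20.3",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, version in expected.items():
    status = "OK" if installed[name] == version else f"got {installed[name]}"
    print(f"{name}: expected {version} -> {status}")
```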