# sft-llava-1.5-7b_lora

This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.9404
- Bleu: 0.1802
- Rouge1: 0.4861
- Rouge2: 0.1709
- Rougel: 0.3580
- Bertscore Precision: 0.6578
- Bertscore Recall: 0.7479
- Bertscore F1: 0.6999
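The evaluation script itself is not published with this card. As a rough, hedged sketch of how caption-style metrics like these are typically computed, the Hugging Face `evaluate` library can be used as below; the predictions, references, and aggregation shown are placeholders and assumptions, not the exact procedure behind the numbers above.

```python
# Hedged sketch (assumption): metrics of this kind are commonly computed with
# the `evaluate` library; the exact script used for this card is not published,
# and these predictions/references are placeholders.
import evaluate

predictions = ["a dog runs across a grassy field"]   # model generations
references = [["a dog is running through a field"]]  # gold references (multi-ref format)

bleu = evaluate.load("bleu").compute(predictions=predictions, references=references)
rouge = evaluate.load("rouge").compute(predictions=predictions, references=references)
bertscore = evaluate.load("bertscore").compute(
    predictions=predictions, references=references, lang="en"
)

print("BLEU:", round(bleu["bleu"], 4))
print("ROUGE-1/2/L:",
      round(rouge["rouge1"], 4), round(rouge["rouge2"], 4), round(rouge["rougeL"], 4))
# BERTScore returns per-example lists; average them for corpus-level numbers.
n = len(predictions)
print("BERTScore P/R/F1:",
      round(sum(bertscore["precision"]) / n, 4),
      round(sum(bertscore["recall"]) / n, 4),
      round(sum(bertscore["f1"]) / n, 4))
```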
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
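The training script is not part of this card. As a rough illustration only, the settings above map onto `transformers.TrainingArguments` as sketched below; the output directory is hypothetical, and the PEFT/LoRA configuration actually used is unknown.

```python
# Rough sketch (assumption): the actual training script is not published.
# This only shows how the listed hyperparameters map onto TrainingArguments;
# the LoRA/PEFT config used for this adapter is unknown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sft-llava-1.5-7b_lora",  # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,                      # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                   # and epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=5.0,
)
```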
### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge1 | Rouge2 | Rougel | Bertscore Precision | Bertscore Recall | Bertscore F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| 5.7514 | 0.3101 | 200 | 5.6831 | 0.0772 | 0.2028 | 0.0717 | 0.1778 | 0.6381 | 0.7437 | 0.6869 |
| 2.9737 | 0.6202 | 400 | 2.9242 | 0.1580 | 0.4319 | 0.1445 | 0.3306 | 0.6578 | 0.7479 | 0.6999 |
| 2.6756 | 0.9302 | 600 | 2.6594 | 0.1839 | 0.4859 | 0.1759 | 0.3680 | 0.6381 | 0.7437 | 0.6869 |
| 2.18 | 1.2403 | 800 | 2.5783 | 0.1754 | 0.4864 | 0.1754 | 0.3775 | 0.6578 | 0.7479 | 0.6999 |
| 2.0957 | 1.5504 | 1000 | 2.5019 | 0.1849 | 0.4877 | 0.1850 | 0.3801 | 0.6578 | 0.7479 | 0.6999 |
| 2.0109 | 1.8605 | 1200 | 2.4393 | 0.1879 | 0.4911 | 0.1840 | 0.3859 | 0.6578 | 0.7479 | 0.6999 |
| 0.7656 | 2.1705 | 1400 | 2.9613 | 0.1808 | 0.4810 | 0.1719 | 0.3644 | 0.6578 | 0.7479 | 0.6999 |
| 0.7271 | 2.4806 | 1600 | 3.0544 | 0.1817 | 0.4795 | 0.1695 | 0.3629 | 0.6578 | 0.7479 | 0.6999 |
| 0.6746 | 2.7907 | 1800 | 3.0377 | 0.1754 | 0.4765 | 0.1639 | 0.3508 | 0.6578 | 0.7479 | 0.6999 |
| 0.1183 | 3.1008 | 2000 | 3.6408 | 0.1801 | 0.4821 | 0.1710 | 0.3636 | 0.6578 | 0.7479 | 0.6999 |
| 0.1123 | 3.4109 | 2200 | 3.6913 | 0.1765 | 0.4903 | 0.1712 | 0.3629 | 0.6578 | 0.7479 | 0.6999 |
| 0.1051 | 3.7209 | 2400 | 3.7181 | 0.1766 | 0.4884 | 0.1701 | 0.3618 | 0.6578 | 0.7479 | 0.6999 |
| 0.046 | 4.0310 | 2600 | 3.7719 | 0.1781 | 0.4849 | 0.1711 | 0.3598 | 0.6578 | 0.7479 | 0.6999 |
| 0.0444 | 4.3411 | 2800 | 3.9170 | 0.1801 | 0.4852 | 0.1719 | 0.3595 | 0.6578 | 0.7479 | 0.6999 |
| 0.0452 | 4.6512 | 3000 | 3.9377 | 0.1808 | 0.4872 | 0.1714 | 0.3604 | 0.6578 | 0.7479 | 0.6999 |
| 0.0449 | 4.9612 | 3200 | 3.9404 | 0.1802 | 0.4861 | 0.1709 | 0.3580 | 0.6578 | 0.7479 | 0.6999 |
### Framework versions

- Transformers 4.45.2
- PyTorch 2.2.0a0+81ea7a4
- Datasets 3.0.1
- Tokenizers 0.20.1
## Base model

This repository (`rohitsaxena/sft-llava-1.5-7b_lora`) is built on [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf).
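Since the repository name and base-model relationship indicate a LoRA adapter, a plausible way to load and run it is via PEFT on top of the base checkpoint. This is a hedged usage sketch, not an official snippet from this card: the adapter layout, the example image URL, and the generation settings are assumptions; the prompt follows the standard LLaVA-1.5 template.

```python
# Hedged sketch (assumption): this repo is treated as a PEFT/LoRA adapter over
# the base model; the card itself does not ship an official usage example.
import requests
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Load the base model, then attach the fine-tuned adapter weights.
base = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "rohitsaxena/sft-llava-1.5-7b_lora")
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

# Standard LLaVA-1.5 prompt template with a single image placeholder.
prompt = "USER: <image>\nDescribe this image. ASSISTANT:"
url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```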