---
datasets:
- liuhaotian/LLaVA-Instruct-150K
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
---

# Model Card: LLaVA_MORE-llama_3_1-8B-finetuning

`LLaVA-MORE` enhances the well-known LLaVA architecture by integrating LLaMA 3.1 as the language model. We are publicly releasing the stage-one and stage-two checkpoints for the first model with 8B parameters.

This repository hosts the stage-two (finetuning) weights of LLaVA-MORE LLaMA 3.1 8B, as described in [this paper](https://huggingface.co/papers/2503.15621).

For more information, visit our [LLaVA-MORE](https://github.com/aimagelab/LLaVA-MORE) repository.

## Inference
You can try LLaVA-MORE on the image-to-text task by cloning our repository and running the evaluation script below, passing the model checkpoint, an input image, and a text prompt (see the repository README for the full list of arguments):

```bash
python -u llava/eval/run_llava.py
```
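As an illustrative sketch of what happens under the hood: LLaVA-style models interleave an image placeholder token with the user's text query, and the vision encoder's output is substituted at that position before generation. The function below is a hypothetical, minimal illustration of that prompt convention (the `<image>` token and `build_prompt` name follow the common LLaVA convention, not an official LLaVA-MORE API):

```python
# Hypothetical sketch of how LLaVA-style prompts combine an image
# placeholder with the user query. Names here are illustrative and
# not part of the official LLaVA-MORE code.
IMAGE_TOKEN = "<image>"  # placeholder later replaced by visual features


def build_prompt(query: str) -> str:
    """Prepend the image placeholder to a user query, LLaVA-style."""
    return f"{IMAGE_TOKEN}\n{query}"


print(build_prompt("Describe this picture."))
```

In the actual pipeline, the tokenized placeholder is swapped for the projected CLIP visual embeddings before the language model (here, LLaMA 3.1) generates the answer.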

## Citation
If you make use of our work, please cite our paper:

```bibtex
@article{cocchi2025llava,
      title={{LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning}},
      author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Baraldi, Lorenzo and Cornia, Marcella and Cucchiara, Rita},
      journal={arXiv preprint arXiv:2503.15621},
      year={2025}
}
```