fede97 committed
Commit 1d7b4c5
1 Parent(s): b2a37f9

Update README.md

Files changed (1)
  1. README.md +35 -3
README.md CHANGED
---
library_name: transformers
license: apache-2.0
datasets:
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-text-to-text
---

# Model Card: LLaVA_MORE-llama_3_1-8B-S2-siglip-finetuning

`LLaVA-MORE` enhances the well-known LLaVA architecture by integrating LLaMA 3.1 as the language model. We are publicly releasing the checkpoints for stages one and two of the first model with 8B parameters.

This model space contains the stage-two (finetuning) weights of LLaVA-MORE LLaMA 3.1 8B.

For more information, visit our [LLaVA-MORE](https://github.com/aimagelab/LLaVA-MORE) repository.

## Inference
You can try LLaVA-MORE on the image-to-text task by cloning our repository and running the script below.
```bash
# --image-file and --query follow the upstream LLaVA run_llava.py interface;
# the values here are placeholders.
python -u llava/eval/run_llava.py \
    --model-path "aimagelab/LLaVA_MORE-llama_3_1-8B-S2-siglip-finetuning" \
    --image-file "path/to/your/image.jpg" \
    --query "Describe this image."
```
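
Since the card lists `library_name: transformers` with the `image-text-to-text` pipeline tag, the checkpoint may also be loadable through the generic `transformers` pipeline. The snippet below is a minimal sketch under that assumption, not a confirmed loading path; the image URL and prompt are illustrative placeholders, and the repository script above remains the reference approach.

```python
# Minimal sketch (not from the original card): loading via the generic
# transformers image-text-to-text pipeline, assuming this checkpoint is
# compatible with it. Image URL and prompt are placeholders.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="aimagelab/LLaVA_MORE-llama_3_1-8B-S2-siglip-finetuning",
    device_map="auto",
)

result = pipe(
    images="https://llava-vl.github.io/static/images/view.jpg",  # placeholder
    text="Describe this image.",
)
print(result[0]["generated_text"])
```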

## Citation
If you make use of our work, please cite our repo:

```bibtex
@misc{cocchi2024llavamore,
      title={{LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1}},
      author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
      url={https://github.com/aimagelab/LLaVA-MORE},
      year={2024}
}
```