---
license: apache-2.0
datasets:
- lmms-lab/LLaVA-OneVision-Data
language:
- en
- zh
metrics:
- accuracy
library_name: transformers
tags:
- multimodal
---

# LLaVA-OneVision

![banner](https://i.postimg.cc/pL17YtG4/WX20240508-220230-2x.png)

Play with the model on the [LLaVA OneVision Chat](https://llava-onevision.lmms-lab.com/).

## Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Training](#training)
4. [Citation](#citation)

## Model Summary

The LLaVA-OneVision models are 0.5B/7B/72B-parameter multimodal models trained on [LLaVA-OneVision](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), built on the Qwen2 language model with a context window of 32K tokens.

- **Repository:** [LLaVA-VL/LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT?tab=readme-ov-file)
- **Project Website:** [llava-onevision.lmms-lab.com](https://llava-onevision.lmms-lab.com/)
- **Paper:** [LLaVA-OneVision](https://arxiv.org/abs/2408.03326)
- **Point of Contact:** [Bo Li](mailto:[email protected])
- **Languages:** English, Chinese

## Use

### Intended use

The model was trained on the [LLaVA-OneVision Dataset](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data) and can interact with single images, multi-image inputs, and videos (see the sketches after the single-image example below).

**Feel free to share your generations in the Community tab!**

### Generation

```python
# pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
from llava.model.builder import load_pretrained_model
from llava.mm_utils import process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates

from PIL import Image
import requests
import copy
import torch
import warnings

warnings.filterwarnings("ignore")

pretrained = "lmms-lab/llava-onevision-qwen2-0.5b-si"
model_name = "llava_qwen"
device = "cuda"
device_map = "auto"
# Pass any other llava_model_args you need as keyword arguments.
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map)

model.eval()

# Load an example image and preprocess it into model-ready tensors.
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [_image.to(dtype=torch.float16, device=device) for _image in image_tensor]

# Build the prompt with the chat template; make sure you use the correct
# template for each model ("qwen_1_5" for the Qwen2-based checkpoints).
conv_template = "qwen_1_5"
question = DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?"
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()

input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
image_sizes = [image.size]

# Greedy decoding.
cont = model.generate(
    input_ids,
    images=image_tensor,
    image_sizes=image_sizes,
    do_sample=False,
    temperature=0,
    max_new_tokens=4096,
)
text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
print(text_outputs)
```
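
The intended-use section mentions multi-image inputs as well. Below is a minimal multi-image sketch that reuses the objects from the example above; it assumes `process_images` accepts a list of PIL images and that the prompt carries one `DEFAULT_IMAGE_TOKEN` per image, which matches the LLaVA-NeXT examples at the time of writing but should be checked against the repository.

```python
# Minimal multi-image sketch (assumptions noted above). Reuses tokenizer,
# model, image_processor, conv_template, and device from the previous example.
# The same image is loaded twice purely for illustration.
urls = [
    "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true",
] * 2
images = [Image.open(requests.get(u, stream=True).raw) for u in urls]

image_tensors = process_images(images, image_processor, model.config)
image_tensors = [t.to(dtype=torch.float16, device=device) for t in image_tensors]
image_sizes = [img.size for img in images]

# One image token per image, followed by the text question.
question = (DEFAULT_IMAGE_TOKEN + "\n") * len(images) + "Are these two images identical?"
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
input_ids = tokenizer_image_token(conv.get_prompt(), tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)

cont = model.generate(
    input_ids,
    images=image_tensors,
    image_sizes=image_sizes,
    do_sample=False,
    temperature=0,
    max_new_tokens=256,
)
print(tokenizer.batch_decode(cont, skip_special_tokens=True))
```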
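
For video, the LLaVA-NeXT demos sample frames, preprocess them with the image processor, and pass `modalities=["video"]` to `generate`. The sketch below follows that pattern; the `modalities` keyword, the frame-sampling helper, and the local `sample_video.mp4` path are assumptions to verify against the repository's current demo code.

```python
# Hedged video sketch. Assumes decord is installed, `generate` accepts a
# `modalities` keyword as in the LLaVA-NeXT video demos, and a local file
# "sample_video.mp4" exists (hypothetical). Reuses objects from above.
import numpy as np
from decord import VideoReader, cpu

def sample_frames(video_path, num_frames=16):
    """Uniformly sample num_frames RGB frames as a (N, H, W, 3) uint8 array."""
    vr = VideoReader(video_path, ctx=cpu(0))
    idx = np.linspace(0, len(vr) - 1, num_frames, dtype=int).tolist()
    return vr.get_batch(idx).asnumpy()

video_frames = sample_frames("sample_video.mp4")
video_tensor = image_processor.preprocess(video_frames, return_tensors="pt")["pixel_values"]
video_tensor = video_tensor.to(dtype=torch.float16, device=device)

# In the video code path a single image token stands in for the whole clip.
question = DEFAULT_IMAGE_TOKEN + "\nDescribe what happens in this video."
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
input_ids = tokenizer_image_token(conv.get_prompt(), tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)

cont = model.generate(
    input_ids,
    images=[video_tensor],
    modalities=["video"],
    do_sample=False,
    temperature=0,
    max_new_tokens=256,
)
print(tokenizer.batch_decode(cont, skip_special_tokens=True))
```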

## Training

### Model

- **Architecture:** SO400M + Qwen2 (see the config sketch after this list)
- **Pretraining Stage:** LCS-558K, 1 epoch, projector
- **Mid Stage:** a mixture of 4.7M high-quality synthetic data, 1 epoch, full model
- **Final-Image Stage:** a mixture of 3.6M single-image data, 1 epoch, full model
- **OneVision Stage:** a mixture of 1.6M single-image/multi-image/video data, 1 epoch, full model
- **Precision:** bfloat16
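
As a quick sanity check on the architecture listed above, you can inspect the checkpoint's `config.json` directly. The specific keys read below (`mm_vision_tower`, `hidden_size`) are assumptions about the LLaVA-NeXT config format; print the full dictionary to see what the checkpoint actually records.

```python
# Hedged sketch: download and inspect the checkpoint config.
import json
from huggingface_hub import hf_hub_download

cfg_path = hf_hub_download("lmms-lab/llava-onevision-qwen2-0.5b-si", "config.json")
with open(cfg_path) as f:
    cfg = json.load(f)

print(cfg.get("mm_vision_tower"))  # expected to name the SigLIP SO400M vision tower (assumption)
print(cfg.get("hidden_size"))      # expected to match the Qwen2-0.5B hidden size (assumption)
```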

### Hardware & Software

- **GPUs:** 256 × Nvidia Tesla A100 (for the whole model series training)
- **Orchestration:** [Hugging Face Trainer](https://huggingface.co/docs/transformers/main_classes/trainer)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)

## Citation

```bibtex
@article{li2024llavaonevision,
  title={LLaVA-OneVision: Easy Visual Task Transfer},
  author={Li, Bo and Zhang, Yuanhan and Guo, Dong and Li, Feng and Zhang, Hao and Zhang, Kaichen and Li, Yanwei and Liu, Ziwei and Li, Chunyuan},
  journal={arXiv preprint arXiv:2408.03326},
  year={2024}
}
```