Upload README.md with huggingface_hub
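The commit title matches the default message produced by huggingface_hub's upload helpers. Below is a minimal sketch of such an upload; the repo id is a placeholder, and authentication is assumed to come from a prior `huggingface-cli login` or the `HF_TOKEN` environment variable.

```python
from huggingface_hub import HfApi

# Minimal sketch of pushing an updated model card to the Hub.
api = HfApi()  # token is picked up from the local login / HF_TOKEN
api.upload_file(
    path_or_fileobj="README.md",       # local file to upload
    path_in_repo="README.md",          # destination path inside the repo
    repo_id="your-org/your-model",     # placeholder: target repo on the Hub
    repo_type="model",
)
```

Leaving `commit_message` unset yields the default title shown above.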
README.md CHANGED
@@ -9,7 +9,7 @@ base_model:
 - Qwen/Qwen2.5-1.5B
 base_model_relation: merge
 datasets:
-- OpenGVLab/MMPR-v1.
+- OpenGVLab/MMPR-v1.2
 language:
 - multilingual
 tags:
@@ -31,7 +31,7 @@ tags:
 
 We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance.
 Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
-Additionally,
+Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
 
 
 
@@ -49,6 +49,8 @@ In the following table, we provide an overview of the InternVL3 series.
 | InternVL3-38B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-38B) |
 | InternVL3-78B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-78B) |
 
+
+
 ## Model Architecture
 
 As shown in the following figure, [InternVL3](https://internvl.github.io/blog/2025-04-11-InternVL-3/) retains the same model architecture as [InternVL 2.5](https://internvl.github.io/blog/2024-12-05-InternVL-2.5/) and its predecessors, InternVL 1.5 and 2.0, following the "ViT-MLP-LLM" paradigm. In this new version, we integrate a newly incrementally pre-trained InternViT with various pre-trained LLMs, including InternLM 3 and Qwen 2.5, using a randomly initialized MLP projector.
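For context on the "ViT-MLP-LLM" paradigm referenced in the hunk above, the wiring is: the pre-trained InternViT produces visual tokens, a randomly initialized MLP projector maps them into the LLM's embedding space, and the LLM consumes the projected tokens together with the text embeddings. The following is a minimal sketch of that composition; module names and dimensions are illustrative and are not the actual InternVL3 implementation.

```python
import torch
import torch.nn as nn

class VisionLanguageBridge(nn.Module):
    """Illustrative 'ViT-MLP-LLM' wiring: ViT features -> MLP projector -> LLM embedding space."""

    def __init__(self, vit_dim: int = 1024, llm_dim: int = 2048):
        super().__init__()
        # Randomly initialized MLP projector, as described in the model card.
        self.projector = nn.Sequential(
            nn.LayerNorm(vit_dim),
            nn.Linear(vit_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vit_tokens: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # vit_tokens:  [batch, num_visual_tokens, vit_dim] from the vision encoder
        # text_embeds: [batch, num_text_tokens, llm_dim] from the LLM's embedding table
        visual_embeds = self.projector(vit_tokens)               # -> [batch, num_visual_tokens, llm_dim]
        return torch.cat([visual_embeds, text_embeds], dim=1)    # sequence handed to the LLM

bridge = VisionLanguageBridge()
fused = bridge(torch.randn(1, 256, 1024), torch.randn(1, 32, 2048))
print(fused.shape)  # torch.Size([1, 288, 2048])
```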
@@ -104,35 +106,45 @@ In this work, we use the Best-of-N evaluation strategy and employ [VisualPRM-8B]
 
 ### Multimodal Reasoning and Mathematics
 
-
+
 
 ### OCR, Chart, and Document Understanding
 
+
 
 ### Multi-Image & Real-World Comprehension
 
+
 
 ### Comprehensive Multimodal & Hallucination Evaluation
 
+
 
 ### Visual Grounding
 
+
 
 ### Multimodal Multilingual Understanding
 
+
 
 ### Video Understanding
 
+
+
+### GUI Grounding
+
+
+
+### Spatial Reasoning
+
+
 
 ## Evaluation on Language Capability
 
+We compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3.
+Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
+Please note that the evaluation scores of the Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.
 
 
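The header of the last hunk references the Best-of-N evaluation strategy with VisualPRM-8B as the critic used to pick a response. Below is a minimal sketch of generic Best-of-N selection; the `generate` and `score` callables are placeholders and do not reflect the actual VisualPRM-8B interface.

```python
from typing import Callable, List

def best_of_n(
    question: str,
    generate: Callable[[str], str],       # placeholder: samples one candidate answer
    score: Callable[[str, str], float],   # placeholder: critic score for (question, answer)
    n: int = 8,
) -> str:
    """Sample n candidate answers and return the one the critic scores highest."""
    candidates: List[str] = [generate(question) for _ in range(n)]
    return max(candidates, key=lambda answer: score(question, answer))
```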