Weiyun1025 committed on
Commit 1d5671b · verified · 1 Parent(s): e852082

Upload README.md with huggingface_hub
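
The commit message above is the default one emitted by the `huggingface_hub` file-upload API. A minimal sketch of how such a commit is typically produced; the `repo_id` below is a placeholder, not necessarily the repository this commit belongs to:

```python
from huggingface_hub import HfApi

api = HfApi()  # authenticates with the token stored by `huggingface-cli login`
api.upload_file(
    path_or_fileobj="README.md",        # local file to push
    path_in_repo="README.md",           # destination path inside the repo
    repo_id="OpenGVLab/InternVL3-78B",  # placeholder; substitute the actual repo
)
# upload_file creates a single commit whose default message is
# "Upload README.md with huggingface_hub", matching the commit shown here.
```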

Files changed (1)
  1. README.md +15 -17
README.md CHANGED
@@ -31,7 +31,7 @@ We introduce InternVL3, an advanced multimodal large language model (MLLM) serie
  Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
  Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefitting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/fMRWSzB8ysrafQ2XJW9WR.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/overall.png)
 
  ## InternVL3 Family
 
@@ -47,7 +47,7 @@ In the following table, we provide an overview of the InternVL3 series.
  | InternVL3-38B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-38B) |
  | InternVL3-78B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-78B) |
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/9wf54ERGoiM3-QICkj3Oc.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/overall-table.png)
 
  ## Model Architecture
 
@@ -104,39 +104,39 @@ In this work, we use the Best-of-N evaluation strategy and employ [VisualPRM-8B]
 
  ### Multimodal Reasoning and Mathematics
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/uVIhB9BKlirAc6zlQYI0a.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/reasoning.png)
 
  ### OCR, Chart, and Document Understanding
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/ivWJi3Rx_snJTfq--_9kD.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/ocr.png)
 
  ### Multi-Image & Real-World Comprehension
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/y1OnFvJxei4dd9ZzCZ6yo.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/multi-images.png)
 
  ### Comprehensive Multimodal & Hallucination Evaluation
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/XeBACd5_k_1lBbT70c6rp.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/comprehensive.png)
 
  ### Visual Grounding
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/LFBrfb6amBxL_WKOqs5gr.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/grounding.png)
 
  ### Multimodal Multilingual Understanding
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/1wc3dA3KwIeMWOVja2GF0.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/multilingual.png)
 
  ### Video Understanding
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/qU5-6VDYtFcXxSq2bNH7T.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/video.png)
 
  ### GUI Grounding
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/9Zjve-VEmxFw5V-BT-wRA.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/gui.png)
 
  ### Spatial Reasoning
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/J4AmWoDuJ6JwILdtsNJcN.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/vsi.png)
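
The hunk header above cites the Best-of-N evaluation strategy with VisualPRM-8B as the critic model: for each query, sample N candidate responses and keep the one the critic scores highest. A minimal sketch of that selection rule, where `generate` and `score` are hypothetical callables standing in for the policy model and the critic, not the repository's actual evaluation code:

```python
from typing import Callable

def best_of_n(generate: Callable[[str], str],
              score: Callable[[str], float],
              prompt: str, n: int = 8) -> str:
    """Sample n candidate responses and return the one the critic rates highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)
```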
 
  ## Evaluation on Language Capability
 
@@ -144,8 +144,7 @@ We compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained
  Benefitting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
  Please note that the evaluation scores of the Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/XHOC-GZO7qAtG8seR05ig.png)
-
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/text.png)
 
  ## Ablation Study
 
@@ -155,20 +154,19 @@ We conduct experiments on the InternVL2-8B model while keeping its architecture,
 
  The evaluation results in the figure below show that the model with native multimodal pre-training exhibits performance on most benchmarks comparable to the fully multi-stage-trained InternVL2-8B baseline. Furthermore, when followed by instruction tuning on higher-quality data, the model demonstrates further performance gains across the evaluated multimodal tasks. These findings underscore the efficiency of native multimodal pre-training in imparting powerful multimodal capabilities to MLLMs.
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/X4PC_JDyPGWJF6gYMPSH6.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/ablation-native.png)
 
  ### Mixed Preference Optimization
 
  As shown in the table below, models fine-tuned with MPO demonstrate superior reasoning performance across seven multimodal reasoning benchmarks compared to their counterparts without MPO. Specifically, InternVL3-78B and InternVL3-38B outperform their counterparts by 4.1 and 4.5 points, respectively. Notably, the training data used for MPO is a subset of that used for SFT, indicating that the performance improvements primarily stem from the training algorithm rather than the training data.
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/bZuOd8yy_sFHcrkYoRERG.png)
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/ablation-mpo.png)
 
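
For orientation, MPO as described in the InternVL reports combines a DPO-style preference term, a BCO-style absolute quality term, and a standard SFT generation term. The sketch below is a simplified illustration under that reading; the weights, names, and function shape are assumptions, not the released training code:

```python
import torch.nn.functional as F

def mpo_loss(pol_chosen, pol_rejected, ref_chosen, ref_rejected, chosen_nll,
             beta=0.1, w_pref=0.8, w_qual=0.2, w_gen=1.0):
    """Simplified MPO-style objective over summed response log-probs (assumed form)."""
    # Preference term (DPO): favor chosen over rejected, relative to the reference model.
    margin = beta * ((pol_chosen - ref_chosen) - (pol_rejected - ref_rejected))
    l_pref = -F.logsigmoid(margin).mean()
    # Quality term (BCO-style, reward shift omitted): judge each response in absolute terms.
    l_qual = (-F.logsigmoid(beta * (pol_chosen - ref_chosen))
              - F.logsigmoid(-beta * (pol_rejected - ref_rejected))).mean() / 2
    # Generation term: ordinary negative log-likelihood on the chosen response.
    l_gen = chosen_nll.mean()
    return w_pref * l_pref + w_qual * l_qual + w_gen * l_gen
```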
  ### Variable Visual Position Encoding
 
  As reported in the table below, the introduction of V2PE leads to significant performance gains across most evaluation metrics. In addition, our ablation studies, in which we vary the positional increment \\( \delta \\), reveal that even for tasks primarily involving conventional contexts, relatively small \\( \delta \\) values can achieve optimal performance. These findings provide important insights for future efforts aimed at refining position encoding strategies for visual tokens in MLLMs.
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/K6D0OP5ZiNLTJB51-HOWa.png)
-
+ ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/ablation-v2pe.png)
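
To make the role of the increment \\( \delta \\) concrete: under V2PE, text tokens advance the position index by 1 while visual tokens advance it by a smaller \\( \delta \\), shrinking the positional span consumed by images. A minimal sketch of that assignment rule, illustrative only and not the model's actual implementation:

```python
def v2pe_positions(is_visual_token: list[bool], delta: float = 0.25) -> list[float]:
    """Assign positions: +1 after each text token, +delta after each visual token."""
    positions, current = [], 0.0
    for is_visual in is_visual_token:
        positions.append(current)
        current += delta if is_visual else 1.0
    return positions

# One text token, four visual tokens, one text token:
print(v2pe_positions([False, True, True, True, True, False]))
# -> [0.0, 1.0, 1.25, 1.5, 1.75, 2.0]
```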
 
  ## Quick Start
 
 