AI & ML interests: Computer Vision
Organization Card
OpenGVLab
Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on vision-centric AI research. The "GV" in our name stands for General Vision: our goal is a general understanding of vision, so that little effort is needed to adapt to new vision-based tasks.
Models
- InternVL: a pioneering open-source alternative to GPT-4V (a minimal loading sketch follows this list).
- InternImage: a large-scale vision foundation model built on deformable convolutions.
- InternVideo: large-scale video foundation models for multimodal understanding.
- VideoChat: an end-to-end chat assistant for video comprehension.
- All-Seeing-Project: towards panoptic visual recognition and understanding of the open world.
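
These checkpoints are published on the Hugging Face Hub under the OpenGVLab namespace. The snippet below is a minimal sketch, assuming the standard transformers remote-code loading pattern; the exact image preprocessing and chat API differ per model, so treat each model card as the authoritative reference.

```python
# Minimal sketch: loading an InternVL checkpoint from the Hugging Face Hub.
# Assumes the usual transformers remote-code loading pattern; consult the
# individual model card for the exact image preprocessing and chat API.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/InternVL2-8B-MPO"  # any InternVL checkpoint listed on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # bf16 keeps the memory footprint manageable
    trust_remote_code=True,       # InternVL ships custom modeling code
).eval()
```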
Datasets
- ShareGPT4o: a large-scale resource, planned for open-source release, containing 200K meticulously annotated images, 10K videos with highly descriptive captions, and 10K audio files with detailed descriptions.
- InternVid: a large-scale video-text dataset for multimodal understanding and generation.
- MMPR: a high-quality, large-scale multimodal preference dataset (a loading sketch follows this list).
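
These datasets are hosted on the Hub and can be inspected with the datasets library. The sketch below assumes a default "train" split and streaming access; the actual configurations and schema are described on each dataset card.

```python
# Minimal sketch: streaming a few records from the MMPR preference dataset.
# The split name and schema are assumptions; see the dataset card for details.
from datasets import load_dataset

mmpr = load_dataset("OpenGVLab/MMPR", split="train", streaming=True)
for example in mmpr.take(3):  # peek at a handful of preference records
    print(example)
```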
Benchmarks
- MVBench: a comprehensive benchmark for multimodal video understanding.
- CRPE: a benchmark covering all elements of the relation triplets (subject, predicate, object), providing a systematic platform for the evaluation of relation comprehension ability.
- MM-NIAH: a comprehensive benchmark for the comprehension of long multimodal documents.
- GMAI-MMBench: a comprehensive multimodal evaluation benchmark towards general medical AI.
Collections (17)
Enhancing the Reasoning Ability of MLLMs via Mixed Preference Optimization
- Paper: Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization (arXiv:2411.10442)
- Model: OpenGVLab/InternVL2-8B-MPO (Image-Text-to-Text)
- Dataset: OpenGVLab/MMPR
- Model: OpenGVLab/InternVL2_5-1B-MPO (Image-Text-to-Text)
Spaces (10)
Models (131)
- OpenGVLab/HoVLE (Image-Text-to-Text)
- OpenGVLab/HoVLE-HD (Image-Text-to-Text)
- OpenGVLab/InternVL2_5-38B-MPO-AWQ (Image-Text-to-Text)
- OpenGVLab/InternVL2_5-26B-MPO-AWQ (Image-Text-to-Text)
- OpenGVLab/InternVL2_5-8B-MPO-AWQ (Image-Text-to-Text)
- OpenGVLab/InternVL2_5-4B-MPO-AWQ (Image-Text-to-Text)
- OpenGVLab/InternVL2_5-78B-MPO (Image-Text-to-Text)
- OpenGVLab/InternVL2_5-38B-MPO (Image-Text-to-Text)
- OpenGVLab/InternVL2_5-26B-MPO (Image-Text-to-Text)
- OpenGVLab/InternVL2_5-8B-MPO (Image-Text-to-Text)
Datasets (30)
- OpenGVLab/MMPR-v1.1
- OpenGVLab/MMPR
- OpenGVLab/GMAI-MMBench
- OpenGVLab/V2PE-Data
- OpenGVLab/InternVL-Domain-Adaptation-Data
- OpenGVLab/GUI-Odyssey
- OpenGVLab/OmniCorpus-YT
- OpenGVLab/OmniCorpus-CC-210M
- OpenGVLab/OmniCorpus-CC
- OpenGVLab/MVBench