arxiv:2410.18558

Infinity-MM: Scaling Multimodal Performance with Large-Scale and High-Quality Instruction Data

Published on Oct 24 · Submitted by ldwang on Oct 28
Abstract

Vision-Language Models (VLMs) have recently made significant progress, but the limited scale and quality of open-source instruction data hinder their performance compared to closed-source models. In this work, we address this limitation by introducing Infinity-MM, a large-scale multimodal instruction dataset with 40 million samples, enhanced through rigorous quality filtering and deduplication. We also propose a synthetic instruction generation method based on open-source VLMs, using detailed image annotations and diverse question generation. Using this data, we trained a 2-billion-parameter VLM, Aquila-VL-2B, achieving state-of-the-art (SOTA) performance for models of similar scale. This demonstrates that expanding instruction data and generating synthetic data can significantly improve the performance of open-source models.
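
The abstract outlines a two-stage synthetic pipeline (detailed image annotation, then diverse question generation with an open-source VLM) combined with quality filtering and deduplication. The paper page itself gives no code, so the sketch below is only an illustrative outline of that flow: `annotate_fn`, `question_fn`, `answer_fn`, and `quality_fn` are hypothetical callables standing in for whichever open-source VLM and filters the authors actually use, and exact-match hashing is an assumed deduplication strategy, not necessarily the paper's.

```python
import hashlib
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class InstructionSample:
    image_id: str
    question: str
    answer: str

def build_synthetic_instructions(
    image_ids: Iterable[str],
    annotate_fn: Callable[[str], str],                 # hypothetical: image -> detailed caption (VLM-backed)
    question_fn: Callable[[str], List[str]],           # hypothetical: caption -> diverse questions
    answer_fn: Callable[[str, str], str],              # hypothetical: (caption, question) -> answer
    quality_fn: Callable[[InstructionSample], bool],   # hypothetical quality filter
) -> List[InstructionSample]:
    """Illustrative two-stage generation with quality filtering and exact dedup."""
    seen_hashes = set()
    kept: List[InstructionSample] = []
    for image_id in image_ids:
        caption = annotate_fn(image_id)            # stage 1: detailed image annotation
        for question in question_fn(caption):      # stage 2: diverse question generation
            sample = InstructionSample(image_id, question, answer_fn(caption, question))
            if not quality_fn(sample):             # drop low-quality samples
                continue
            key = hashlib.sha256(f"{sample.question}|{sample.answer}".encode()).hexdigest()
            if key in seen_hashes:                 # exact-match deduplication (assumed strategy)
                continue
            seen_hashes.add(key)
            kept.append(sample)
    return kept

# Toy usage with stub callables; real runs would back these with an open-source VLM.
if __name__ == "__main__":
    samples = build_synthetic_instructions(
        image_ids=["img_0001"],
        annotate_fn=lambda img: f"A detailed caption for {img}.",
        question_fn=lambda cap: ["What is shown in the image?", "What is shown in the image?"],
        answer_fn=lambda cap, q: cap,
        quality_fn=lambda s: len(s.answer) > 0,
    )
    print(len(samples))  # 1 -> the duplicate question was removed
```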

