---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 10M<n<100M
---

# ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models

ProVision is an extendable data generation engine that produces instruction data for large multimodal language models (MLMs). In particular, it synthesizes instruction data via data generators (Python programs) and scene graphs rather than proprietary models. It also includes a scene graph generation pipeline composed of various state-of-the-art models (e.g., an object detection model). Thus, one can generate instruction data for any given image by first generating its scene graph and then applying the data generators. ProVision supports generation of both single-image and multi-image instruction data, and the engine can be extended by adding new data generators.

**You are currently viewing the ProVision-10M dataset.**

![pipeline](pipeline.png)

## Dataset Details

### Dataset Sources

- **Repository:** https://github.com/JieyuZ2/ProVision
- **Paper:** https://arxiv.org/abs/2412.07012
- **Blog:**
- **Source Data:** [Visual Genome](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html)/[GQA](https://cs.stanford.edu/people/dorarad/gqa/about.html) and [DataComp](https://www.datacomp.ai/dcclip/index.html#home)

## Uses

Users need to make their own assessment regarding any obligations or responsibilities under the corresponding licenses or terms and conditions pertaining to the original datasets and data. This repository is being released for research purposes only.

### Direct Use

ProVision-10M is designed to facilitate research on training multimodal language models.

### Out-of-Scope Use

ProVision-10M was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people's faces and other sensitive content), as well as for military applications, are all inappropriate use cases of ProVision-10M.
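To make the generator-plus-scene-graph idea concrete, here is a minimal sketch of a data generator that turns object attributes in a scene graph into question-answer pairs. The scene-graph schema and the generator itself are simplified assumptions for illustration, not ProVision's actual code:

```python
# Illustrative sketch only: the scene-graph schema and this generator are
# hypothetical simplifications, not ProVision's actual implementation.

def attribute_qa_generator(scene_graph):
    """Yield (question, answer) pairs from object attributes in a scene graph."""
    qa_pairs = []
    for obj in scene_graph["objects"]:
        for attr in obj.get("attributes", []):
            question = f"What is an attribute of the {obj['name']} in the image?"
            qa_pairs.append((question, attr))
    return qa_pairs

# Toy scene graph for a single image
scene_graph = {
    "objects": [
        {"name": "car", "attributes": ["red"]},
        {"name": "tree", "attributes": ["tall", "green"]},
    ]
}

for question, answer in attribute_qa_generator(scene_graph):
    print(question, "->", answer)
```

Because generators are plain Python programs operating on structured scene graphs, adding a new question type amounts to adding a new generator function.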
## Dataset Creation

### Curation Rationale

ProVision-10M was created to demonstrate the potential of programmatically synthesizing instruction data for training multimodal language models.

### Source Data

The dataset is built upon two data sources:

- 74,289 images and scene graphs from Visual Genome (the GQA version)
- 126,106 images from DataComp

### Dataset Summary

**We do not release the images; please download the images from their original sources (GQA/DataComp).**

| Split | Size | Format | Description |
| :----------- | :-------- | :-------------- | :--------------------------------------------------- |
| vgs_sa | 1,537,630 | short answer | single-image instruction data based on Visual Genome |
| vgs_mc | 1,537,630 | multiple choice | single-image instruction data based on Visual Genome |
| vgm_sa_2_img | 1,400,000 | short answer | 2-image instruction data based on Visual Genome |
| vgm_mc_2_img | 1,400,000 | multiple choice | 2-image instruction data based on Visual Genome |
| vgm_sa_3_img | 1,400,000 | short answer | 3-image instruction data based on Visual Genome |
| vgm_mc_3_img | 1,400,000 | multiple choice | 3-image instruction data based on Visual Genome |
| vgm_sa_4_img | 1,400,000 | short answer | 4-image instruction data based on Visual Genome |
| vgm_mc_4_img | 1,400,000 | multiple choice | 4-image instruction data based on Visual Genome |
| dcs_sa | 2,294,572 | short answer | single-image instruction data based on DataComp images |
| dcs_mc | 2,294,572 | multiple choice | single-image instruction data based on DataComp images |
| dcm_sa_2_img | 1,400,000 | short answer | 2-image instruction data based on DataComp images |
| dcm_mc_2_img | 1,400,000 | multiple choice | 2-image instruction data based on DataComp images |
| dcm_sa_3_img | 1,400,000 | short answer | 3-image instruction data based on DataComp images |
| dcm_mc_3_img | 1,400,000 | multiple choice | 3-image instruction data based on DataComp images |
| dcm_sa_4_img | 1,400,000 | short answer | 4-image instruction data based on DataComp images |
| dcm_mc_4_img | 1,400,000 | multiple choice | 4-image instruction data based on DataComp images |

## License

We release ProVision-10M under a CC-BY-NC-4.0 license.

## Citation

```
@article{zhang2024provision,
  title={ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models},
  author={Zhang, Jieyu and Xue, Le and Song, Linxin and Wang, Jun and Huang, Weikai and Shu, Manli and Yan, An and Ma, Zixian and Niebles, Juan Carlos and Xiong, Caiming and others},
  journal={arXiv preprint arXiv:2412.07012},
  year={2024}
}
```
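The `sa` and `mc` splits in the table above differ only in answer format: conceptually, a multiple-choice item wraps a short-answer item with distractor options. The sketch below illustrates that relationship; the field names and formatting are assumptions for illustration, not the dataset's actual schema:

```python
# Hypothetical illustration of short-answer vs. multiple-choice formatting.
# Field names and option layout are assumptions, not the dataset's actual schema.

def to_short_answer(question, answer):
    """Format a QA pair as a short-answer instruction record."""
    return {"question": question, "answer": answer}

def to_multiple_choice(question, answer, distractors):
    """Format the same QA pair as a multiple-choice record with lettered options."""
    options = sorted([answer] + distractors)  # deterministic option order
    labeled = {letter: opt for letter, opt in zip("ABCD", options)}
    correct = next(l for l, opt in labeled.items() if opt == answer)
    prompt = question + "\n" + "\n".join(f"{l}. {opt}" for l, opt in labeled.items())
    return {"question": prompt, "answer": correct}

sa = to_short_answer("What color is the car?", "red")
mc = to_multiple_choice("What color is the car?", "red", ["blue", "green", "white"])
print(sa)
print(mc["question"])
print(mc["answer"])
```

This is why the paired `sa`/`mc` splits have identical sizes: each underlying question is emitted once per format.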