---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 10M<n<100M
task_categories:
- question-answering
dataset_info:
  features:
  - name: data_path
    sequence: string
  - name: generator
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: options
    sequence: string
  - name: metadata
    dtype: string
  splits:
  - name: dcs_sa
    num_bytes: 1201564580
    num_examples: 2310331
  - name: dcs_mc
    num_bytes: 1323422399
    num_examples: 2310331
  - name: dcm_sa_2_img
    num_bytes: 858391411
    num_examples: 1400000
  - name: dcm_mc_2_img
    num_bytes: 931146637
    num_examples: 1400000
  - name: dcm_sa_3_img
    num_bytes: 1168447812
    num_examples: 1400000
  - name: dcm_mc_3_img
    num_bytes: 1298813542
    num_examples: 1400000
  - name: dcm_sa_4_img
    num_bytes: 1436354373
    num_examples: 1400000
  - name: dcm_mc_4_img
    num_bytes: 1598496962
    num_examples: 1400000
  - name: vgs_sa
    num_bytes: 595577425
    num_examples: 1537630
  - name: vgs_mc
    num_bytes: 671343503
    num_examples: 1537630
  - name: vgm_sa_2_img
    num_bytes: 536078137
    num_examples: 1400000
  - name: vgm_mc_2_img
    num_bytes: 612895409
    num_examples: 1400000
  - name: vgm_sa_3_img
    num_bytes: 693450488
    num_examples: 1400000
  - name: vgm_mc_3_img
    num_bytes: 830159021
    num_examples: 1400000
  - name: vgm_sa_4_img
    num_bytes: 802710456
    num_examples: 1400000
  - name: vgm_mc_4_img
    num_bytes: 972149375
    num_examples: 1400000
  download_size: 5914822167
  dataset_size: 15531001530
configs:
- config_name: default
  data_files:
  - split: dcs_sa
    path: data/dcs_sa-*
  - split: dcs_mc
    path: data/dcs_mc-*
  - split: dcm_sa_2_img
    path: data/dcm_sa_2_img-*
  - split: dcm_mc_2_img
    path: data/dcm_mc_2_img-*
  - split: dcm_sa_3_img
    path: data/dcm_sa_3_img-*
  - split: dcm_mc_3_img
    path: data/dcm_mc_3_img-*
  - split: dcm_sa_4_img
    path: data/dcm_sa_4_img-*
  - split: dcm_mc_4_img
    path: data/dcm_mc_4_img-*
  - split: vgs_sa
    path: data/vgs_sa-*
  - split: vgs_mc
    path: data/vgs_mc-*
  - split: vgm_sa_2_img
    path: data/vgm_sa_2_img-*
  - split: vgm_mc_2_img
    path: data/vgm_mc_2_img-*
  - split: vgm_sa_3_img
    path: data/vgm_sa_3_img-*
  - split: vgm_mc_3_img
    path: data/vgm_mc_3_img-*
  - split: vgm_sa_4_img
    path: data/vgm_sa_4_img-*
  - split: vgm_mc_4_img
    path: data/vgm_mc_4_img-*
tags:
- multimodal
---

# ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models
ProVision is an extensible data generation engine that produces instruction data for large multimodal language models (MLMs).

In particular, it synthesizes instruction data via data generators (Python programs) and scene graphs rather than proprietary models. It also includes a scene graph generation pipeline built from various state-of-the-art models (e.g., an object detection model). Thus, one can generate instruction data for any given image by first generating its scene graph and then applying data generators.

ProVision supports generation of both single-image and multi-image instruction data. The engine can also be extended by adding new data generators.
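To make the data-generator idea concrete, here is a minimal, hypothetical sketch: a generator is a small Python program that reads a scene graph and emits a question-answer pair. The scene-graph schema and function name below are illustrative assumptions, not the actual ProVision API.

```python
# Minimal sketch of the data-generator idea; the scene-graph schema and
# function name are illustrative assumptions, not the actual ProVision API.
import random

def relation_exists_generator(scene_graph: dict) -> dict:
    """Turn one (subject, relation, object) triple into a short-answer QA pair."""
    if not scene_graph.get("relations"):
        return {}
    subj, rel, obj = random.choice(scene_graph["relations"])
    return {
        "question": f"Is the {subj} {rel} the {obj}?",
        "answer": "yes",
    }

# Hypothetical scene graph for a single image.
scene_graph = {
    "objects": ["man", "horse", "fence"],
    "relations": [("man", "riding", "horse"), ("horse", "behind", "fence")],
}
print(relation_exists_generator(scene_graph))
```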
You are currently viewing the ProVision-10M dataset.
## Dataset Details
### Dataset Sources
- Repository: https://github.com/JieyuZ2/ProVision
- Paper: https://arxiv.org/abs/2412.07012
- Blog:
- Source Data: Visual Genome/GQA and DataComp
## Uses
Users need to make their own assessment regarding any obligations or responsibilities under the corresponding licenses or terms and conditions pertaining to the original datasets and data. This repository is being released for research purposes only.
### Direct Use
ProVision-10M is designed to facilitate research in training multimodal language models.
### Out-of-Scope Use
ProVision-10M was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people's faces or other sensitive content), as well as any military application, is an inappropriate use of ProVision-10M.
## Dataset Creation
### Curation Rationale
ProVision-10M was created to demonstrate the potential of programmatically synthesizing instruction data for training multimodal language models.
### Source Data
The dataset is built upon two data sources:
- We use 74,289 images and scene graphs from Visual Genome (the GQA version).
- We use 126,106 images from DataComp.
## Dataset Summary
We do not release the images; please download them from their original sources (GQA/DataComp).
| Split | Size (# examples) | Format | Description |
|---|---|---|---|
| vgs_sa | 1537630 | short answer | single-image instruction data based on Visual Genome |
| vgs_mc | 1537630 | multiple choice | single-image instruction data based on Visual Genome |
| vgm_sa_2_img | 1400000 | short answer | 2-image instruction data based on Visual Genome |
| vgm_mc_2_img | 1400000 | multiple choice | 2-image instruction data based on Visual Genome |
| vgm_sa_3_img | 1400000 | short answer | 3-image instruction data based on Visual Genome |
| vgm_mc_3_img | 1400000 | multiple choice | 3-image instruction data based on Visual Genome |
| vgm_sa_4_img | 1400000 | short answer | 4-image instruction data based on Visual Genome |
| vgm_mc_4_img | 1400000 | multiple choice | 4-image instruction data based on Visual Genome |
| dcs_sa | 2294572 | short answer | single-image instruction data based on DataComp images |
| dcs_mc | 2294572 | multiple choice | single-image instruction data based on DataComp images |
| dcm_sa_2_img | 1400000 | short answer | 2-image instruction data based on DataComp images |
| dcm_mc_2_img | 1400000 | multiple choice | 2-image instruction data based on DataComp images |
| dcm_sa_3_img | 1400000 | short answer | 3-image instruction data based on DataComp images |
| dcm_mc_3_img | 1400000 | multiple choice | 3-image instruction data based on DataComp images |
| dcm_sa_4_img | 1400000 | short answer | 4-image instruction data based on DataComp images |
| dcm_mc_4_img | 1400000 | multiple choice | 4-image instruction data based on DataComp images |
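Each split can be loaded individually with the `datasets` library. The sketch below is a minimal example under stated assumptions: the repository id is a placeholder for the actual Hub id of this dataset, and the local image directory is an assumption, since the images themselves must be downloaded separately from GQA/DataComp as noted above.

```python
import os
from datasets import load_dataset

# Placeholder: replace with the actual Hub repository id of ProVision-10M.
REPO_ID = "<org>/ProVision-10M"

# Each split in the table above is exposed as a split of the default config.
ds = load_dataset(REPO_ID, split="vgs_mc")

example = ds[0]
print(example["question"])
print(example["options"])  # answer choices for multiple-choice (mc) splits
print(example["answer"])

# Images are not included: `data_path` lists the source image file(s),
# which must be resolved against your local GQA/DataComp download.
IMAGE_ROOT = "/path/to/images"  # assumption: local directory of downloaded images
image_files = [os.path.join(IMAGE_ROOT, p) for p in example["data_path"]]
```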
## License
We release ProVision-10M under a CC-BY-NC-4.0 license.
## Citation
@article{zhang2024provision,
  title={ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models},
  author={Zhang, Jieyu and Xue, Le and Song, Linxin and Wang, Jun and Huang, Weikai and Shu, Manli and Yan, An and Ma, Zixian and Niebles, Juan Carlos and Xiong, Caiming and others},
  journal={arXiv preprint arXiv:2412.07012},
  year={2024}
}