---
task_categories:
- image-text-to-text
license: mit
---

## Introduction

This repository contains the data for [Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources](https://huggingface.co/papers/2504.00595).

Project page: https://victorwz.github.io/Open-Qwen2VL

Code: https://github.com/Victorwz/Open-Qwen2VL

## Dataset
- ccs_webdataset: CC3M-CC12M-SBU filtered by CLIP; we directly download the webdataset from the [released curated subset of BLIP-1](https://github.com/salesforce/BLIP).
- datacomp_medium_dfn_webdataset: DataComp-Medium-128M filtered by DFN; we select this subset based on the image uids released by DFN.
- datacomp_medium_mlm_filter_su_85_union_dfn_webdataset: the union of DataComp-Medium-128M filtered by DFN and DataComp-Medium-128M filtered by MLM-Filter, using the semantic understanding metric with a threshold of 85. A loading sketch for these webdataset subsets follows below.

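To inspect one of these subsets locally, the shards can be fetched with `huggingface_hub` and streamed with the `webdataset` library. The sketch below is a minimal example, not the project's official loader; the repo id `weizhiwang/Open-Qwen2VL-Data`, the `ccs_webdataset/` shard directory, and the `jpg`/`txt` sample keys are assumptions based on common webdataset conventions, so adjust them to the actual file layout of this repository.

```python
import glob

import webdataset as wds  # pip install webdataset
from huggingface_hub import snapshot_download

# Download only the CLIP-filtered CC3M-CC12M-SBU shards; the repo id and
# subdirectory name are assumptions -- verify against this repo's file list.
local_dir = snapshot_download(
    repo_id="weizhiwang/Open-Qwen2VL-Data",  # assumed dataset repo id
    repo_type="dataset",
    allow_patterns=["ccs_webdataset/*.tar"],  # assumed shard location
)

# Stream (image, caption) samples from the local shards. The "jpg" and "txt"
# keys follow common webdataset conventions and may differ in this release.
shards = sorted(glob.glob(f"{local_dir}/ccs_webdataset/*.tar"))
pipeline = wds.WebDataset(shards).decode("pil").to_tuple("jpg", "txt")

for image, caption in pipeline:
    print(image.size, caption[:80])
    break  # inspect just the first sample
```

The same pattern applies to the two DataComp-Medium subsets by swapping in their directory names.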


## Acknowledgement
This work was partially supported by the BioPACIFIC Materials Innovation Platform of the National Science Foundation under Award No. DMR-1933487.


## Citation
```bibtex
@article{Open-Qwen2VL,
    title={Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources},
    author={Wang, Weizhi and Tian, Yu and Yang, Linjie and Wang, Heng and Yan, Xifeng},
    journal={arXiv preprint arXiv:2504.00595},
    year={2025}
}
```