
We divide our benchmark into two settings: open and closed.

This is the open benchmark of Generalist-Bench, where we release the full ground-truth annotations for all datasets. It allows researchers to train and evaluate their models with access to the answers.

If you wish to thoroughly evaluate your model's performance, please use the 👉 closed benchmark, which comes with detailed usage instructions.

Final results will be updated on the 🏆 Leaderboard.

✨✨✨ File Organization Structure

Here is the organization structure of the file system:

General-Bench
├── Image
│   ├── comprehension
│   │   ├── Bird-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   │       └── Acadian_Flycatcher_0070_29150.jpg
│   │   ├── Bottle-Anomaly-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   └── ...
│   └── generation
│       └── Layout-to-Face-Image-Generation
│           ├── annotation.json
│           ├── images
│           └── ...
├── Video
│   ├── comprehension
│   │   └── Human-Object-Interaction-Video-Captioning
│   │       ├── annotation.json
│   │       ├── videos
│   │       └── ...
│   └── generation
│       └── Scene-Image-to-Video-Generation
│           ├── annotation.json
│           ├── videos
│           └── ...
├── 3d
│   ├── comprehension
│   │   └── 3D-Furniture-Classification
│   │       ├── annotation.json
│   │       ├── pointclouds
│   │       └── ...
│   └── generation
│       └── Text-to-3D-Living-and-Arts-Point-Cloud-Generation
│           ├── annotation.json
│           ├── pointclouds
│           └── ...
├── Audio
│   ├── comprehension
│   │   └── Accent-Classification
│   │       ├── annotation.json
│   │       ├── audios
│   │       └── ...
│   └── generation
│       └── Video-To-Audio
│           ├── annotation.json
│           ├── audios
│           └── ...
└── NLP
    ├── History-Question-Answering
    │   └── annotation.json
    ├── Abstractive-Summarization
    │   └── annotation.json
    └── ...
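The layout above is regular enough to enumerate programmatically. Below is a minimal sketch, not part of the official release (the helper name `list_tasks` is ours), that walks the downloaded tree and collects every task folder containing an `annotation.json`:

```python
import os

def list_tasks(root):
    """Walk the General-Bench directory layout and collect
    (modality, category, task) triples for every folder that
    contains an annotation.json. NLP tasks have no
    comprehension/generation level, so their category is None."""
    tasks = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if "annotation.json" not in filenames:
            continue
        parts = os.path.relpath(dirpath, root).split(os.sep)
        if len(parts) == 3:    # e.g. Image/comprehension/Bird-Detection
            modality, category, task = parts
        elif len(parts) == 2:  # e.g. NLP/History-Question-Answering
            modality, task = parts
            category = None
        else:
            continue           # skip unexpected nesting depths
        tasks.append((modality, category, task))
    return sorted(tasks)
```

A quick sanity check over the extracted dataset (e.g. `list_tasks("General-Bench")`) should report one triple per task directory shown in the tree above.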

Figure: an illustrative example of the file formats.

Usage

Please download all the files in this repository. We also provide overview.json, an example illustrating the format of our dataset.

xxxx
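The per-task annotation schemas differ across tasks and modalities (see overview.json for a concrete example), so a generic loader is a safe starting point. The sketch below (the helper name `load_annotations` is our own) assumes only that each `annotation.json` is valid UTF-8 JSON:

```python
import json
from pathlib import Path

def load_annotations(task_dir):
    """Read a task's annotation.json and return the parsed object.
    The schema varies per task, so callers should inspect the
    returned structure (typically a dict or a list) themselves."""
    path = Path(task_dir) / "annotation.json"
    with path.open(encoding="utf-8") as f:
        return json.load(f)
```

For example, `load_annotations("General-Bench/Image/comprehension/Bird-Detection")` would return that task's parsed annotations.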


🌐🌐🌐 General-Bench

General-Bench, a companion massive multimodal benchmark dataset, encompasses a broader spectrum of skills, modalities, formats, and capabilities, including over 700 tasks and 325K instances.

Figure: Overview of General-Bench, which covers 145 skills for more than 700 tasks with over 325,800 samples under comprehension and generation categories in various modalities.

Figure: Distribution of various capabilities evaluated in General-Bench.

Figure: Distribution of various domains and disciplines covered by General-Bench.


🚩 Citation

If you find our benchmark useful in your research, please consider citing us:

@article{generalist2025,
  title={On Path to Multimodal Generalist: Levels and Benchmarks},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Weiming Wu and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Hanwang Zhang and Shuicheng Yan},
  journal={arXiv},
  year={2025}
}