Update README.md
<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png' width=100px>

<h1 align="center"><b>On Path to Multimodal Generalist: Levels and Benchmarks</b></h1>
<p align="center">
<a href="https://generalist.top/">[Project]</a>
<a href="https://level.generalist.top">[Leaderboard]</a>
<a href="https://xxxxx">[Paper]</a>
<a href="https://huggingface.co/General-Level">[Dataset-HF]</a>
<a href="https://github.com/path2generalist/GeneralBench">[Dataset-Github]</a>
</p>

</div>

---
We divide our benchmark into two settings: **`open`** and **`closed`**.

This is the **`open benchmark`** of Generalist-Bench, where we release the full ground-truth annotations for all datasets.
It allows researchers to train and evaluate their models with access to the answers.

If you wish to thoroughly evaluate your model's performance, please use the
[closed benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Closeset), which comes with detailed usage instructions.

Final results will be updated on the [Leaderboard](https://level.generalist.top).

---

## Table of Contents

- [File Organization Structure](#filestructure)
- [Usage](#usage)
- [General-Bench](#bench)
- [Capabilities and Domains Distribution](#distribution)
- [Image Task Taxonomy](#imageTaxonomy)
- [Video Task Taxonomy](#videoTaxonomy)
- [Audio Task Taxonomy](#audioTaxonomy)
- [3D Task Taxonomy](#3dTaxonomy)
- [Language Task Taxonomy](#languageTaxonomy)

---

# **File Organization Structure**<a name="filestructure" />

Here is the organization structure of the file system:

```
General-Bench
├── Image
│   ├── comprehension
│   │   ├── Bird-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   │       └── Acadian_Flycatcher_0070_29150.jpg
│   │   ├── Bottle-Anomaly-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   └── ...
│   └── generation
│       └── Layout-to-Face-Image-Generation
│           ├── annotation.json
│           └── images
│               └── ...
├── Video
│   ├── comprehension
│   │   └── Human-Object-Interaction-Video-Captioning
│   │       ├── annotation.json
│   │       └── videos
│   │           └── ...
│   └── generation
│       └── Scene-Image-to-Video-Generation
│           ├── annotation.json
│           └── videos
│               └── ...
├── 3d
│   ├── comprehension
│   │   └── 3D-Furniture-Classification
│   │       ├── annotation.json
│   │       └── pointclouds
│   │           └── ...
│   └── generation
│       └── Text-to-3D-Living-and-Arts-Point-Cloud-Generation
│           ├── annotation.json
│           └── pointclouds
│               └── ...
├── Audio
│   ├── comprehension
│   │   └── Accent-Classification
│   │       ├── annotation.json
│   │       └── audios
│   │           └── ...
│   └── generation
│       └── Video-To-Audio
│           ├── annotation.json
│           └── audios
│               └── ...
└── NLP
    ├── History-Question-Answering
    │   └── annotation.json
    ├── Abstractive-Summarization
    │   └── annotation.json
    └── ...
```
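To make the layout concrete, the small helper below walks a local copy of this tree and lists every task by locating its `annotation.json`. It is only an illustrative sketch, not part of the release: the function name `index_tasks` is hypothetical, and it assumes exactly the structure shown above, with NLP tasks nested one level shallower than the other modalities.

```python
from pathlib import Path

def index_tasks(root):
    """List (modality, split, task) triples found under `root`.

    Assumes the General-Bench layout above: most modalities use
    <Modality>/<comprehension|generation>/<Task>/annotation.json,
    while NLP uses NLP/<Task>/annotation.json (no split level).
    """
    triples = []
    for ann in Path(root).rglob("annotation.json"):
        parts = ann.relative_to(root).parts
        if len(parts) == 4:    # e.g. Image/comprehension/Bird-Detection/annotation.json
            triples.append((parts[0], parts[1], parts[2]))
        elif len(parts) == 3:  # e.g. NLP/History-Question-Answering/annotation.json
            triples.append((parts[0], None, parts[1]))
    return sorted(triples, key=lambda t: (t[0], t[1] or "", t[2]))
```

For instance, `index_tasks("General-Bench")` would include `("Image", "comprehension", "Bird-Detection")` among its entries.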

An illustrative example of file formats:

# Usage<a name="usage" />

Please download all the files in this repository. We also provide `overview.json`, which is an example of the format of our dataset.

xxxx
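As a minimal sketch of how one might read a single task's annotations once the files are downloaded (the helper name `load_annotation` is illustrative; see `overview.json` for the actual record schema):

```python
import json
from pathlib import Path

def load_annotation(task_dir):
    """Read the annotation.json inside one task directory, e.g.
    General-Bench/Image/comprehension/Bird-Detection."""
    with open(Path(task_dir) / "annotation.json", encoding="utf-8") as f:
        return json.load(f)
```

Media files referenced by the annotations (images, videos, audios, point clouds) then live in the sibling subfolder of the same task directory.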

---

# **General-Bench**<a name="bench" />

General-Bench is a companion massive multimodal benchmark dataset that encompasses a broader spectrum of skills, modalities, formats, and capabilities, including over **`700`** tasks and **`325K`** instances.

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/d4TIWw3rlWuxpBCEpHYJB.jpeg'>
<p>Overview of General-Bench, which covers 145 skills for more than 700 tasks with over 325,800 samples under comprehension and generation categories in various modalities.</p>
</div>

### Capabilities and Domains Distribution<a name="distribution" />

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/fF3iH95B3QEBvJYwqzZVG.png'>
<p>Distribution of various capabilities evaluated in General-Bench.</p>
</div>

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/wQvllVeK-KC3Edp8Zjh-V.png'>
<p>Distribution of various domains and disciplines covered by General-Bench.</p>
</div>

## Image Task Taxonomy<a name="imageTaxonomy" />

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/2QYihQRhZ5C9K5IbukY7R.png'>
<p>Taxonomy and hierarchy of data in terms of Image modality.</p>
</div>

## Video Task Taxonomy<a name="videoTaxonomy" />

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/A7PwfW5gXzstkDH49yIG5.png'>
<p>Taxonomy and hierarchy of data in terms of Video modality.</p>
</div>

## Audio Task Taxonomy<a name="audioTaxonomy" />

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/e-QBvBjeZy8vmcBjAB0PE.png'>
<p>Taxonomy and hierarchy of data in terms of Audio modality.</p>
</div>

## 3D Task Taxonomy<a name="3dTaxonomy" />

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/EBXb-wyve14ExoLCgrpDK.png'>
<p>Taxonomy and hierarchy of data in terms of 3D modality.</p>
</div>

## Language Task Taxonomy<a name="languageTaxonomy" />

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/FLfk3QGdYb2sgorKTj_LT.png'>
<p>Taxonomy and hierarchy of data in terms of Language modality.</p>
</div>

---

## **Citation**

If you find our benchmark useful in your research, please kindly consider citing us:

```
@article{generalist2025,
  title={On Path to Multimodal Generalist: Levels and Benchmarks},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Weiming Wu and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Hanwang Zhang and Shuicheng Yan},
  journal={arXiv},
  year={2025}
}
```