Commit 5e1bead (verified) by ChocoWu · 1 parent: 4f5305e

Upload README.md with huggingface_hub

Files changed (1): README.md (+263 / -221)
<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png' width=100px>

  <h1 align="center"><b>On Path to Multimodal Generalist: Levels and Benchmarks</b></h1>
  <p align="center">
      <a href="https://generalist.top/">[📖 Project]</a>
      <a href="https://level.generalist.top">[🏆 Leaderboard]</a>
      <a href="https://xxxxx">[📄 Paper]</a>
      <a href="https://huggingface.co/General-Level">[🤗 Dataset-HF]</a>
      <a href="https://github.com/path2generalist/GeneralBench">[📁 Dataset-Github]</a>
  </p>
</div>

---
We divide our benchmark into two settings: **`open`** and **`closed`**.

This is the **`open benchmark`** of Generalist-Bench, where we release the full ground-truth annotations for all datasets, allowing researchers to train and evaluate their models with access to the answers.

If you wish to thoroughly evaluate your model's performance, please use the [👉 closed benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Closeset), which comes with detailed usage instructions.

Final results will be updated on the [🏆 Leaderboard](https://level.generalist.top).
<!-- This is the **`closed benchmark`** of Generalist-Bench, where we release only the question annotations (**without ground-truth answers**) for all datasets.

You can follow the detailed [usage](#usage) instructions to submit the results generated by your own model.

Final results will be updated on the [🏆 Leaderboard](https://level.generalist.top).

If you'd like to train or evaluate your model with access to the full answers, please check out the [👉 open benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Openset), where all ground-truth annotations are provided. -->

---
## 📕 Table of Contents

- [✨ File Organization Structure](#filestructure)
- [🍟 Usage](#usage)
- [🌐 General-Bench](#bench)
- [🍕 Capabilities and Domains Distribution](#distribution)
- [🖼️ Image Task Taxonomy](#imagetaxonomy)
- [📽️ Video Task Taxonomy](#videotaxonomy)
- [📞 Audio Task Taxonomy](#audiotaxonomy)
- [💎 3D Task Taxonomy](#3dtaxonomy)
- [📚 Language Task Taxonomy](#languagetaxonomy)

---
<span id='filestructure'/>

# ✨✨✨ **File Organization Structure**

Here is the organization structure of the file system:

```
General-Bench
├── Image
│   ├── comprehension
│   │   ├── Bird-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   │       └── Acadian_Flycatcher_0070_29150.jpg
│   │   ├── Bottle-Anomaly-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   └── ...
│   └── generation
│       └── Layout-to-Face-Image-Generation
│           ├── annotation.json
│           └── images
│               └── ...
├── Video
│   ├── comprehension
│   │   └── Human-Object-Interaction-Video-Captioning
│   │       ├── annotation.json
│   │       └── videos
│   │           └── ...
│   └── generation
│       └── Scene-Image-to-Video-Generation
│           ├── annotation.json
│           └── videos
│               └── ...
├── 3d
│   ├── comprehension
│   │   └── 3D-Furniture-Classification
│   │       ├── annotation.json
│   │       └── pointclouds
│   │           └── ...
│   └── generation
│       └── Text-to-3D-Living-and-Arts-Point-Cloud-Generation
│           ├── annotation.json
│           └── pointclouds
│               └── ...
├── Audio
│   ├── comprehension
│   │   └── Accent-Classification
│   │       ├── annotation.json
│   │       └── audios
│   │           └── ...
│   └── generation
│       └── Video-To-Audio
│           ├── annotation.json
│           └── audios
│               └── ...
└── NLP
    ├── History-Question-Answering
    │   └── annotation.json
    ├── Abstractive-Summarization
    │   └── annotation.json
    └── ...
```
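The layout above can be traversed programmatically. Below is a minimal sketch, assuming a local copy of the dataset; `index_tasks` and its return shape are illustrative helpers, not an official API. It finds every `annotation.json` and records which modality, category, and task it belongs to:

```python
import json
from pathlib import Path

def index_tasks(root):
    """Collect every task folder under a local General-Bench copy.

    Returns (modality, category, task_name, annotation) tuples. NLP task
    folders sit directly under NLP/ without a comprehension/generation
    level, so their category is None.
    """
    root = Path(root)
    tasks = []
    for ann_path in sorted(root.rglob("annotation.json")):
        parts = ann_path.relative_to(root).parts
        # e.g. ('Image', 'comprehension', 'Bird-Detection', 'annotation.json')
        modality = parts[0]
        if modality == "NLP":
            category, task_name = None, parts[1]
        else:
            category, task_name = parts[1], parts[2]
        with open(ann_path, encoding="utf-8") as f:
            annotation = json.load(f)
        tasks.append((modality, category, task_name, annotation))
    return tasks
```

Nothing here assumes the internal fields of `annotation.json`; each file is loaded as plain JSON.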
An illustrative example of file formats:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/RD3b7Jwu0dftVq-4KbpFr.png)

<span id='usage'/>

## 🍟🍟🍟 Usage

Please download all the files in this repository. We also provide `overview.json`, an example of the format of our dataset.

xxxx

---
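One way to fetch the files is with the `huggingface_hub` Python package. The sketch below is a suggestion, not part of the official instructions: the repo id matches the open-set dataset linked above, and `task_dir` is a hypothetical helper for locating a task folder after download.

```python
from pathlib import Path

def download_openset(local_dir=None):
    """Fetch the full open-set snapshot from the Hub (network access required)."""
    from huggingface_hub import snapshot_download
    return Path(snapshot_download(
        repo_id="General-Level/General-Bench-Openset",
        repo_type="dataset",   # a dataset repo, not a model repo
        local_dir=local_dir,
    ))

def task_dir(root, modality, task, category=None):
    """Path to one task folder in the layout above; NLP tasks have no
    comprehension/generation level, so leave category as None for them."""
    parts = [modality] + ([category] if category else []) + [task]
    return Path(root).joinpath(*parts)
```

For example, `task_dir(root, "Image", "Bird-Detection", category="comprehension") / "annotation.json"` points at one task's annotation file once the snapshot is on disk.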
<span id='bench'/>

# 🌐🌐🌐 **General-Bench**

General-Bench is a companion massive multimodal benchmark dataset that encompasses a broad spectrum of skills, modalities, formats, and capabilities, spanning over **`700`** tasks and **`325K`** instances.

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/d4TIWw3rlWuxpBCEpHYJB.jpeg'>
  <p>Overview of General-Bench, which covers 145 skills for more than 700 tasks with over 325,800 samples under comprehension and generation categories in various modalities.</p>
</div>
<span id='distribution'/>

## 🍕🍕🍕 Capabilities and Domains Distribution

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/fF3iH95B3QEBvJYwqzZVG.png'>
  <p>Distribution of various capabilities evaluated in General-Bench.</p>
</div>

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/wQvllVeK-KC3Edp8Zjh-V.png'>
  <p>Distribution of various domains and disciplines covered by General-Bench.</p>
</div>
<span id='imagetaxonomy'/>

# 🖼️ Image Task Taxonomy<a name="imageTaxonomy" />

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/2QYihQRhZ5C9K5IbukY7R.png'>
  <p>Taxonomy and hierarchy of data in terms of Image modality.</p>
</div>
<span id='videotaxonomy'/>

# 📽️ Video Task Taxonomy<a name="videoTaxonomy" />

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/A7PwfW5gXzstkDH49yIG5.png'>
  <p>Taxonomy and hierarchy of data in terms of Video modality.</p>
</div>
<span id='audiotaxonomy'/>

# 📞 Audio Task Taxonomy<a name="audioTaxonomy" />

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/e-QBvBjeZy8vmcBjAB0PE.png'>
  <p>Taxonomy and hierarchy of data in terms of Audio modality.</p>
</div>
<span id='3dtaxonomy'/>

# 💎 3D Task Taxonomy<a name="3dTaxonomy" />

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/EBXb-wyve14ExoLCgrpDK.png'>
  <p>Taxonomy and hierarchy of data in terms of 3D modality.</p>
</div>
<span id='languagetaxonomy'/>

# 📚 Language Task Taxonomy<a name="languageTaxonomy" />

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/FLfk3QGdYb2sgorKTj_LT.png'>
  <p>Taxonomy and hierarchy of data in terms of Language modality.</p>
</div>
---

# 🚩 **Citation**

If you find our benchmark useful in your research, please kindly consider citing us:

```
@article{generalist2025,
  title={On Path to Multimodal Generalist: Levels and Benchmarks},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Weiming Wu and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Hanwang Zhang and Shuicheng Yan},
  journal={arXiv},
  year={2025}
}
```