Modalities: Image · Languages: English · Libraries: Datasets

yifan-Eva committed on commit 2cc1ad2 · verified · 1 parent(s): 640d342

Update README.md

Files changed (1): README.md (+78 -6)
README.md CHANGED
@@ -43,7 +43,7 @@ The dataset is divided into several subsets based on the distribution of object
 | Hom. | 490 | 289 | 201 |
 | Adv. | 334 | 170 | 164 |
 
-### Dataset Structure
+## Dataset Structure
 
 <!-- Provide a longer summary of what this dataset is. -->
 
@@ -175,8 +175,79 @@ The dataset is divided into several subsets based on the distribution of object
 }
 
 ```
+## Dataset File Structure
+The `ROPE` dataset is structured into training and validation directories, each containing images divided by their object-class distributions. Each image directory includes visualizations of bounding boxes (`bbox`) and raw images (`raw`), further categorized into `ADE` and `COCO` sources. The `raw` directory contains the original images, while the `bbox` directory contains the same images with bounding boxes drawn on them.
+
+```
+ROPE/
+│
+├── train/
+│   ├── image/
+│   │   ├── AAAAB-images/
+│   │   │   ├── bbox/
+│   │   │   │   ├── ADE/
+│   │   │   │   ├── COCO/
+│   │   │   ├── raw/
+│   │   │   │   ├── ADE/
+│   │   │   │   ├── COCO/
+│   │   ├── BAAAA-images/
+│   │   │   ├── bbox/
+│   │   │   │   ├── ADE/
+│   │   │   │   ├── COCO/
+│   │   │   ├── raw/
+│   │   │   │   ├── ADE/
+│   │   │   │   ├── COCO/
+│   │   ├── heterogenous-images/
+│   │   │   ├── bbox/
+│   │   │   │   ├── ADE/
+│   │   │   │   ├── COCO/
+│   │   │   ├── raw/
+│   │   │   │   ├── ADE/
+│   │   │   │   ├── COCO/
+│   │   ├── homogenous-images/
+│   │   │   ├── bbox/
+│   │   │   │   ├── ADE/
+│   │   │   │   ├── COCO/
+│   │   │   ├── raw/
+│   │   │   │   ├── ADE/
+│   │   │   │   ├── COCO/
+│   │   ├── mixed-images/
+│   │   │   ├── bbox/
+│   │   │   │   ├── ADE/
+│   │   │   │   ├── COCO/
+│   │   │   ├── raw/
+│   │   │   │   ├── ADE/
+│   │   │   │   ├── COCO/
+│   ├── AAAAB_data.json
+│   ├── BAAAA_data.json
+│   ├── merged_heterogenous_data.json
+│   ├── merged_homogenous_data.json
+│   ├── merged_mixed_data.json
+│
+├── validation/   # mirrors the train/ layout
+│   ├── image/
+│   │   ├── AAAAB-images/
+│   │   ├── BAAAA-images/
+│   │   ├── heterogenous-images/
+│   │   ├── homogenous-images/
+│   │   ├── mixed-images/
+│   ├── AAAAB_data.json
+│   ├── BAAAA_data.json
+│   ├── merged_heterogenous_data.json
+│   ├── merged_homogenous_data.json
+│   ├── merged_mixed_data.json
+│
+├── .gitattributes
+├── README.md
+├── train.zip
+├── validation.zip
+```
+
 ## Dataset Construction
 
 The dataset used in this study is constructed following the guidelines and protocols outlined by the SLED group. Detailed information and code about the data annotation process can be found in the official repository.
@@ -191,9 +262,10 @@ For more information, please visit the [dataset construction guidelines](https:/
 **BibTeX:**
 
 ```bibtex
-@inproceedings{chen2024multiobject,
-  title={Multi-Object Hallucination in Vision Language Models},
-  author={Chen, Xuweiyi and Ma, Ziqiao and Zhang, Xuejun and Xu, Sihan and Qian, Shengyi and Yang, Jianing and Fouhey, David and Chai, Joyce},
-  booktitle={3rd Workshop on Advances in Language and Vision Research (ALVR)},
-  year={2024}
+@misc{xuweiyi2024multiobjecthallucination,
+  title={Multi-Object Hallucination in Vision-Language Models},
+  author={Xuweiyi Chen and Ziqiao Ma and Xuejun Zhang and Sihan Xu and Shengyi Qian and Jianing Yang and David Fouhey and Joyce Y. Chai},
+  year={2024},
+  archivePrefix={arXiv},
+  primaryClass={cs.CV}
 }