---
task_categories:
- question-answering
- text-classification
license: apache-2.0
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for ROPE

ROPE is designed to evaluate and analyze multi-object hallucination in large vision-language models by leveraging existing panoptic segmentation datasets. Specifically, it draws on MSCOCO-Panoptic and ADE20K, which provide diverse objects with instance-level semantic annotations.
For more information, please visit [Multi-Object Hallucination](https://multi-object-hallucination.github.io).

## Dataset Subsets

The dataset is divided into several subsets based on the distribution of object classes tested within each image. This division allows for a more granular analysis of how different class distributions affect the hallucination behavior of large vision-language models (LVLMs); the sketch after this list makes the naming concrete.

- **Homogeneous**: All tested objects in an image belong to the same class (e.g., AAAAA).
- **Heterogeneous**: All tested objects in an image belong to different classes (e.g., ABCDE).
- **In-the-Wild**: A mixed distribution where the tested objects are randomly chosen and ordered within each image.
- **Adversarial**: A subset designed to challenge models with difficult object distributions (e.g., AAAAB, BAAAA).
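
As an illustrative sketch (not shipped with the dataset), the subset naming can be read as a simple function of the ordered class labels of the tested objects in one query:

```python
# Illustrative sketch (not part of the release): map the ordered class labels
# of the tested objects in one query to the subset naming above.
def classify_distribution(labels: list[str]) -> str:
    unique = set(labels)
    if len(unique) == 1:
        return "homogeneous"        # e.g., AAAAA
    if len(unique) == len(labels):
        return "heterogeneous"      # e.g., ABCDE
    # Adversarial: one odd class at either end of an otherwise uniform run.
    if len(unique) == 2 and (len(set(labels[1:])) == 1 or len(set(labels[:-1])) == 1):
        return "adversarial"        # BAAAA or AAAAB
    return "in-the-wild"            # randomly chosen and ordered

print(classify_distribution(["cat"] * 5))                          # homogeneous
print(classify_distribution(["cat", "dog", "bed", "tv", "cup"]))   # heterogeneous
print(classify_distribution(["dog", "cat", "cat", "cat", "cat"]))  # adversarial
```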


## Dataset Statistics
### Training Data Statistics

| Subset | Total | COCO | ADE |
| :---: | :---: | :---: | :---: |
| Wild | 1539 | 732 | 807 |
| Hom. | 312 | 168 | 144 |
| Het. | 400 | 200 | 200 |
| Adv. | 168 | 54 | 114 |

### Validation Data Statistics

| Subset | Total | COCO | ADE |
| :---: | :---: | :---: | :---: |
| Wild | 1172 | 547 | 625 |
| Hom. | 490 | 289 | 201 |
| Het. | 246 | 76 | 170 |
| Adv. | 334 | 170 | 164 |

## Dataset File Structure
The `ROPE` dataset is split into training and validation directories, each containing images grouped by their object class distribution. Within each image directory, the `raw` subdirectory holds the original images and the `bbox` subdirectory holds the same images with bounding boxes drawn on them; both are further split by source into `ADE` and `COCO`.



```text
ROPE/
│
├── train/
│   ├── image/
│   │   ├── AAAAB-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── BAAAA-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── heterogenous-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── homogenous-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── mixed-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   ├── AAAAB_data.json
│   ├── BAAAA_data.json
│   ├── merged_heterogenous_data.json
│   ├── merged_homogenous_data.json
│   ├── merged_mixed_data.json
│
├── validation/  # mirrors the train/ layout
│   ├── image/
│   │   ├── AAAAB-images/
│   │   ├── BAAAA-images/
│   │   ├── heterogenous-images/
│   │   ├── homogenous-images/
│   │   ├── mixed-images/
│   ├── AAAAB_data.json
│   ├── BAAAA_data.json
│   ├── merged_heterogenous_data.json
│   ├── merged_homogenous_data.json
│   ├── merged_mixed_data.json
│
├── .gitattributes
├── README.md
├── train.zip
├── validation.zip

```
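
If the dataset is hosted on the Hugging Face Hub, the archives can be fetched and unpacked with `huggingface_hub`. This is a minimal sketch; the repo id `sled-umich/ROPE` and the archive layout are assumptions here, so substitute the id of the actual hosting repository:

```python
# Minimal download sketch. Assumption: the dataset lives in a Hub dataset
# repo with id "sled-umich/ROPE"; substitute the actual repo id if it differs.
import os
import zipfile

from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="sled-umich/ROPE", repo_type="dataset")

# train.zip and validation.zip sit at the repo root (see the tree above);
# assumed here to expand into their split directories (train/, validation/).
for split in ("train", "validation"):
    with zipfile.ZipFile(os.path.join(local_dir, f"{split}.zip")) as zf:
        zf.extractall("ROPE")
```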

## JSON File Structure


```json
{
  "features": [
    {
      "name": "folder",
      "dtype": "string"
    },
    {
      "name": "filename",
      "dtype": "string"
    },
    {
      "name": "source",
      "dtype": "struct",
      "fields": [
        {
          "name": "database",
          "dtype": "string"
        },
        {
          "name": "image_id",
          "dtype": "string"
        },
        {
          "name": "coco_id",
          "dtype": "string"
        },
        {
          "name": "flickr_id",
          "dtype": "string"
        }
      ]
    },
    {
      "name": "size",
      "dtype": "struct",
      "fields": [
        {
          "name": "width",
          "dtype": "int32"
        },
        {
          "name": "height",
          "dtype": "int32"
        },
        {
          "name": "depth",
          "dtype": "int32"
        }
      ]
    },
    {
      "name": "segmented",
      "dtype": "int32"
    },
    {
      "name": "objects",
      "dtype": "list",
      "item": {
        "dtype": "struct",
        "fields": [
          {
            "name": "name",
            "dtype": "string"
          },
          {
            "name": "object_id",
            "dtype": "string"
          },
          {
            "name": "difficult",
            "dtype": "int32"
          },
          {
            "name": "bndbox",
            "dtype": "struct",
            "fields": [
              {
                "name": "xmin",
                "dtype": "int32"
              },
              {
                "name": "ymin",
                "dtype": "int32"
              },
              {
                "name": "xmax",
                "dtype": "int32"
              },
              {
                "name": "ymax",
                "dtype": "int32"
              }
            ]
          },
          {
            "name": "area",
            "dtype": "int32"
          },
          {
            "name": "bbox_number",
            "dtype": "int32"
          }
        ]
      }
    },
    {
      "name": "relations",
      "dtype": "list",
      "item": {
        "dtype": "string"
      }
    },
    {
      "name": "object_set",
      "dtype": "list",
      "item": {
        "dtype": "string"
      }
    },
    {
      "name": "data_source",
      "dtype": "string"
    }
  ]
}

```
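
As a usage sketch, assuming each `*_data.json` file is a JSON array of per-image records with the fields above (paths follow the hypothetical extraction in the download sketch; adjust as needed):

```python
import json
import os

root = "ROPE/train"  # extracted as in the download sketch above

# Assumption: merged_mixed_data.json is a JSON array of records that follow
# the schema documented above.
with open(os.path.join(root, "merged_mixed_data.json")) as f:
    records = json.load(f)

rec = records[0]
print(rec["folder"], rec["filename"], rec["data_source"])
print("size:", rec["size"]["width"], "x", rec["size"]["height"])

# Each record lists the tested objects with their bounding boxes.
for obj in rec["objects"]:
    box = obj["bndbox"]
    print(f"#{obj['bbox_number']} {obj['name']}: "
          f"({box['xmin']}, {box['ymin']})-({box['xmax']}, {box['ymax']}), "
          f"area={obj['area']}")
```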


## Dataset Construction 

The dataset is constructed following the guidelines and protocols outlined by the SLED group. Detailed information and code for the data annotation process can be found in the official repository.

For more information, please visit the [dataset construction guidelines](https://github.com/sled-group/moh/tree/main/data-annotation).


## Citation


**BibTeX:**

```bibtex
@inproceedings{chen2024multiobject,
  title={Multi-Object Hallucination in Vision Language Models},
  author={Chen, Xuweiyi and Ma, Ziqiao and Zhang, Xuejun and Xu, Sihan and Qian, Shengyi and Yang, Jianing and Fouhey, David and Chai, Joyce},
  booktitle={3rd Workshop on Advances in Language and Vision Research (ALVR)},
  year={2024}
}
```