---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1M
---

This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

## Dataset Details

### Dataset Description

VOILA is an open-ended, large-scale, dynamic dataset that evaluates the visual understanding and relational reasoning capabilities of MLLMs. It consists of distinct visual analogy questions in which the answer must be derived by following the relation rules among a given triplet of images (A : A’ :: B : B’). Unlike previous visual analogy datasets, VOILA presents a more complex rule-based structure, incorporating various property relations, distraction rules, and the manipulation of up to three properties at a time across 14 subject types, 13 actions, and 4 numeric values. VOILA comprises two sub-tasks: the more complex VOILA-WD and the simpler VOILA-ND.

Our experimental results show that state-of-the-art models struggle not only to apply a relationship to a new set of images but also to uncover the relationship between images. LLaMa 3.2 achieves the highest performance, attaining 13% accuracy at the relationship application stage on VOILA-WD. Interestingly, GPT-4o outperforms other models on VOILA-ND, achieving an accuracy of 29% in applying relationships. However, human performance significantly surpasses these results, reaching 71% and 69% accuracy on VOILA-WD and VOILA-ND, respectively.

- **Curated by:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** cc
- **Contact:** nyilmaz3@asu.edu

### Dataset Sources [optional]

- **Repository:** https://github.com/nlylmz/Voila
- **Paper:** VOILA: Evaluation of MLLMs For Perceptual Understanding and Analogical Reasoning

## Uses

### Direct Use

[More Information Needed]

## Dataset Structure

```
{'img1': 'two_hamsters_carrying something_1111.png',
 'img2': 'two_hamsters_walking_9111.png',
 'img3': 'four_cats_carrying something_11111.png',
 'img4': 'four cats walking',
 'desc_img1': 'two hamsters carrying something',
 'desc_img2': 'two hamsters walking',
 'desc_img3': 'four cats carrying something',
 'desc_im4': 'four cats walking',
 'combined_description': 'Image 1: two hamsters carrying something. Image 2: two hamsters walking. Image 3: four cats carrying something',
 'question': 'image_questions_1.png',
 'rule': '1',
 'Real_relations': 'Number remains constant two. Action is changed from carrying something to walking. Subject type remains constant hamsters.'}
```

### Data Fields

- `id`: identifier of the analogy question
- `img1`: the file name of the first input image
- `img2`: the file name of the second input image
- `img3`: the file name of the third input image
- `img4`: the content of the fourth image – the analogy solution
- `desc_img1`: description of the first image
- `desc_img2`: description of the second image
- `desc_img3`: description of the third image
- `desc_im4`: description of the solution image
- `combined_description`: the combined content description of the first three images
- `question`: the file name of the image collage that combines the first three images into the analogy question
- `rule`: the number of the rule configuration
- `Real_relations`: the changed and unchanged properties between the first and second images
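The sketch below shows one way an example could be loaded and turned into an analogy prompt from its text fields. The Hub dataset identifier (`nlylmz/Voila`) and the `test` split name are assumptions for illustration; adjust them to the actual repository layout.

```python
# Minimal sketch, assuming the dataset is published on the Hub as "nlylmz/Voila"
# with a "test" split (both names are assumptions, not confirmed by this card).
from datasets import load_dataset

ds = load_dataset("nlylmz/Voila", split="test")  # hypothetical id/split
example = ds[0]

# Build a text-only analogy prompt from the combined description (A : A' :: B : ?).
prompt = (
    f"{example['combined_description']} "
    "Following the same relation rules (A : A' :: B : B'), "
    "describe the content of the fourth image."
)
print(prompt)
print("Ground-truth solution:", example["img4"])
```

For multimodal evaluation, the `question` field names the image collage of the first three images, which can be passed to an MLLM alongside the prompt instead of the textual descriptions.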
### Data Splits

- VOILA_WD: approximately 10K image analogy questions for the test case, including the distraction rule.
- VOILA_ND: approximately 3.6K image analogy questions for the test case, excluding the distraction rule.

## Dataset Creation

### Curation Rationale

[More Information Needed]

#### Data Collection and Processing

[More Information Needed]

#### Who are the source data producers?

[More Information Needed]

## Bias, Risks, and Limitations

Because the images are generated by Stable Diffusion XL (SDXL), they may reflect biases present in that model.

## Citation

**BibTeX:**

```
@inproceedings{yilmaz2025voila,
  title={Voila: Evaluation of {MLLM}s For Perceptual Understanding and Analogical Reasoning},
  author={Nilay Yilmaz and Maitreya Patel and Yiran Lawrence Luo and Tejas Gokhale and Chitta Baral and Suren Jayasuriya and Yezhou Yang},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=q5MUMlHxpd}
}
```