---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: inputs
    struct:
    - name: image
      dtype:
        image:
          decode: false
    - name: question
      dtype: string
  - name: outputs
    dtype: string
  - name: meta
    struct:
    - name: id
      dtype: int32
    - name: question_type
      dtype: string
    - name: image
      struct:
      - name: synt_source
        sequence: string
      - name: type
        dtype: string
  splits:
  - name: shots
    num_bytes: 1503376
    num_examples: 10
  - name: test
    num_bytes: 257029579
    num_examples: 2300
  download_size: 257313901
  dataset_size: 258532955
configs:
- config_name: default
  data_files:
  - split: shots
    path: data/shots-*
  - split: test
    path: data/test-*
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- ru
pretty_name: ruCLEVR
size_categories:
- 1K<n<10K
---
# ruCLEVR
## Task description
RuCLEVR is a Visual Question Answering (VQA) dataset inspired by the [CLEVR](https://cs.stanford.edu/people/jcjohns/clevr/) methodology and adapted for the Russian language.
RuCLEVR consists of automatically generated images of 3D objects, each characterized by attributes such as shape, size, color, and material, arranged within various scenes to form complex visual environments. The dataset includes questions based on these images, organized into specific families such as querying attributes, comparing attributes, existence, counting, and integer comparison. Each question is formulated using predefined templates to ensure consistency and variety. The set was created from scratch to prevent biases. Questions are designed to assess the models' ability to perform tasks that require accurate visual reasoning by analyzing the attributes and relationships of objects in each scene. Through this structured design, the dataset provides a controlled environment for evaluating the precise reasoning skills of models when presented with visual data.
Evaluated skills: Common everyday knowledge, Spatial object relationship, Object recognition, Physical property understanding, Static counting, Comparative reasoning
Contributors: Ksenia Biryukova, Daria Chelnokova, Jamilya Erkenova, Artem Chervyakov, Maria Tikhonova
## Motivation
The RuCLEVR dataset was created to evaluate the visual reasoning capabilities of multimodal language models, specifically in the Russian language, where there is a lack of diagnostic datasets for such tasks. It aims to assess models' abilities to reason about shapes, colors, quantities, and spatial relationships in visual scenes, moving beyond simple language understanding to test compositional reasoning. This is crucial for models that are expected to analyze visual data and perform tasks requiring logical inferences about object interactions. The dataset's design, which uses structured question families, ensures that the evaluation is comprehensive and unbiased, focusing on the models' reasoning skills rather than pattern recognition.
## Data description
### Data fields
Each dataset question includes data in the following fields:
- `instruction` [str] — Instruction prompt template with placeholders for the question elements.
- `inputs` — Input data that forms the task for the model. May include one or several modalities: video, audio, image, text.
- `image` [str] — Path to the image file related to the question.
- `question` [str] — Text of the question.
- `outputs` [str] — The correct answer to the question.
- `meta` — Metadata related to the test example, not used in the question (hidden from the tested model).
- `id` [int] — Identification number of the question in the dataset.
- `question_type` [str] — Question type according to possible answers: binary, colors, count, materials, shapes, size.
- `image` — Image metadata.
- `synt_source` [list] — Sources used to generate or recreate data for the question, including names of generative models.
- `type` [str] — Image type according to the image classification adopted for MERA datasets.
### Data formatting example
```json
{
    "instruction": "Даны вопрос и картинка, необходимая для ответа на вопрос. Посмотри на изображение и дай ответ на вопрос. Ответом является одна цифра или одно слово в начальной форме.\nИзображение:<image>\nВопрос:{question}\nОтвет:",
    "inputs": {
        "image": "samples/image0123.png",
        "question": "Одинаков ли цвет большой металлической сферы и матового блока?"
    },
    "outputs": "да",
    "meta": {
        "id": 17,
        "question_type": "binary",
        "image": {
            "synt_source": [
                "blender"
            ],
            "type": "generated"
        }
    }
}
```
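For reference, below is a minimal sketch of reading the data with the `datasets` library. The Hub repository id is a placeholder and must be replaced with the actual path of this dataset.

```python
from datasets import load_dataset

# "<repo-id>" is a placeholder -- substitute the actual Hub path of ruCLEVR.
ruclevr = load_dataset("<repo-id>")

shots = ruclevr["shots"]  # 10 few-shot examples
test = ruclevr["test"]    # 2300 test questions

sample = test[0]
print(sample["inputs"]["question"])     # question text
print(sample["outputs"])                # reference answer
print(sample["meta"]["question_type"])  # e.g. "binary"
```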
### Prompts
For the task, 10 prompts were prepared and evenly distributed among the questions, following the "one prompt per question" principle. The placeholders in curly braces in each prompt are filled with the values of the corresponding fields inside the `inputs` field of each question.
Prompt example:
```
Даны вопрос и картинка, необходимая для ответа на вопрос. Посмотри на изображение и дай ответ на вопрос. Ответом является одна цифра или одно слово в начальной форме.
Изображение:<image>
Вопрос:{question}
Ответ:
```
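As an illustration (not the official MERA evaluation code), the sketch below shows how the curly-brace placeholder is filled from the `inputs` field of a sample. The `<image>` token is left untouched because it only marks where the image itself is passed to the multimodal model.

```python
def build_prompt(sample: dict) -> str:
    # Fill {question}; <image> stays as a marker for the image input.
    return sample["instruction"].format(question=sample["inputs"]["question"])

sample = {
    "instruction": "Изображение:<image>\nВопрос:{question}\nОтвет:",  # abridged template
    "inputs": {
        "image": "samples/image0123.png",
        "question": "Одинаков ли цвет большой металлической сферы и матового блока?",
    },
}
print(build_prompt(sample))
```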
### Dataset creation
To create RuCLEVR, we used two strategies: 1) generation of new samples and 2) data augmentation with color replacement. Each technique is described in more detail below:
**Generation of New Samples**: We generated new, unique images and corresponding questions from scratch, following a multi-step procedure to ensure a controlled and comprehensive evaluation of visual reasoning. First, 3D images were automatically generated using Blender, featuring objects with specific attributes such as shape, size, color, and material. These objects were arranged in diverse configurations to create complex scenes. Questions with the corresponding answers were then generated from predefined templates, which structure the inquiries into families such as attribute queries and comparisons. To avoid conjunction errors, we stuck to the original format and generated the questions in English, then translated them into Russian with Google Translate. After generation, we automatically filtered out incorrectly translated questions using a [model](https://huggingface.co/RussianNLP/ruRoBERTa-large-rucola) trained on the linguistic acceptability task. In addition, we checked the dataset for duplicates.
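The filtering step can be sketched roughly as follows; the classification label and the 0.5 threshold are assumptions made for illustration, not the exact settings used for the dataset.

```python
from transformers import pipeline

# Model trained on the Russian linguistic acceptability task (RuCoLA).
clf = pipeline("text-classification", model="RussianNLP/ruRoBERTa-large-rucola")

def is_acceptable(question: str, threshold: float = 0.5) -> bool:
    pred = clf(question)[0]  # e.g. {"label": "...", "score": 0.97}
    # Assumption: the positive class corresponds to an acceptable sentence.
    return pred["label"] in {"LABEL_1", "acceptable"} and pred["score"] >= threshold

translated = ["Одинаков ли цвет большой металлической сферы и матового блока?"]
kept = [q for q in translated if is_acceptable(q)]
```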
**Data Augmentation with Color Replacement**: We also augmented the dataset by modifying images from the validation set of the original CLEVR. Specifically, we developed a [script](https://github.com/erkenovaj/RuCLEVR/tree/main) that systematically replaces colors in questions and images according to predefined rules, thereby creating new augmented samples. This step was initially performed in English to avoid morphological complexities. Once the questions were augmented, they were translated into Russian and verified for grammatical correctness.
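A simplified sketch of the text side of this augmentation is shown below; the color mapping is an invented example, and the published script additionally recolors the corresponding image.

```python
import re

# Hypothetical substitution rule; the real rules are defined in the script above.
COLOR_MAP = {"red": "blue", "blue": "green", "green": "red"}

def replace_colors(question: str) -> str:
    # Swap whole-word color mentions only, leaving other words untouched.
    pattern = re.compile(r"\b(" + "|".join(COLOR_MAP) + r")\b")
    return pattern.sub(lambda m: COLOR_MAP[m.group(1)], question)

print(replace_colors("Does the red sphere have the same material as the blue cube?"))
# -> "Does the blue sphere have the same material as the green cube?"
```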
## Evaluation
### Metrics
Metrics for aggregated evaluation of responses:
- `Exact match`: Exact match is the average of per-case scores over all processed cases, where a case score is 1 if the predicted string is identical to its reference string and 0 otherwise.
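A minimal reference implementation of this score is sketched below; whether the official scorer applies any normalization (case, whitespace) is not specified here, so none is applied.

```python
def exact_match(predictions: list[str], references: list[str]) -> float:
    # 1 point when the predicted string is identical to the reference, 0 otherwise.
    scores = [int(pred == ref) for pred, ref in zip(predictions, references)]
    return sum(scores) / len(scores)

print(exact_match(["да", "2"], ["да", "3"]))  # -> 0.5
```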