---
language:
- ru
license: cc-by-4.0
size_categories:
- 1K<n<10K
---

### Data instances

An example from the dataset:

```
{
    "instruction": ".\nОтветьте кратко на вопрос. В качестве ответа напишите слово в той же форме, как спрашивается в вопросе, без дополнительных рассуждений, либо цифру, если ответом является число.\nВопрос:{question}\nОтвет:",
    "inputs": {
        "image": "samples/sample1.jpg",
        "question": "Какого цвета комбинезон девушки?"
    },
    "outputs": "Белого",
    "meta": {
        "id": 123,
        "categories": {
            "question_type": "which"
        },
        "image": {
            "source": "photo"
        },
        "complexity": "simple_question"
    }
}
```

### Prompts

For the task, 10 prompts were prepared and evenly distributed among the questions on the principle of "one prompt per question". The curly-brace templates in each prompt are filled with the values of the corresponding fields inside the `inputs` field of each question.

Prompt example:

```
Посмотри на изображение и ответь на вопрос по этой картинке. Ответ пиши в той же форме, как спрашивается в вопросе, без дополнительных рассуждений, числа пиши не текстом, а цифрой.
Вопрос:{question}
Ответ:
```

### Dataset creation

The dataset was created using images from the English VQA v2 dataset (which includes data from the COCO dataset). Using the ABC Elementary platform, annotators wrote questions and answers for the images from scratch: three questions were created for each image, and each image was annotated by three annotators. The resulting data was then aggregated and filtered both automatically (to remove long answers, typos, and formatting issues) and manually. The binary questions are balanced across classes.

## Evaluation

### Metrics

Metrics for aggregated evaluation of responses:

- `Exact match`: the average of scores over all processed cases, where a case's score is 1 if the predicted string is exactly the same as its reference string, and 0 otherwise.
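
To make the scoring concrete, below is a minimal Python sketch that combines the curly-brace prompt filling from the Prompts section with the `Exact match` metric as described above. This is not the benchmark's official evaluation code: the function names (`fill_prompt`, `exact_match`), the strict string comparison without any normalization, and the hypothetical model answer in the usage example are assumptions based solely on the description in this card.

```python
def fill_prompt(prompt_template: str, inputs: dict) -> str:
    # Substitute the curly-brace placeholders (e.g. {question}) with the values
    # of the corresponding fields from the sample's `inputs`. The image itself
    # is passed to the model separately and is not part of the text prompt here.
    return prompt_template.format(**{k: v for k, v in inputs.items() if k != "image"})


def exact_match(predictions: list[str], references: list[str]) -> float:
    # Average of per-case scores: a case scores 1 if the predicted string is
    # exactly the same as its reference string, and 0 otherwise.
    assert len(predictions) == len(references), "prediction/reference length mismatch"
    if not predictions:
        return 0.0
    return sum(1.0 if p == r else 0.0 for p, r in zip(predictions, references)) / len(predictions)


# Usage with the sample shown above (the model answer "Белого" is hypothetical).
prompt_template = (
    "Посмотри на изображение и ответь на вопрос по этой картинке. "
    "Ответ пиши в той же форме, как спрашивается в вопросе, без дополнительных "
    "рассуждений, числа пиши не текстом, а цифрой.\nВопрос:{question}\nОтвет:"
)
inputs = {"image": "samples/sample1.jpg", "question": "Какого цвета комбинезон девушки?"}

prompt = fill_prompt(prompt_template, inputs)   # text part of the model request
score = exact_match(["Белого"], ["Белого"])     # -> 1.0
```

Because the comparison is a strict string match, the formatting requirements stated in the instruction (answer in the same form as asked, numbers written as digits) directly affect the score.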