---
license: apache-2.0
task_categories:
- question-answering
language:
- en
pretty_name: Deaftest
size_categories:
- n<1K
dataset_info:
features:
- name: question_id
dtype: string
- name: question_type_id
dtype: string
- name: data_type
dtype: string
- name: subfield
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: video_1
dtype: string
- name: audio_1
dtype: audio
- name: audio_2
dtype: audio
- name: audio_3
dtype: audio
- name: audio_4
dtype: audio
splits:
- name: test
num_bytes: 2722106.18
num_examples: 400
download_size: 2715938
dataset_size: 2722106.18
configs:
- config_name: default
data_files:
- split: test
path: deaftest.parquet
---
Official Deaftest dataset for the paper "[AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?](https://arxiv.org/abs/2412.02611)".
🌟 For more details, please refer to the project page with data examples: [https://av-odyssey.github.io/](https://av-odyssey.github.io/).
[[🌐 Webpage](https://av-odyssey.github.io/)] [[📖 Paper](https://arxiv.org/abs/2412.02611)] [[🤗 Huggingface AV-Odyssey Dataset](https://huggingface.co/datasets/AV-Odyssey/AV_Odyssey_Bench)] [[🤗 Huggingface Deaftest Dataset](https://huggingface.co/datasets/AV-Odyssey/Deaftest_dataset)] [[🏆 Leaderboard](https://huggingface.co/spaces/AV-Odyssey/AV_Odyssey_Bench_Leaderboard)]
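As a quickstart, here is a minimal sketch of loading the test split with the 🤗 `datasets` library; the repository id, config, and split names are taken from this card's YAML header:

```python
# Minimal loading sketch using the 🤗 datasets library.
from datasets import load_dataset

ds = load_dataset("AV-Odyssey/Deaftest_dataset", split="test")
print(len(ds))  # 400 examples

sample = ds[0]
print(sample["question"])  # question text
print(sample["options"])   # list of answer options
print(sample["answer"])    # ground-truth answer
# audio_1 ... audio_4 are decoded Audio features; image/video slots that a
# given question does not use may be empty (None).
```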
---
## 🔥 News
* **`2024.11.24`** 🌟 We release AV-Odyssey, the first-ever comprehensive evaluation benchmark to explore whether MLLMs really understand audio-visual information.
## 👀 About AV-Odyssey
Recently, multimodal large language models (MLLMs), such as GPT-4o, Gemini 1.5 Pro, and Reka Core, have expanded their capabilities to the vision and audio modalities. While these models demonstrate impressive performance across a wide range of audio-visual applications, our proposed **DeafTest** reveals that MLLMs often struggle with tasks humans find trivial: 1) determining which of two sounds is louder, and 2) determining which of two sounds has a higher pitch. Motivated by these observations, we introduce **AV-Odyssey Bench**, a benchmark encompassing **26** different tasks and **4,555** carefully crafted problems, each incorporating text, visual, and audio components; a sketch of how such questions can be scored follows below. All data are **newly collected and annotated by humans**, not drawn from any existing audio-visual dataset. AV-Odyssey Bench demonstrates three major features:
1. **Comprehensive** Audio Attributes;
2. **Extensive** Domains;
3. **Interleaved** Text, Audio, and Visual components.
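The sketch below shows one hypothetical way to score these multiple-choice questions. `ask_model` is a placeholder for whichever MLLM API you evaluate (it is not shipped with this dataset), and the comparison assumes the `answer` field stores the option letter:

```python
# Hypothetical evaluation loop for DeafTest-style multiple-choice questions.
from datasets import load_dataset

LETTERS = "ABCD"

def build_prompt(sample):
    """Format a question and its options into one multiple-choice prompt."""
    lines = [sample["question"]]
    lines += [f"{letter}. {option}" for letter, option in zip(LETTERS, sample["options"])]
    lines.append("Answer with the option letter only.")
    return "\n".join(lines)

def evaluate(ds, ask_model):
    """ask_model(prompt, audios) -> predicted letter; a stand-in for your MLLM."""
    correct = 0
    for sample in ds:
        audios = [sample[f"audio_{i}"] for i in range(1, 5) if sample[f"audio_{i}"] is not None]
        prediction = ask_model(build_prompt(sample), audios)
        # Assumes the `answer` field holds the option letter, e.g. "A".
        correct += prediction.strip().upper().startswith(sample["answer"].strip().upper())
    return correct / len(ds)

# Usage:
# accuracy = evaluate(load_dataset("AV-Odyssey/Deaftest_dataset", split="test"), my_model_fn)
```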
## 📐 Data Examples
Please refer to our project page [https://av-odyssey.github.io/](https://av-odyssey.github.io/) to explore more examples.
### 📍AV-Odyssey Bench