---
license: apache-2.0
task_categories:
- question-answering
- multiple-choice
- visual-question-answering
language:
- en
tags:
- music
pretty_name: AV_Odyssey
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: question_id
dtype: string
- name: question_type_id
dtype: int16
- name: data_type
dtype: string
- name: subfield
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: answer
dtype: string
- name: image_1
sequence: image
- name: image_2
sequence: image
- name: image_3
sequence: image
- name: image_4
sequence: image
- name: video_1
sequence: video
- name: audio_1
sequence: audio
- name: audio_2
sequence: audio
- name: audio_3
sequence: audio
- name: audio_4
sequence: audio
splits:
- name: test
num_bytes: 27221062957.18
num_examples: 4555
download_size: 27159381702
dataset_size: 27221062957.18
configs:
- config_name: default
data_files:
- split: test
path: av_odyssey_part*
---
Official dataset for the paper "[AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?]()".
🌟 For more details, please refer to the project page with data examples: [https://av-odyssey.github.io/](https://av-odyssey.github.io/).
[[🌐 Webpage](https://av-odyssey.github.io/)] [[📖 Paper]()] [[🤗 Huggingface AV-Odyssey Dataset](https://huggingface.co/datasets/AV-Odyssey/AV_Odyssey_Bench)] [[🤗 Huggingface Deaftest Dataset](https://huggingface.co/datasets/AV-Odyssey/Deaftest_dataset)] [[🏆 Leaderboard](https://huggingface.co/spaces/AV-Odyssey/AV_Odyssey_Bench_Leaderboard)]
---
## 🔥 News
* **`2024.11.24`** 🌟 We release AV-Odyssey, the first comprehensive evaluation benchmark designed to explore whether MLLMs really understand audio-visual information.
## 👀 About AV-Odyssey
Recently, multimodal large language models (MLLMs), such as GPT-4o, Gemini 1.5 Pro, and Reka Core, have expanded their capabilities to include vision and audio modalities. While these models demonstrate impressive performance across a wide range of audio-visual applications, our proposed **DeafTest** reveals that MLLMs often struggle with simple tasks humans find trivial: 1) determining which of two sounds is louder, and 2) determining which of two sounds has a higher pitch. Motivated by these observations, we introduce **AV-Odyssey Bench**. This benchmark encompasses **26** different tasks and **4,555** carefully crafted problems, each incorporating text, visual, and audio components. All data are **newly collected and annotated by humans**, not from any existing audio-visual dataset. AV-Odyssey Bench demonstrates three major features: 1. **Comprehensive** Audio Attributes; 2. **Extensive** Domains; 3. **Interleaved** Text, Audio, and Visual components.
<img src="assets/intro.png" style="zoom:50%;" />
## ๐Ÿ“ Data Examples
Please refer to our project page https://av-odyssey.github.io/ for exploring more examples.
### ๐Ÿ“AV-Odyssey Bench
<div align="center">
<img src="assets/demo-1.svg" width="100%" />
</div>
## ๐Ÿ” Dataset
**License**:
```
AV-Odyssey is only used for academic research. Commercial use in any form is prohibited.
The copyright of all videos belongs to the video owners.
If there is any infringement in AV-Odyssey, please email [email protected] and we will remove it immediately.
Without prior approval, you cannot distribute, publish, copy, disseminate, or modify AV-Odyssey in whole or in part.
You must strictly comply with the above restrictions.
```
Please send an email to **[[email protected]](mailto:[email protected])**. 🌟
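Given the schema in the dataset card above, the benchmark can be loaded with the Hugging Face `datasets` library. The sketch below is ours, not part of an official toolkit: the `format_prompt` helper and the assumption that the `options` field is a stringified Python list (e.g. `"['cat', 'dog', ...]"`) are illustrative, and the download (~27 GB) is gated behind an environment flag.

```python
# Minimal sketch: load AV-Odyssey and turn one record into a text prompt.
# Field names (question, options, answer, ...) come from the dataset card;
# the serialization of `options` as a stringified list is an assumption.
import ast
import os


def format_prompt(question: str, options: str) -> str:
    """Build a multiple-choice prompt.

    Assumes `options` is a stringified Python list,
    e.g. "['sound 1', 'sound 2']".
    """
    choices = ast.literal_eval(options)
    letters = "ABCD"
    lines = [question]
    lines += [f"{letters[i]}. {choice}" for i, choice in enumerate(choices)]
    lines.append("Answer with the option letter only.")
    return "\n".join(lines)


if os.environ.get("AVO_DOWNLOAD") == "1":
    # Guarded: requires `pip install datasets` and ~27 GB of downloads.
    # The repo id comes from the Hugging Face links above.
    from datasets import load_dataset

    test_split = load_dataset("AV-Odyssey/AV_Odyssey_Bench", split="test")
    first = test_split[0]
    print(format_prompt(first["question"], first["options"]))
```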
## 🔮 Evaluation Pipeline
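Since each problem is multiple-choice with a single letter answer, scoring reduces to matching the predicted option letter against the `answer` field. The following is a minimal sketch of such a scorer, not the official evaluation script; the exact answer-extraction rules used for the leaderboard may differ.

```python
# Minimal sketch of multiple-choice scoring for AV-Odyssey-style records:
# extract the first standalone option letter from a model response and
# compare it with the ground-truth `answer` field.
import re
from typing import List, Optional


def extract_choice(response: str) -> Optional[str]:
    """Return the first standalone option letter (A-D) in a response,
    or None if no option letter is found."""
    match = re.search(r"\b([A-D])\b", response.strip())
    return match.group(1) if match else None


def accuracy(predictions: List[str], answers: List[str]) -> float:
    """Fraction of responses whose extracted letter equals the answer."""
    correct = sum(
        extract_choice(pred) == ans for pred, ans in zip(predictions, answers)
    )
    return correct / len(answers)
```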
## ๐Ÿ† Leaderboard
### Contributing to the AV-Odyssey Leaderboard
🚨 The [Leaderboard](https://huggingface.co/spaces/AV-Odyssey/AV_Odyssey_Bench_Leaderboard) for AV-Odyssey is continuously updated, and we welcome submissions from your excellent MLLMs!