size_categories:
- 1K<n<10K
---

# MVBench

## Dataset Description

- **Repository:** [MVBench](https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/mvbench.ipynb)
- **Paper:** [2311.17005](https://arxiv.org/abs/2311.17005)
- **Point of Contact:** [kunchang li](mailto:[email protected])

![images](./assert/generation.png)

We introduce a novel static-to-dynamic method for defining temporal tasks. By converting static tasks into dynamic ones, we enable the systematic generation of video tasks that require a wide range of temporal abilities, from perception to cognition. Guided by the task definitions, we then **automatically transform public video annotations into multiple-choice QA** for task evaluation. This paradigm enables the efficient creation of MVBench with minimal manual intervention, while ensuring evaluation fairness through ground-truth video annotations and avoiding biased LLM scoring. Examples of the **20** temporal tasks are shown below.
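As a rough illustration of the annotation-to-QA conversion described above, the sketch below turns a ground-truth label plus distractor labels into a multiple-choice item. The field names and the four-option `(A)`–`(D)` format are illustrative assumptions, not MVBench's actual schema.

```python
# Hypothetical sketch: building a multiple-choice QA item from a
# ground-truth video annotation (field names are illustrative only).
import random

def to_multiple_choice(annotation, distractors, seed=0):
    """Shuffle the ground-truth label in among distractors and record the answer letter."""
    rng = random.Random(seed)
    options = [annotation["label"]] + list(distractors)
    rng.shuffle(options)
    letters = "ABCD"
    answer = letters[options.index(annotation["label"])]
    return {
        "video": annotation["video"],
        "question": annotation["question"],
        "options": [f"({letters[i]}) {o}" for i, o in enumerate(options)],
        "answer": f"({answer})",
    }
```

Because the answer key comes directly from the ground-truth annotation, scoring reduces to string matching, with no LLM judge involved.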

![images](./assert/task_example.png)

## :telescope: Evaluation

An evaluation example is provided in [mvbench.ipynb](https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/mvbench.ipynb). Please follow the pipeline to prepare the evaluation code for various MLLMs.
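As a minimal sketch of such a pipeline (not the notebook's actual code), the loop below scores a model over multiple-choice samples; `model_answer` stands in for any MLLM inference call, and the sample fields are generic assumptions.

```python
# Hypothetical evaluation loop: accuracy of predicted option letters.
# `model_answer(video, prompt)` is a placeholder for an MLLM call.

def evaluate(samples, model_answer):
    """Return accuracy over multiple-choice samples."""
    correct = 0
    for s in samples:
        prompt = (
            s["question"] + "\n" + "\n".join(s["options"])
            + "\nOnly give the best option."
        )
        pred = model_answer(s["video"], prompt)  # e.g. "(B) close the door"
        correct += pred.strip().startswith(s["answer"])
    return correct / len(samples)
```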

- **Preprocess**: We preserve the raw videos (high resolution, long duration, etc.) along with their corresponding annotations (start, end, subtitles, etc.) for future exploration; hence, decoding some raw videos (e.g., Perception Test) may be slow.
- **Prompt**: We explore effective system prompts to encourage better temporal reasoning in MLLMs, as well as efficient answer prompts for option extraction.
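The option-extraction step mentioned in the **Prompt** bullet could be sketched as below; the `(A)`-style option format and the fallback rule are assumptions for illustration, not the benchmark's exact logic.

```python
# Hypothetical option extraction: pull the chosen letter out of a
# model's free-form reply, e.g. "The answer is (B)." -> "(B)".
import re

def extract_option(reply):
    m = re.search(r"\(([A-D])\)", reply)
    if m:
        return f"({m.group(1)})"
    # Fall back to a bare leading letter such as "B." or "B)".
    m = re.match(r"\s*([A-D])\b", reply)
    return f"({m.group(1)})" if m else None
```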

## :bar_chart: Leaderboard

While an [Online leaderboard]() is under construction, the current standings are as follows:

![images](./assert/leaderboard.png)