---
license: cc-by-4.0
task_categories:
- visual-question-answering
modalities:
- Video
- Text
configs:
- config_name: action_antonym
  data_files: json/action_antonym.json
- config_name: action_count
  data_files: json/action_count.json
- config_name: action_localization
  data_files: json/action_localization.json
- config_name: action_sequence
  data_files: json/action_sequence.json
- config_name: egocentric_sequence
  data_files: json/egocentric_sequence.json
- config_name: moving_direction
  data_files: json/moving_direction.json
- config_name: object_count
  data_files: json/object_count.json
- config_name: object_shuffle
  data_files: json/object_shuffle.json
- config_name: scene_transition
  data_files: json/scene_transition.json
- config_name: unexpected_action
  data_files: json/unexpected_action.json
language:
- en
size_categories:
- 1K<n<10K
---

## Download

Questions and answers are provided as a JSON file for each task; a minimal loading sketch is shown at the end of this card. Videos in TVBench are sourced from Perception Test, CLEVRER, STAR, MoVQA, Charades-STA, NTU RGB+D, FunQA and CSV. All videos are included in this repository, except for those from NTU RGB+D, which can be downloaded from the official [website](https://rose1.ntu.edu.sg/dataset/actionRecognition/). These videos are required by the Action Antonym task and should be stored in the `video/action_antonym` folder.

## Leaderboard

![image](figs/sota.png)

## Citation

If you find this benchmark useful, please consider citing:

```
@misc{cores2024tvbench,
  author = {Daniel Cores and Michael Dorkenwald and Manuel Mucientes and Cees G. M. Snoek and Yuki M. Asano},
  title = {TVBench: Redesigning Video-Language Evaluation},
  year = {2024},
  eprint = {arXiv:2410.07752},
}
```
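
## Usage

Each config in the YAML header above maps to one task's JSON annotation file, so the per-task questions and answers can be loaded with the Hugging Face `datasets` library. The sketch below is a minimal example, not an official loader: the repo ID is a placeholder assumption, and the `train` split name assumes the default split that `datasets` creates for plain JSON `data_files`.

```python
from datasets import load_dataset

# Load one task's question-answer annotations by its config name
# (config names come from the YAML header of this card).
# NOTE: "ORG/TVBench" is a placeholder; substitute this dataset's
# actual Hub path.
ds = load_dataset("ORG/TVBench", name="action_antonym", split="train")

print(len(ds))  # number of QA items for this task
print(ds[0])    # inspect one item's schema
```

Iterating over the config names listed in the header loads all ten tasks the same way.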