Link dataset to paper
This PR links the dataset to https://huggingface.co/papers/2406.05967
README.md
CHANGED
@@ -73,6 +73,8 @@ configs:
 
 CVQA is a culturally diverse multilingual VQA benchmark consisting of over 9,000 questions from 33 country-language pairs. The questions in CVQA are written in both the native languages and English, and are categorized into 10 diverse categories.
 
+Paper: https://arxiv.org/abs/2406.05967
+
 This data is designed for use as a test set. Please [submit your submission here](https://eval.ai/web/challenges/challenge-page/2305/) to evaluate your model performance. CVQA is constructed through a collaborative effort led by a team of researchers from MBZUAI. Read more about CVQA in this paper.
 
 ![CVQA statistics](https://cvqa-benchmark.org/images/full-statistics.png)
@@ -142,4 +144,16 @@ Note that each question has its own license. All data here is free to use for re
 
 ---
 
-
+## BibTeX citation
+
+```bibtex
+@misc{romero2024cvqaculturallydiversemultilingualvisual,
+      title={CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark},
+      author={David Romero and Chenyang Lyu and Haryo Akbarianto Wibowo and Teresa Lynn and Injy Hamed and Aditya Nanda Kishore and Aishik Mandal and Alina Dragonetti and Artem Abzaliev and Atnafu Lambebo Tonja and Bontu Fufa Balcha and Chenxi Whitehouse and Christian Salamea and Dan John Velasco and David Ifeoluwa Adelani and David Le Meur and Emilio Villa-Cueva and Fajri Koto and Fauzan Farooqui and Frederico Belcavello and Ganzorig Batnasan and Gisela Vallejo and Grainne Caulfield and Guido Ivetta and Haiyue Song and Henok Biadglign Ademtew and Hernán Maina and Holy Lovenia and Israel Abebe Azime and Jan Christian Blaise Cruz and Jay Gala and Jiahui Geng and Jesus-German Ortiz-Barajas and Jinheon Baek and Jocelyn Dunstan and Laura Alonso Alemany and Kumaranage Ravindu Yasas Nagasinghe and Luciana Benotti and Luis Fernando D'Haro and Marcelo Viridiano and Marcos Estecha-Garitagoitia and Maria Camila Buitrago Cabrera and Mario Rodríguez-Cantelar and Mélanie Jouitteau and Mihail Mihaylov and Mohamed Fazli Mohamed Imam and Muhammad Farid Adilazuarda and Munkhjargal Gochoo and Munkh-Erdene Otgonbold and Naome Etori and Olivier Niyomugisha and Paula Mónica Silva and Pranjal Chitale and Raj Dabre and Rendi Chevi and Ruochen Zhang and Ryandito Diandaru and Samuel Cahyawijaya and Santiago Góngora and Soyeong Jeong and Sukannya Purkayastha and Tatsuki Kuribayashi and Thanmay Jayakumar and Tiago Timponi Torrent and Toqeer Ehsan and Vladimir Araujo and Yova Kementchedjhieva and Zara Burzo and Zheng Wei Lim and Zheng Xin Yong and Oana Ignat and Joan Nwatu and Rada Mihalcea and Thamar Solorio and Alham Fikri Aji},
+      year={2024},
+      eprint={2406.05967},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV},
+      url={https://arxiv.org/abs/2406.05967},
+}
+```
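
As a usage note, here is a minimal sketch of loading the benchmark with the `datasets` library for a quick sanity check before preparing an EvalAI submission. The repo ID `afaji/cvqa` and the `test` split name are assumptions, not something this PR specifies; adjust them to match the actual dataset card.

```python
# Minimal sketch: load CVQA and inspect one example.
# The repo ID and split name below are assumptions (hypothetical), not
# confirmed by this PR; check the dataset card for the real values.
from datasets import load_dataset

ds = load_dataset("afaji/cvqa", split="test")  # assumed repo ID and split

print(ds.column_names)  # see which fields (image, question, options, ...) exist
print(ds[0])            # inspect the first example
```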