# MultiNativQA: Multilingual Culturally-Aligned Natural Queries For LLMs

## Overview

The MultiNativQA dataset is a multilingual, native, and culturally aligned question-answering resource. It spans 7 languages, ranging from high- to extremely low-resource, and covers 9 different locations/cities. To capture linguistic diversity, the dataset includes several dialects for dialect-rich languages like Arabic. In addition to Modern Standard Arabic (MSA), MultiNativQA features six Arabic dialects — Egyptian, Jordanian, Khaliji, Sudanese, Tunisian, and Yemeni.

The dataset also provides two linguistic variations of Bangla, reflecting differences between speakers in Bangladesh and West Bengal, India. Additionally, MultiNativQA includes English queries from Dhaka and Doha, where English is commonly used as a second language, as well as from New York, USA.

The QA pairs in this dataset cover 18 diverse topics, including: Animals, Business, Clothing, Education, Events, Food & Drinks, General, Geography, Immigration, Language, Literature, Names & Persons, Plants, Religion, Sports & Games, Tradition, Travel, and Weather.

MultiNativQA is designed to evaluate and fine-tune large language models (LLMs) for long-form question answering while assessing their cultural adaptability and understanding.

## Directory Structure (JSON files only)

The dataset is organized into directories by language and region. Each directory contains JSON files for the train, development (dev), and test sets, except for Nepali, which includes only a test set.

- arabic_qa/
  - NativQA_ar_msa_qa_dev.json
  - NativQA_ar_msa_qa_test.json
  - NativQA_ar_msa_qa_train.json
- assamese_in/
  - NativQA_asm_NA_in_dev.json
  - NativQA_asm_NA_in_test.json
  - NativQA_asm_NA_in_train.json
- bangla_bd/
  - NativQA_bn_scb_bd_dev.json
  - NativQA_bn_scb_bd_test.json
  - NativQA_bn_scb_bd_train.json
- bangla_in/
  - NativQA_bn_scb_in_dev.json
  - NativQA_bn_scb_in_test.json
  - NativQA_bn_scb_in_train.json
- english_bd/
  - NativQA_en_NA_bd_dev.json
  - NativQA_en_NA_bd_test.json
  - NativQA_en_NA_bd_train.json
- english_qa/
  - NativQA_en_NA_qa_dev.json
  - NativQA_en_NA_qa_test.json
  - NativQA_en_NA_qa_train.json
- hindi_in/
  - NativQA_hi_NA_in_dev.json
  - NativQA_hi_NA_in_test.json
  - NativQA_hi_NA_in_train.json
- nepali_np/
  - NativQA_ne_NA_np_test.json
- turkish_tr/
  - NativQA_tr_NA_tr_dev.json
  - NativQA_tr_NA_tr_test.json
  - NativQA_tr_NA_tr_train.json
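
The file names appear to follow a consistent pattern, `NativQA_<lang>_<dialect>_<location>_<split>.json`, where the dialect field is `NA` when no dialect distinction applies. As a minimal illustration (this helper is not part of any official tooling), the convention can be parsed like this:

```python
def parse_filename(name: str) -> dict:
    """Parse a file name of the form
    NativQA_<lang>_<dialect>_<location>_<split>.json
    (dialect is "NA" when no dialect distinction applies)."""
    stem = name.removesuffix(".json")
    prefix, lang, dialect, location, split = stem.split("_")
    if prefix != "NativQA":
        raise ValueError(f"unexpected file name: {name}")
    return {"lang": lang, "dialect": dialect,
            "location": location, "split": split}

print(parse_filename("NativQA_bn_scb_bd_train.json"))
# {'lang': 'bn', 'dialect': 'scb', 'location': 'bd', 'split': 'train'}
```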

## Example Data Entry

```json
{
    "data_id": "cf92ec1e52b4b3071d263a1063b43928",
    "category": "immigration",
    "input_query": "How long can you stay in Qatar on a visitors visa?",
    "question": "Can I extend my tourist visa in Qatar?",
    "is_reliable": "very_reliable",
    "answer": "If you would like to extend your visa, you will need to proceed to immigration headquarters in Doha prior to the expiry of your visa and apply there for an extension.",
    "source_answer_url": "https://hayya.qa/en/web/hayya/faq"
}
```
### Field Descriptions

- `data_id`: Unique identifier for each data entry.
- `category`: General topic or category of the query (e.g., "health", "religion").
- `input_query`: The original user-submitted query.
- `question`: The formalized question derived from the input query.
- `is_reliable`: Reliability label for the provided answer ("very_reliable", "somewhat_reliable", or "unreliable").
- `answer`: The system-provided answer to the query.
- `source_answer_url`: URL of the source from which the answer was derived.
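
Since each record is a flat JSON object, the splits can be loaded and filtered with the standard library alone. A minimal sketch of filtering by the `is_reliable` field (the second record below is hypothetical, included only for illustration; in practice `records` would come from `json.load()` on one of the split files):

```python
records = [
    # Reuses the example entry from this card (abridged to a few fields).
    {
        "data_id": "cf92ec1e52b4b3071d263a1063b43928",
        "category": "immigration",
        "is_reliable": "very_reliable",
        "question": "Can I extend my tourist visa in Qatar?",
    },
    # Hypothetical record, included only so the filter has something to drop.
    {
        "data_id": "0",
        "category": "food",
        "is_reliable": "unreliable",
        "question": "...",
    },
]

# Keep only answers marked as very reliable.
reliable = [r for r in records if r["is_reliable"] == "very_reliable"]
print(len(reliable))  # 1
```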

## Statistics

Distribution of the MultiNativQA dataset across different languages.

This dataset consists of two types of data, annotated and un-annotated; we treat the un-annotated data as additional data. The data statistics are given below.

Statistics of the MultiNativQA dataset, showing the final annotated QA pairs per language and location.

| Language | City      | Train  | Dev   | Test   | Total  |
|----------|-----------|--------|-------|--------|--------|
| Arabic   | Doha      | 3,649  | 492   | 988    | 5,129  |
| Assamese | Assam     | 1,131  | 157   | 545    | 1,833  |
| Bangla   | Dhaka     | 7,018  | 953   | 1,521  | 9,492  |
| Bangla   | Kolkata   | 6,891  | 930   | 2,146  | 9,967  |
| English  | Dhaka     | 4,761  | 656   | 1,113  | 6,530  |
| English  | Doha      | 8,212  | 1,164 | 2,322  | 11,698 |
| Hindi    | Delhi     | 9,288  | 1,286 | 2,745  | 13,319 |
| Nepali   | Kathmandu | --     | --    | 561    | 561    |
| Turkish  | Istanbul  | 3,527  | 483   | 1,218  | 5,228  |
| **Total** |          | **44,477** | **6,121** | **13,159** | **63,757** |

The statistics of the un-annotated additional data are provided below:

| Language-Location | # of QA |
|-------------------|---------|
| Arabic-Egypt      | 7,956   |
| Arabic-Palestine  | 5,679   |
| Arabic-Sudan      | 4,718   |
| Arabic-Syria      | 11,288  |
| Arabic-Tunisia    | 14,789  |
| Arabic-Yemen      | 4,818   |
| English-New York  | 6,454   |
| **Total**         | **55,702** |
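
As a quick sanity check, the per-location counts of the un-annotated data do sum to the stated total:

```python
# Per-location counts of the un-annotated data, copied from the table above.
unannotated = {
    "Arabic-Egypt": 7_956,
    "Arabic-Palestine": 5_679,
    "Arabic-Sudan": 4_718,
    "Arabic-Syria": 11_288,
    "Arabic-Tunisia": 14_789,
    "Arabic-Yemen": 4_818,
    "English-New York": 6_454,
}
total = sum(unannotated.values())
print(total)  # 55702
```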

## License

The dataset is distributed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). The full license text can be found in the accompanying licenses_by-nc-sa_4.0_legalcode.txt file.

## Contact & Additional Information

For more details, please visit our official website.

## Citation

The full paper is available at https://arxiv.org/abs/2407.09823.

```bibtex
@article{hasan2024nativqa,
  title={NativQA: Multilingual Culturally-Aligned Natural Query for LLMs},
  author={Hasan, Md Arid and Hasanain, Maram and Ahmad, Fatema and Laskar, Sahinur Rahman and Upadhyay, Sunaya and Sukhadia, Vrunda N and Kutlu, Mucahid and Chowdhury, Shammur Absar and Alam, Firoj},
  journal={arXiv preprint arXiv:2407.09823},
  year={2024},
  url={https://arxiv.org/abs/2407.09823}
}
```