Dataset Description
Dataset Summary
Existing work on underrepresented languages mostly focuses on machine translation and simple language understanding tasks, such as sentiment analysis and topic classification. More complex tasks, such as open-domain dialogue and dialogue summarization, are still left behind for these languages, leading to a poor evaluation suite for assessing the capability of large language models (LMs) in them. To mitigate this limitation, we introduce NusaDialogue, the first dataset for underrepresented languages consisting of manually annotated dialogues along with their summaries.
NusaDialogue covers 3 underrepresented languages in the Malayo-Polynesian language group, i.e., Minangkabau (min), Balinese (ban), and Buginese (bug), each consisting of 10,000 dialogue-summary pairs. The dialogues in NusaDialogue encompass numerous topics, including culture, occupation, politics, science, history, news, sports, and religion. The non-translationese nature of the annotation makes NusaDialogue suitable for representing the actual day-to-day use of these languages. In addition, we ensure that the dataset is annotated by a balanced number of male and female annotators so that it represents a more balanced demography.
How to use the data
To access the data, you can use the Hugging Face `datasets` Python library. To load NusaDialogue, simply call `datasets.load_dataset()` as shown in the snippet below:

```python
import datasets

min_dataset = datasets.load_dataset("prosa-text/nusa-dialogue", name="min")
ban_dataset = datasets.load_dataset("prosa-text/nusa-dialogue", name="ban", speaker_gender=True)
bug_dataset = datasets.load_dataset("prosa-text/nusa-dialogue", name="bug", speaker_gender=False)
```
There are 2 important parameters:

- `name`: the language configuration to load (`min` for Minangkabau, `ban` for Balinese, `bug` for Buginese)
- `speaker_gender` (optional): whether the dialogue should include speaker gender information (default: `True`)
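A small wrapper can validate the language code before downloading anything. The helper below is a sketch, not part of the `datasets` library, and it assumes the config names listed above:

```python
def load_nusa_dialogue(lang, speaker_gender=True):
    """Load one NusaDialogue language config ("min", "ban", or "bug")."""
    if lang not in {"min", "ban", "bug"}:
        raise ValueError(f"unsupported language code: {lang!r}")
    # Imported lazily so that the validation above also works in
    # environments where the `datasets` library is not installed.
    import datasets
    return datasets.load_dataset(
        "prosa-text/nusa-dialogue", name=lang, speaker_gender=speaker_gender
    )
```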
Supported Languages
Minangkabau (min) is a language spoken mainly in West Sumatra and other provinces on Sumatra Island, such as Bengkulu and Riau. Although it is classified as Malay, it is not mutually intelligible with Indonesian. The word order is SVO, and the language is written in the Latin script. Standard Minangkabau voice can be characterized as an Indonesian-type system, whereas colloquial Minangkabau voice is more effectively characterized as a Sundic-type system (Crouch, 2009).
Balinese (ban) is a language spoken mainly in the Bali province and in the West Nusa Tenggara province. It has three main dialects: Highland Balinese, Lowland Balinese, and Nusa Penida. Although it has its own Balinese script, it has mainly been written in the Latin script since the early 20th century. The word order in Balinese is SVO. It is non-tonal and has 17 consonant and 6 vowel phonemes. Stress falls on the penultimate syllable. It has three sociolinguistic registers. Regarding patterns of verb affixation, Balinese is an ‘active’ or ‘split-S’ language: verbs with Undergoer-like subject arguments are marked in one way (with a ‘zero prefix’), while verbs with Actor-like subject arguments, intransitive or transitive, are marked in another (either with the nasal prefix ‘N-’ or with ‘ma-’) (Arka, 2003).
Buginese (bug) is a language spoken mainly in the South Sulawesi, Southeast Sulawesi, Central Sulawesi, and West Sulawesi provinces. The word order is SVO, and verb affixes are used to mark persons. It is non-tonal and has 19 consonant and 6 vowel phonemes. Stress falls on the penultimate syllable. It was written in the Buginese script (derived from the Brahmi script) in the past but is now mainly written in the Latin script (Eberhard et al., 2021). In Buginese, the pronoun ‘I’ has three forms: the independent form ‘iyya’, the ergative form ‘-ka’, and the absolutive form/clitic ‘u-’. Buginese employs sentence patterns, pronouns, and certain terms to express politeness (Weda, 2016).
Supported Tasks
Abstractive Dialogue Summarization
The abstractive dialogue summarization task was first introduced by Goo et al. (2018). The goal of the task is to generate a summary from a given conversation, which is particularly useful for summarizing meetings and other multi-party conversations. NusaDialogue extends existing abstractive dialogue summarization efforts, which have focused only on high-resource languages such as English, to 3 underrepresented languages, while keeping the content culturally relevant through a manual annotation process with native speakers of each language.
Open-domain Dialogue System
Another task supported in NusaDialogue is the open-domain dialogue task, first introduced by Sordoni et al. (2015). The goal of the task is to generate an appropriate response given the dialogue history as context. NusaDialogue extends existing open-domain dialogue system efforts to 3 underrepresented languages. Unlike other multilingual open-domain dialogue datasets such as XPersona (Lin et al., 2021), our annotation process avoids translation entirely, keeping the content culturally relevant to each language.
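For the open-domain dialogue task, each dialogue can be decomposed into (context, response) pairs, one per turn after the first. A minimal sketch, assuming the `Speaker: utterance` line format used in the dataset examples:

```python
def context_response_pairs(dialogue):
    """Yield (context, response) pairs from a newline-separated dialogue."""
    turns = [t for t in dialogue.split("\n") if t.strip()]
    for i in range(1, len(turns)):
        # The context is every turn so far; the response is the next turn.
        yield "\n".join(turns[:i]), turns[i]
```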
Leaderboards
| Type | Model | Minangkabau | Buginese | Balinese |
|---|---|---|---|---|
| Fine-Tuning | IndoBART | 45.27 | 41.87 | 34.38 |
| | IndoGPT | 12.67 | 14.26 | 12.00 |
| | mT5-large | 21.48 | 28.43 | 21.06 |
| Zero-Shot Prompting | Llama-2-13b-chat-hf | 2.97 ± 0.02 | 2.27 ± 0.00 | 2.84 ± 0.00 |
| | Llama-2-13b-hf | 1.38 ± 0.00 | 0.87 ± 0.00 | 0.74 ± 0.00 |
| | Llama-2-7b-chat-hf | 1.40 ± 0.00 | 1.39 ± 0.00 | 2.00 ± 0.00 |
| | Llama-2-7b-hf | 1.14 ± 0.00 | 1.43 ± 0.00 | 1.28 ± 0.00 |
| | Merak-7B-v1 | 1.20 ± 0.00 | 0.70 ± 0.00 | 0.76 ± 0.00 |
| | Mistral-7B-Instruct-v0.1 | 2.05 ± 0.00 | 1.68 ± 0.00 | 1.83 ± 0.00 |
| | Mistral-7B-v0.1 | 1.46 ± 0.00 | 0.88 ± 0.00 | 1.39 ± 0.00 |
| | Wizard-Vicuna-13B-Uncensored-GPTQ | 0.71 ± 0.00 | 0.73 ± 0.00 | 1.36 ± 0.00 |
| | bloom-7b1 | 2.27 ± 0.00 | 1.56 ± 0.00 | 1.95 ± 0.00 |
| | bloomz-7b1-mt | 2.03 ± 0.00 | 1.36 ± 0.00 | 1.66 ± 0.00 |
| | gpt-3.5-turbo | 10.82 ± 0.33 | 11.54 ± 0.78 | 12.04 ± 0.47 |
| | gpt-4 | 13.93 ± 0.00 | 9.33 ± 0.00 | 11.86 ± 0.00 |
| | gpt-neo-1.3B | 15.42 ± 0.01 | 22.90 ± 0.01 | 22.60 ± 0.03 |
| | zephyr-7b-alpha | 1.31 ± 0.00 | 1.23 ± 0.00 | 2.03 ± 0.00 |
| | zephyr-7b-beta | 1.84 ± 0.00 | 0.97 ± 0.00 | 1.91 ± 0.00 |
| Few-Shot Prompting | Llama-2-13b-chat-hf | 4.43 ± 0.00 | 3.96 ± 0.00 | 4.38 ± 0.00 |
| | Llama-2-13b-hf | 1.01 ± 0.00 | 0.72 ± 0.00 | 0.88 ± 0.00 |
| | Llama-2-7b-chat-hf | 1.81 ± 0.00 | 1.57 ± 0.00 | 2.00 ± 0.00 |
| | Llama-2-7b-hf | 1.18 ± 0.00 | 3.36 ± 0.00 | 1.56 ± 0.00 |
| | Merak-7B-v1 | 1.08 ± 0.00 | 0.74 ± 0.00 | 1.00 ± 0.00 |
| | Mistral-7B-Instruct-v0.1 | 3.20 ± 0.00 | 1.39 ± 0.00 | 1.35 ± 0.00 |
| | Mistral-7B-v0.1 | 2.25 ± 0.00 | 2.17 ± 0.00 | 2.42 ± 0.01 |
| | Wizard-Vicuna-13B-Uncensored-GPTQ | 0.68 ± 0.00 | 0.70 ± 0.00 | 0.74 ± 0.00 |
| | bloom-7b1 | 0.41 ± 0.00 | 1.18 ± 0.00 | 1.17 ± 0.00 |
| | bloomz-7b1-mt | 1.30 ± 0.00 | 0.49 ± 0.00 | 1.03 ± 0.00 |
| | gpt-3.5-turbo | 13.90 ± 0.02 | 14.13 ± 0.21 | 20.14 ± 0.32 |
| | gpt-neo-1.3B | 3.88 ± 0.03 | 10.56 ± 0.09 | 7.63 ± 0.05 |
| | zephyr-7b-alpha | 2.23 ± 0.00 | 1.31 ± 0.00 | 1.87 ± 0.00 |
| | zephyr-7b-beta | 4.29 ± 0.00 | 1.67 ± 0.00 | 2.42 ± 0.00 |
Dataset Structure
Data Fields
Every instance contains the following fields:

- `annotator_name`: the name of the annotator
- `annotator_gender`: the gender of the annotator (F for female, M for male)
- `language`: the language code (min for Minangkabau, ban for Balinese, bug for Buginese)
- `dialect`: the dialect information
- `topic`: the overarching topic of the dialogue
- `subtopic`: the subtopic within the main topic
- `dialogue`: the actual dialogue content
- `summary`: a summary of the dialogue
- `type`: the paragraph type of the summary, one of:
  - `argumentation`: a paragraph that conveys the author's ideas or opinions, accompanied by evidence and facts, with the aim of convincing the reader that these ideas and opinions are correct and proven
  - `description`: a paragraph that describes an object with words that stimulate the reader's senses, so that the reader can see, hear, or feel what they are reading
  - `exposition`: a paragraph that explains and teaches a topic to the reader with the aim of providing information that expands the reader's knowledge
  - `narration`: a paragraph that tells a story or event sequentially and chronologically
  - `persuasion`: a paragraph that includes evidence, data, and facts, with the aim of persuading and influencing the reader to do something according to what is stated in the paragraph
Example:

```json
{
  "annotator_name": "Annotator_1_A",
  "annotator_gender": "F",
  "language": "bug",
  "dialect": "Pinrang",
  "topic": "Pekerjaan\n(Occupation)",
  "subtopic": "Pedagang\n(trader)",
  "dialogue": "Amir: Siaga mu ala pangngalli iye esso'e?\nTeguh: Engka mu sa, tapi dek na maega. Aga biasa nasabari di na tatta pendapatan'e?\nAmir: Engka siarega passalang biasanna sabbari na pengaruhi wasselena paddangkang'e..........,",
  "summary": "Waselena paddangkang'e' tak' gattung'i pole jenis usaha, onrong, skala operasi, nenniya faktor laingnge. Engka siarega passalang biasanna sabbari na pengaruhi wasselena paddangkang'e. ..........,",
  "type": "exposition"
}
```
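Instances can be inspected programmatically. The sketch below uses a shortened copy of the instance above (dialogue and summary truncated) and simple whitespace tokenization as a rough word count:

```python
# Shortened copy of the example instance above (dialogue/summary truncated).
instance = {
    "annotator_gender": "F",
    "language": "bug",
    "dialect": "Pinrang",
    "type": "exposition",
    "dialogue": "Amir: Siaga mu ala pangngalli iye esso'e?\nTeguh: Engka mu sa.",
    "summary": "Waselena paddangkang'e' tak' gattung'i pole jenis usaha.",
}

def word_count(text):
    # Rough whitespace tokenization; the full dialogues and summaries
    # target 200 and 100 words respectively.
    return len(text.split())

# One turn per line, in the "Speaker: utterance" format.
turns = [t for t in instance["dialogue"].split("\n") if t.strip()]
print(len(turns), word_count(instance["summary"]))
```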
Data Instances & Splits
The data is split into three subsets, i.e., training, validation, and test.
| Language | Data Split | Num Samples | Num Words |
|---|---|---|---|
| Minangkabau | Training | 8354 | 2990358 |
| | Validation | 1000 | 351222 |
| | Test | 1000 | 362353 |
| Balinese | Training | 8254 | 2923855 |
| | Validation | 1000 | 353718 |
| | Test | 1000 | 352008 |
| Buginese | Training | 8276 | 2979379 |
| | Validation | 1000 | 358204 |
| | Test | 1000 | 346141 |
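The per-split figures can be aggregated to sanity-check totals; the statistics in the snippet below are copied from the table above:

```python
# Split statistics from the table above: (num samples, num words).
SPLITS = {
    "Minangkabau": {"train": (8354, 2990358), "validation": (1000, 351222), "test": (1000, 362353)},
    "Balinese": {"train": (8254, 2923855), "validation": (1000, 353718), "test": (1000, 352008)},
    "Buginese": {"train": (8276, 2979379), "validation": (1000, 358204), "test": (1000, 346141)},
}

def totals(language):
    """Return (total samples, total words) across all splits of a language."""
    samples = sum(n for n, _ in SPLITS[language].values())
    words = sum(w for _, w in SPLITS[language].values())
    return samples, words
```

For every language this yields about 3.6-3.7 million words in total, or roughly 355-360 words per dialogue-summary pair.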
Dataset Creation
Topic Coverage
| Topics | Minangkabau | Balinese | Buginese |
|---|---|---|---|
| Activities | 1446 | 1505 | 1538 |
| Cultures | 390 | 366 | 480 |
| Electronics | 438 | 317 | 336 |
| Emotion | 100 | 207 | 0 |
| Food And Beverages | 677 | 565 | 493 |
| History | 240 | 240 | 320 |
| Hobbies | 1356 | 1229 | 1306 |
| Leisures | 314 | 355 | 240 |
| News | 88 | 0 | 66 |
| Occupation | 1166 | 1333 | 1386 |
| Politics | 342 | 438 | 472 |
| Religion | 324 | 324 | 432 |
| Science | 358 | 504 | 514 |
| Social Media And Application | 220 | 59 | 165 |
| Sports | 1389 | 1264 | 1238 |
| Traditional Games | 1232 | 1134 | 1039 |
| Transportations | 275 | 415 | 252 |
| TOTAL | 10355 | 10255 | 10277 |
Annotations
Annotation process
We conduct dialogue-paragraph writing by instructing the annotators to write a pair consisting of a 200-word dialogue and a 100-word paragraph on a given topic. The topics are manually designed to cover a wide range of domains. The following are the detailed criteria for dialogue-paragraph writing.
- Dialogue
  - The dialogue has two speakers
  - Each speaker has 5 conversation turns
  - The dialogue is written based on the given topic
  - The dialogue consists of a minimum of 200 words
- Paragraph
  - The paragraph covers the same content as the dialogue, on the same topic
  - The paragraph contains all the important information in the dialogue
  - The paragraph is developed according to the specified type
  - The paragraph consists of a minimum of 100 words
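These dialogue criteria can be checked automatically. The validator below is a hypothetical helper that assumes the `Speaker: utterance` line format from the example instance and simple whitespace word counting:

```python
def check_dialogue(dialogue):
    """Check the dialogue criteria: two speakers, 5 turns each, >= 200 words."""
    turns = [line for line in dialogue.split("\n") if line.strip()]
    turns_per_speaker = {}
    for turn in turns:
        # Everything before the first colon is the speaker name.
        speaker, _, _utterance = turn.partition(":")
        name = speaker.strip()
        turns_per_speaker[name] = turns_per_speaker.get(name, 0) + 1
    return {
        "two_speakers": len(turns_per_speaker) == 2,
        "five_turns_each": all(n == 5 for n in turns_per_speaker.values()),
        "min_200_words": len(dialogue.split()) >= 200,
    }
```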
The initial data target for each language was 10,000 dialogue-paragraph pairs, with a total of 3 million words. The final amount of collected data exceeded this initial target, as shown in the topic coverage table above.
Who are the annotators?
We conduct corpus construction through human annotation by expert annotators. All expert annotators are native speakers of their target language who have gone through a selection process. Developing data in a local language requires a team that is competent and experienced in that language, and annotators play a crucial role in compiling high-quality local language data. Therefore, strict qualifications are required for the candidate annotators to be recruited.

The qualifications include educational background and experience related to language. Annotator candidates must have good knowledge of the local language they are proficient in and of its sentence structure. Good writing skills are a plus, considering that the annotator's tasks include creating dialogues and paragraphs. Additionally, annotators are expected to have the resilience to work with a large amount of data, so commitment from annotators is also required.

The recruitment process gathered a total of 462 annotator candidates across the 3 languages: 88 candidates for Balinese, 174 for Buginese, and 200 for Minangkabau. Out of these 462 applicants, 118 candidates, or approximately 25%, were eligible to participate in the annotation process. Of that number, only 100 persevered until the annotation process was completed, while the rest withdrew from the project midway through.
The following is the distribution of dialect diversity from the annotators.
| Language | Dialects |
|---|---|
| Balinese | Badung, Bali, Bali Aga, Bangli, Buleleng, Dataran, Denpasar, Gianyar, Karangasem, Klungkung, Singaraja, Tabanan |
| Buginese | Barru, Bone, Bugis, Bulukumba, Magai Io, Makassar, Maros, Pangkep, Pinrang, Sengkang, Sidenreng Rappang, Sinjai, Soppeng, Wajo |
| Minangkabau | Agam, Bukittinggi, Minangkabau, Padang, Padang Panjang, Pariaman, Pasaman, Payakumbuh, Sijunjung, Tanah Datar |
Personal and Sensitive Information
In the process of defining topics, there are several topics that have the potential to cause opinion bias among annotators. These topics are usually related to emotions, for instance liking or disliking something. It should be understood that this is the annotator's subjectivity and has nothing to do with the organization's values.
Considerations for Using the Data
NusaDialogue can support the development of language tools, educational materials, and automated summarization systems that can assist in language education and literacy efforts.
Supporting Minority Languages in the Digital Space
The digital divide is often more pronounced for speakers of minority and underrepresented languages. Including these languages in summarization datasets helps bridge this gap, ensuring that digital platforms and technologies are accessible and beneficial to a broader range of linguistic communities.
Encouraging Linguistic Research
The creation of NusaDialogue can stimulate interest and research in linguistics, language preservation, and computational linguistics. This can lead to a better understanding of the unique linguistic features of these languages and the development of more tailored language technologies.
Addressing Bias and Fairness in NLP
Through NusaDialogue, there is an opportunity to address the biases present in NLP systems, which are often trained on data from dominant languages. This can contribute to the development of fairer and less biased language technologies.
In conclusion, the social impact of NusaDialogue extends beyond technology to cultural preservation, education, inclusivity, and cross-cultural understanding. It has the potential to empower communities, promote linguistic diversity, and contribute to a more equitable and interconnected global society.
Potential Bias in NusaDialogue
Figure 1. Topic Distribution per Annotator Gender
In an effort to build a high-quality dataset that does not discriminate between genders, we conduct the annotation in a gender-balanced manner. Despite these efforts, we still observe biases across topics. Biases in the languages under study have not been explored before, and our analysis of these biases could lead to less discriminating and more gender-equal research in the future. We show the topic distribution per annotator gender in Figure 1.
Figure 2. Topic Distribution per Actor Gender for (top) male and (bottom) female annotators
Furthermore, since NusaDialogue consists of dialogues between two people, we further analyze the choice of actor for each annotator gender; the results are shown in Figure 2. In most topics, annotators of each gender tend to use actors of the same gender, although to different degrees across topics. For instance, male annotators use more female actors in conversations relating to transportations and religion, while they use more male actors in conversations relating to history and leisures. Similarly, female annotators tend to use more male actors in conversations relating to traditional games and sports, while they use more female actors in conversations relating to food and beverages and emotion.
Additional Information
Licensing Information
The dataset is released under the terms of CC-BY-SA 4.0. By using this dataset, you are also bound by the respective Terms of Use and License of the dataset. For commercial use by small businesses and startups, please contact us ([email protected]) for permission by providing your company profile and purpose of usage.
Citation Information
```bibtex
@article{purwarianti2023nusadialogue,
  title={NusaDialogue: Dialogue Summarization and Generation for Underrepresented and Extremely Low-Resource Languages},
  author={Purwarianti, Ayu and Adhista, Dea and Baptiso, Agung and Mahfuzh, Miftahul and Sabila, Yusrina and Cahyawijaya, Samuel and Aji, Alham Fikri},
  journal={arXiv preprint arXiv:(coming soon)},
  url={https://huggingface.co/datasets/prosa-text/nusa-dialogue},
  year={2023}
}
```
Acknowledgement
This research work is funded and supported by The Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH and FAIR Forward - Artificial Intelligence for all. We thank Direktorat Jenderal Pendidikan Tinggi, Riset, dan Teknologi Kementerian Pendidikan, Kebudayaan, Riset, dan Teknologi (Ditjen DIKTI) for providing the computing resources for this project.
Contact Us
If you have any questions, please contact our support team at [email protected].