adding README file

README.md CHANGED
@@ -30,4 +30,119 @@ configs:
    path: data/kaa_rus-*
  - split: kaa_uzb
    path: data/kaa_uzb-*
language:
- en
- ru
- uz
- kaa
pretty_name: dilmash
size_categories:
- 100K<n<1M
license: mit
task_categories:
- translation
tags:
- dilmash
- karakalpak
---

# Dilmash: Karakalpak Parallel Corpus

This repository contains a parallel corpus for the Karakalpak language, developed as part of the research paper "Open Language Data Initiative: Advancing Low-Resource Machine Translation for Karakalpak".

## Dataset Description

The Karakalpak Parallel Corpus is a collection of 300,000 sentence pairs designed to support machine translation tasks involving the Karakalpak language. It includes:

- Uzbek-Karakalpak (100,000 pairs)
- Russian-Karakalpak (100,000 pairs)
- English-Karakalpak (100,000 pairs)

## Usage

This dataset is intended for training and evaluating machine translation models involving the Karakalpak language.

To load the dataset, run:

```python
from datasets import load_dataset

dilmash_corpus = load_dataset("tahrirchi/dilmash")
```
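
The corpus loads as a `DatasetDict` with one split per language pair (see the Data Splits table below); a single direction can also be loaded on its own:

```python
from datasets import load_dataset

# The full corpus: a DatasetDict with kaa_eng, kaa_rus and kaa_uzb splits.
dilmash_corpus = load_dataset("tahrirchi/dilmash")
print(dilmash_corpus)

# Or load a single language pair directly.
kaa_eng = load_dataset("tahrirchi/dilmash", split="kaa_eng")
print(kaa_eng[0])  # one record; see Data Instances below
```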

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 77.4 MB
- **Size of the generated dataset:** 46.1 MB
- **Total amount of disk used:** 123.5 MB

An example from the `kaa_eng` split looks as follows:
```
{'src_lang': 'kaa_Latn',
 'src_sent': 'Pedagogikalıq ideal balaǵa ıktıyatlılıq penen katnasta bolıw principine bárqulla, úlken hám kishi jumıslarda súyeniwdi talan etedi.',
 'tgt_lang': 'eng_Latn',
 'tgt_sent': 'The ideal of education demands that the principle of treating children with care be observed at all times, in both big and small matters.'}
```

### Data Fields

The data fields are the same among all splits.

- `src_lang`: a `string` feature containing the source language code.
- `src_sent`: a `string` feature containing the sentence in the source language.
- `tgt_lang`: a `string` feature containing the target language code.
- `tgt_sent`: a `string` feature containing the sentence in the target language.
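
Because the language codes travel with each record, the corpus can be reshaped for sequence-to-sequence fine-tuning with a single `map`. A minimal sketch; the source-side `>>` prefix format is an illustrative choice, not a convention this dataset prescribes:

```python
from datasets import load_dataset

kaa_eng = load_dataset("tahrirchi/dilmash", split="kaa_eng")

def to_pair(example):
    # Tag the source sentence with its translation direction
    # (the ">>" prefix format is a hypothetical convention).
    prefix = f"{example['src_lang']} >> {example['tgt_lang']}: "
    return {"source": prefix + example["src_sent"],
            "target": example["tgt_sent"]}

pairs = kaa_eng.map(to_pair, remove_columns=kaa_eng.column_names)
```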

### Data Splits

| split_name | num_examples |
|------------|-------------:|
| kaa_eng    |       100000 |
| kaa_rus    |       100000 |
| kaa_uzb    |       100000 |

## Data Sources

The corpus comprises diverse parallel texts sourced from multiple domains:

- 23% of sentences from news sources
- 34% of sentences from books (novels, non-fiction)
- 24% of sentences from bilingual dictionaries
- 19% of sentences from school textbooks

Additionally, 4,000 English-Karakalpak pairs were sourced from the [GATITOS project (Jones et al., 2023)](https://aclanthology.org/2023.emnlp-main.26).

## Data Preparation

The data mining process used local mining: parallel sentences were extracted only from translations of the same book, document, or article. Sentence alignment was performed using LaBSE (Language-agnostic BERT Sentence Embedding) embeddings.
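
As a rough illustration of that alignment step, here is a minimal sketch using the public `sentence-transformers/LaBSE` checkpoint; the greedy matching and the similarity threshold are assumptions for illustration, not the exact procedure used to build Dilmash:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

def align(src_sents, tgt_sents, threshold=0.7):
    # Embed both sides; unit-normalised so the dot product is cosine similarity.
    src_emb = model.encode(src_sents, normalize_embeddings=True)
    tgt_emb = model.encode(tgt_sents, normalize_embeddings=True)
    sim = src_emb @ tgt_emb.T
    pairs = []
    for i, row in enumerate(sim):
        j = int(np.argmax(row))
        if row[j] >= threshold:  # threshold is illustrative
            pairs.append((src_sents[i], tgt_sents[j], float(row[j])))
    return pairs
```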

## Citation

If you use this dataset in your research, please cite our paper:

```bibtex
@inproceedings{mamasaidov2024advancing,
  title={Open Language Data Initiative: Advancing Low-Resource Machine Translation for Karakalpak},
  author={Mamasaidov, Mukhammadsaid and Shopulatov, Abror},
  booktitle={Proceedings of the OLDI Workshop},
  year={2024}
}
```

## Gratitude

We are thankful to these awesome organizations and people for helping to make it happen:

- [David Dalé](https://daviddale.ru): for advice throughout the process
- Perizad Najimova: for expertise and assistance with the Karakalpak language
- [Nurlan Pirjanov](https://www.linkedin.com/in/nurlan-pirjanov/): for expertise and assistance with the Karakalpak language
- [Atabek Murtazaev](https://www.linkedin.com/in/atabek/): for advice throughout the process
- Ajiniyaz Nurniyazov: for advice throughout the process

## Contacts

We believe that this work will enable and inspire enthusiasts around the world to uncover the hidden beauty of low-resource languages, in particular Karakalpak.

For further development of the dataset or to report issues, please contact [email protected] or [email protected].