---
task_categories:
- translation
language:
- en
- kn
tags:
- machine-translation
- nllb
- english
- kannada
- parallel-corpus
- multilingual
- low-resource
pretty_name: English-Kannada NLLB Machine Translation Dataset
size_categories:
- 10K<n<100K
---
# English-Kannada NLLB Machine Translation Dataset
This dataset contains English-Kannada parallel text from the NLLB dataset, along with fresh machine translations produced by an NLLB model.
## Dataset Structure
- Train: 16,702 examples
- Test: 8,295 examples
- Validation: 4,017 examples
### Features
- `en`: Source English text (from NLLB Dataset)
- `kn`: Human-translated Kannada text (from NLLB Dataset)
- `kn_nllb`: Machine-translated Kannada text produced with the `facebook/nllb-200-distilled-600M` model
While `kn` translations are available in the NLLB dataset, their quality is poor. Therefore, we created `kn_nllb` by translating the English source text using NLLB's distilled model to obtain cleaner translations.
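As an illustration, here is a minimal sketch of how such translations can be generated with the `transformers` translation pipeline. The model name comes from the card; the decoding settings (`max_length`) are assumptions, since the exact generation parameters are not documented:

```python
from transformers import pipeline

# Sketch of how a `kn_nllb`-style translation could be produced.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",  # FLORES-200 code for English
    tgt_lang="kan_Knda",  # FLORES-200 code for Kannada
)

outputs = translator(["The weather is beautiful today."], max_length=256)
print(outputs[0]["translation_text"])
```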
## Preprocessing
- Filtered: kept only pairs with at least 5 words in both the English text and the NLLB-translated Kannada text
- Train-test split: 2:1 ratio (a hypothetical reconstruction of both steps is sketched below)
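The preprocessing script itself is not published, so the following is only a sketch under stated assumptions; the combined input file name and the random seed are hypothetical:

```python
import pandas as pd

# Assumed name for the combined, unsplit corpus file.
df = pd.read_parquet("en_kn_pairs.parquet")

# Keep only pairs with at least 5 words on both sides.
keep = (df["en"].str.split().str.len() >= 5) & (df["kn_nllb"].str.split().str.len() >= 5)
df = df[keep]

# Shuffle, then split train:test at a 2:1 ratio.
shuffled = df.sample(frac=1.0, random_state=42)  # seed is an assumption
cut = len(shuffled) * 2 // 3
train_df, test_df = shuffled.iloc[:cut], shuffled.iloc[cut:]
```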
## Sample Dataset
| en | kn | kn_nllb |
|---|---|---|
| The weather is beautiful today. | ಇಂದು ಹವಾಮಾನ ಅದ್ಭುತವಾಗಿದೆ. | ಇಂದು ಹವಾಮಾನ ಸುಂದರವಾಗಿದೆ. |
| I love reading interesting books. | ನಾನು ಆಸಕ್ತಿದಾಯಕ ಪುಸ್ತಕಗಳನ್ನು ಓದಲು ಇಷ್ಟಪಡುತ್ತೇನೆ. | ನಾನು ಆಸಕ್ತಿದಾಯಕ ಪುಸ್ತಕಗಳನ್ನು ಓದಲು ಪ್ರೀತಿಸುತ್ತೇನೆ. |
## Loading the Dataset
### Using Pandas
```python
import pandas as pd

splits = {
    'train': 'data/train-00000-of-00001.parquet',
    'validation': 'data/validation-00000-of-00001.parquet',
    'test': 'data/test-00000-of-00001.parquet'
}

# Load all splits into DataFrames
dataframes = {}
for split, path in splits.items():
    dataframes[split] = pd.read_parquet(f"hf://datasets/pavan-naik/mt-nllb-en-kn/{path}")

# Access individual splits
train_data = dataframes['train']
test_data = dataframes['test']
validation_data = dataframes['validation']
```
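Note: reading `hf://` paths with pandas requires the `huggingface_hub` package, which registers the Hugging Face filesystem with fsspec.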
### Using HuggingFace 🤗 Datasets
```python
from datasets import load_dataset
# Load from HuggingFace Hub
dataset = load_dataset("pavan-naik/mt-nllb-en-kn")
# Access splits
train_data = dataset["train"]
test_data = dataset["test"]
validation_data = dataset["validation"]
```
## Use Cases
- Evaluating NLLB translations for English-Kannada
- Training/fine-tuning MT models
- Analyzing translation quality: NLLB dataset references vs. NLLB model outputs (see the sketch below)
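As an illustration of the last use case, a minimal sketch that scores the model outputs (`kn_nllb`) against the dataset's original references (`kn`) on the test split, assuming `sacrebleu` is installed:

```python
import sacrebleu
from datasets import load_dataset

test = load_dataset("pavan-naik/mt-nllb-en-kn", split="test")

hyps = test["kn_nllb"]
refs = [test["kn"]]  # sacrebleu expects a list of reference lists

# chrF tends to be more informative than BLEU for morphologically rich
# languages such as Kannada.
print(sacrebleu.corpus_chrf(hyps, refs))
print(sacrebleu.corpus_bleu(hyps, refs))
```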
## Citation
- NLLB Team et al., "No Language Left Behind: Scaling Human-Centered Machine Translation," 2022. arXiv:2207.04672.
- OPUS parallel corpus: https://opus.nlpl.eu/
## License
Same as the NLLB license.