---
task_categories:
  - translation
language:
  - en
  - kn
tags:
  - machine-translation
  - nllb
  - english
  - kannada
  - parallel-corpus
  - multilingual
  - low-resource
pretty_name: English-Kannada NLLB Machine Translation Dataset
size_categories:
  - 10K<n<100K
---

# English-Kannada NLLB Machine Translation Dataset

This dataset contains English-Kannada parallel text from the NLLB dataset, along with fresh machine translations produced by an NLLB model.

## Dataset Structure

- Train: 16,702 examples
- Test: 8,295 examples
- Validation: 4,017 examples

## Features

- `en`: Source English text (from the NLLB dataset)
- `kn`: Human-translated Kannada text (from the NLLB dataset)
- `kn_nllb`: Machine-translated Kannada text, generated with the `facebook/nllb-200-distilled-600M` model

While `kn` translations are available in the NLLB dataset, their quality is poor. Therefore, `kn_nllb` was created by re-translating the English source text with NLLB's distilled model to obtain cleaner translations.
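As a rough illustration, the `kn_nllb` column can be reproduced with the `transformers` translation pipeline. This is a minimal sketch assuming the standard FLORES-200 language codes (`eng_Latn`, `kan_Knda`); it is not necessarily the exact script used to build the dataset.

```python
from transformers import pipeline

# Load the distilled NLLB model; NLLB expects FLORES-200 language codes
# (eng_Latn for English, kan_Knda for Kannada).
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="kan_Knda",
)

# Translate a batch of English sentences into Kannada
outputs = translator(["The weather is beautiful today."], max_length=256)
print(outputs[0]["translation_text"])
```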

## Preprocessing

- Filtering: only pairs where both the English and the NLLB-translated Kannada text contain at least 5 words were kept (see the sketch below)
- Train-test split: 2:1 ratio
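A minimal sketch of the length filter described above, assuming simple whitespace tokenization; the actual preprocessing script may differ.

```python
def has_min_words(text: str, min_words: int = 5) -> bool:
    # Count whitespace-separated tokens (a simplifying assumption
    # for both English and Kannada)
    return len(text.split()) >= min_words

examples = [
    {"en": "I love reading interesting books.",
     "kn_nllb": "ನಾನು ಆಸಕ್ತಿದಾಯಕ ಪುಸ್ತಕಗಳನ್ನು ಓದಲು ಪ್ರೀತಿಸುತ್ತೇನೆ."},
    {"en": "Hello there.", "kn_nllb": "ನಮಸ್ಕಾರ."},  # too short: dropped
]

# Keep only pairs where both sides meet the minimum length
filtered = [
    ex for ex in examples
    if has_min_words(ex["en"]) and has_min_words(ex["kn_nllb"])
]
print(len(filtered))  # 1
```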

## Sample Dataset

| en | kn | kn_nllb |
| --- | --- | --- |
| The weather is beautiful today. | ಇಂದು ಹವಾಮಾನ ಅದ್ಭುತವಾಗಿದೆ. | ಇಂದು ಹವಾಮಾನ ಸುಂದರವಾಗಿದೆ. |
| I love reading interesting books. | ನಾನು ಆಸಕ್ತಿದಾಯಕ ಪುಸ್ತಕಗಳನ್ನು ಓದಲು ಇಷ್ಟಪಡುತ್ತೇನೆ. | ನಾನು ಆಸಕ್ತಿದಾಯಕ ಪುಸ್ತಕಗಳನ್ನು ಓದಲು ಪ್ರೀತಿಸುತ್ತೇನೆ. |

## Loading the Dataset

### Using Pandas

```python
import pandas as pd

# Reading hf:// paths requires the huggingface_hub package
# (pip install huggingface_hub)
splits = {
    "train": "data/train-00000-of-00001.parquet",
    "validation": "data/validation-00000-of-00001.parquet",
    "test": "data/test-00000-of-00001.parquet",
}

# Load all splits into DataFrames
dataframes = {}
for split, path in splits.items():
    dataframes[split] = pd.read_parquet(f"hf://datasets/pavan-naik/mt-nllb-en-kn/{path}")

# Access individual splits
train_data = dataframes["train"]
test_data = dataframes["test"]
validation_data = dataframes["validation"]
```

### Using HuggingFace 🤗 Datasets

```python
from datasets import load_dataset

# Load from the HuggingFace Hub
dataset = load_dataset("pavan-naik/mt-nllb-en-kn")

# Access splits
train_data = dataset["train"]
test_data = dataset["test"]
validation_data = dataset["validation"]
```

## Use Cases

- Evaluating NLLB translations for English-Kannada (see the evaluation sketch below)
- Training/fine-tuning MT models
- Analyzing translation quality: NLLB dataset references vs NLLB model outputs
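For the evaluation use case, here is a minimal sketch using sacreBLEU. It treats the dataset's `kn` column as the reference and `kn_nllb` as the hypothesis, which is a simplifying assumption given the note above about the quality of the original `kn` translations; the metric choice is illustrative, not part of the dataset.

```python
import sacrebleu
from datasets import load_dataset

# Load the test split
test = load_dataset("pavan-naik/mt-nllb-en-kn", split="test")

hypotheses = test["kn_nllb"]   # model outputs
references = [test["kn"]]      # sacreBLEU expects a list of reference lists

# chrF tends to be more informative than BLEU for morphologically
# rich languages such as Kannada
print(sacrebleu.corpus_chrf(hypotheses, references))
print(sacrebleu.corpus_bleu(hypotheses, references))
```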

## Citation

- NLLB Team et al., "No Language Left Behind: Scaling Human-Centered Machine Translation", 2022.
- OPUS parallel corpus

## License

Same as the NLLB license.