---
pipeline_tag: translation
task_categories:
- translation
language:
- en
- ps
- sd
tags:
- english
- pashto
- sindhi
- translation corpus
- en-ps MT
- en-sd MT
- english to pashto biomedical corpus
- english to sindhi biomedical corpus
- biomedical machine translation
- domain-specific corpus
- domain-adaptation for biomedical research
- low-resource biomedical data
size_categories:
- 10M<n<100M
---
# Biomedical Domain Parallel Corpus for English-Pashto-Sindhi MT
This repository contains biomedical-domain parallel data mined from Wikipedia for the English-Pashto and English-Sindhi language pairs. Sentence pairs are retrieved at predefined similarity thresholds; a higher threshold (e.g. t90) indicates a higher degree of parallelism between the sentences.

For the full data-crawling methodology, please refer to our paper (see the Citation section below).
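As a rough illustration of threshold-based retrieval (not the paper's actual pipeline, which uses Large Language Models for domain-aligned selection), the sketch below scores candidate English-Pashto pairs with multilingual sentence embeddings and keeps those at or above a chosen threshold. The embedding model and the plain cosine scoring are assumptions made for illustration:

```python
from sentence_transformers import SentenceTransformer, util

# Simplified sketch: score candidate pairs with LaBSE embeddings (an
# assumption; the paper uses LLM-based selection) and keep pairs whose
# cosine similarity clears the threshold, e.g. 0.90 for the t90 subset.
model = SentenceTransformer("sentence-transformers/LaBSE")

def mine_parallel(en_sents, ps_sents, threshold=0.90):
    en_emb = model.encode(en_sents, convert_to_tensor=True, normalize_embeddings=True)
    ps_emb = model.encode(ps_sents, convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(en_emb, ps_emb)   # |en| x |ps| similarity matrix
    mined = []
    for i in range(len(en_sents)):
        j = int(sims[i].argmax())         # best Pashto candidate for sentence i
        if float(sims[i][j]) >= threshold:
            mined.append((en_sents[i], ps_sents[j], float(sims[i][j])))
    return mined
```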
## Corpus Details
Total Sentences: 1.6 million
- Threshold-90: 18 sentences
- Threshold-85: 57 sentences
- Threshold-80: 152 sentences
- Threshold-75: 329 sentences
- Threshold-70: 684 sentences
- Threshold-65: 2,892 sentences
- Threshold-60: 4,220 sentences
- Threshold-55: 12,256 sentences
- Threshold-50: 35,726 sentences
Domains Covered: Biomedical.
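Higher thresholds yield fewer but more reliably parallel pairs, so a practical pattern is to start from the cleanest subsets and add lower-threshold data until the size/quality trade-off suits the task. A minimal sketch, assuming one tab-separated file per threshold bucket (the file naming is hypothetical; adjust it to the actual files in this repository):

```python
# Hypothetical layout: one TSV per threshold subset, e.g. "en-ps.t90.tsv"
# holding tab-separated (English, Pashto) pairs for that bucket.
THRESHOLDS = [90, 85, 80, 75, 70, 65, 60, 55, 50]

def collect_pairs(floor=70, pair="en-ps"):
    """Concatenate all subsets with threshold >= floor."""
    pairs = []
    for t in THRESHOLDS:
        if t < floor:
            break
        with open(f"{pair}.t{t}.tsv", encoding="utf-8") as f:
            pairs.extend(line.rstrip("\n").split("\t") for line in f)
    return pairs
```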
## Usage
These resources are intended to facilitate research and development in biomedical-domain machine translation. They can be used to train new models or to improve existing ones, enabling high-quality domain-specific machine translation between English, Pashto, and Sindhi.
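For instance, a threshold subset can be loaded with the `datasets` library and fed into any MT training setup. The file name and two-column layout below are assumptions, so adjust them to the actual files in this repository:

```python
from datasets import load_dataset

# Hypothetical file name and (source, target) column layout.
ds = load_dataset(
    "csv",
    data_files={"train": "en-ps.t50.tsv"},
    delimiter="\t",
    column_names=["en", "ps"],
)
print(ds["train"][0])  # {'en': '...', 'ps': '...'}
```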
## Citation
If you use this corpus, please cite our paper:
```bibtex
@inproceedings{firdous-rauf-2023-biomedical,
    title = "Biomedical Parallel Sentence Retrieval Using Large Language Models",
    author = "Firdous, Sheema and
      Rauf, Sadaf Abdul",
    editor = "Koehn, Philipp and
      Haddow, Barry and
      Kocmi, Tom and
      Monz, Christof",
    booktitle = "Proceedings of the Eighth Conference on Machine Translation",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.wmt-1.26",
    pages = "263--270",
    abstract = "We have explored the effect of in domain knowledge during parallel sentence filtering from in domain corpora. Models built with sentences mined from in domain corpora without domain knowledge performed poorly, whereas model performance improved by more than 2.3 BLEU points on average with further domain centric filtering. We have used Large Language Models for selecting similar and domain aligned sentences. Our experiments show the importance of inclusion of domain knowledge in sentence selection methodologies even if the initial comparable corpora are in domain.",
}
```