---
pretty_name: TEDHrlr
paperswithcode_id: null
---

# Dataset Card for "ted_hrlr"
## Table of Contents

- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
## Dataset Description

- **Homepage:** https://github.com/neulab/word-embeddings-for-nmt
- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
- **Size of downloaded dataset files:** 1749.12 MB
- **Size of the generated dataset:** 268.61 MB
- **Total amount of disk used:** 2017.73 MB
### Dataset Summary

Datasets derived from TED talk transcripts for comparing similar language pairs where one language is high-resource and the other is low-resource.
### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances
#### az_to_en

- **Size of downloaded dataset files:** 124.94 MB
- **Size of the generated dataset:** 1.46 MB
- **Total amount of disk used:** 126.40 MB

An example of 'train' looks as follows.

```
{
    "translation": {
        "az": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you ."
    }
}
```
#### aztr_to_en

- **Size of downloaded dataset files:** 124.94 MB
- **Size of the generated dataset:** 38.28 MB
- **Total amount of disk used:** 163.22 MB

An example of 'train' looks as follows.

```
{
    "translation": {
        "az_tr": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you ."
    }
}
```
#### be_to_en

- **Size of downloaded dataset files:** 124.94 MB
- **Size of the generated dataset:** 1.36 MB
- **Total amount of disk used:** 126.29 MB

An example of 'train' looks as follows.

```
{
    "translation": {
        "be": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you ."
    }
}
```
#### beru_to_en

- **Size of downloaded dataset files:** 124.94 MB
- **Size of the generated dataset:** 57.41 MB
- **Total amount of disk used:** 182.35 MB

An example of 'validation' looks as follows. This example was too long and was cropped:

```
{
    "translation": "{\"be_ru\": \"11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .\", \"en\": \"when i was..."
}
```
#### es_to_pt

- **Size of downloaded dataset files:** 124.94 MB
- **Size of the generated dataset:** 8.71 MB
- **Total amount of disk used:** 133.65 MB

An example of 'validation' looks as follows. This example was too long and was cropped:

```
{
    "translation": "{\"es\": \"11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .\", \"pt\": \"when i was 11..."
}
```
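Each record is a dictionary with a single `translation` key mapping language codes to sentence strings. A minimal sketch of pulling out a parallel sentence pair, using a hard-coded record copied from the `az_to_en` example above so that no download is needed:

```python
# A record in the az_to_en schema, copied from the example above.
example = {
    "translation": {
        "az": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you .",
    }
}

# Extract the source/target pair, e.g. for training an az -> en model.
source = example["translation"]["az"]
target = example["translation"]["en"]
print(source)
print(target)
```

The same access pattern applies to every configuration; only the language codes inside `translation` change.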
### Data Fields

The data fields are the same among all splits.
#### az_to_en

- `translation`: a multilingual `string` variable, with possible languages including `az`, `en`.

#### aztr_to_en

- `translation`: a multilingual `string` variable, with possible languages including `az_tr`, `en`.

#### be_to_en

- `translation`: a multilingual `string` variable, with possible languages including `be`, `en`.

#### beru_to_en

- `translation`: a multilingual `string` variable, with possible languages including `be_ru`, `en`.

#### es_to_pt

- `translation`: a multilingual `string` variable, with possible languages including `es`, `pt`.
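The configuration-to-language-pair mapping above can be captured in a small validation helper. This is a hedged sketch, not part of the dataset's own tooling: the `CONFIG_LANGUAGES` table is transcribed from the Data Fields section, and `check_record` is a hypothetical helper name.

```python
# Language pair for each configuration, transcribed from the Data Fields section.
CONFIG_LANGUAGES = {
    "az_to_en": ("az", "en"),
    "aztr_to_en": ("az_tr", "en"),
    "be_to_en": ("be", "en"),
    "beru_to_en": ("be_ru", "en"),
    "es_to_pt": ("es", "pt"),
}

def check_record(config: str, record: dict) -> bool:
    """Return True if `record` matches the translation schema of `config`."""
    expected = set(CONFIG_LANGUAGES[config])
    translation = record.get("translation")
    return isinstance(translation, dict) and set(translation) == expected

record = {"translation": {"az_tr": "zəhmət olmasa ...", "en": "please ..."}}
print(check_record("aztr_to_en", record))  # True
print(check_record("az_to_en", record))    # False: az_to_en expects "az", not "az_tr"
```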
### Data Splits

| name       |  train | validation | test |
|------------|-------:|-----------:|-----:|
| az_to_en   |   5947 |        672 |  904 |
| aztr_to_en | 188397 |        672 |  904 |
| be_to_en   |   4510 |        249 |  665 |
| beru_to_en | 212615 |        249 |  665 |
| es_to_pt   |  44939 |       1017 | 1764 |
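The paired configurations (`aztr_to_en`, `beru_to_en`) fold a related high-resource language into the low-resource training data, which is why their train splits are so much larger while the validation and test splits stay identical. A quick comparison computed from the table above:

```python
# Train split sizes, transcribed from the Data Splits table above.
train_sizes = {
    "az_to_en": 5947,
    "aztr_to_en": 188397,
    "be_to_en": 4510,
    "beru_to_en": 212615,
    "es_to_pt": 44939,
}

# Rough augmentation factor of each paired config over its low-resource base.
print(train_sizes["aztr_to_en"] // train_sizes["az_to_en"])  # 31
print(train_sizes["beru_to_en"] // train_sizes["be_to_en"])  # 47
```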
## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Citation Information
```bibtex
@inproceedings{Ye2018WordEmbeddings,
  author    = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},
  title     = {When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?},
  booktitle = {HLT-NAACL},
  year      = {2018},
}
```
### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.