{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:22:20.144051Z"
},
"title": "Exploring Diversity in Back Translation for Low-Resource Machine Translation",
"authors": [
{
"first": "Laurie",
"middle": [],
"last": "Burchell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Cognition, Edinburgh",
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Cognition, Edinburgh",
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Cognition, Edinburgh",
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Back translation is one of the most widely used methods for improving the performance of neural machine translation systems. Recent research has sought to enhance the effectiveness of this method by increasing the 'diversity' of the generated translations. We argue that the definitions and metrics used to quantify 'diversity' in previous work have been insufficient. This work puts forward a more nuanced framework for understanding diversity in training data, splitting it into lexical diversity and syntactic diversity. We present novel metrics for measuring these different aspects of diversity and carry out empirical analysis into the effect of these types of diversity on final neural machine translation model performance for low-resource English\u2194Turkish and mid-resource English\u2194Icelandic. Our findings show that generating back translation using nucleus sampling results in higher final model performance, and that this method of generation has high levels of both lexical and syntactic diversity. We also find evidence that lexical diversity is more important than syntactic for back translation performance.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Back translation is one of the most widely used methods for improving the performance of neural machine translation systems. Recent research has sought to enhance the effectiveness of this method by increasing the 'diversity' of the generated translations. We argue that the definitions and metrics used to quantify 'diversity' in previous work have been insufficient. This work puts forward a more nuanced framework for understanding diversity in training data, splitting it into lexical diversity and syntactic diversity. We present novel metrics for measuring these different aspects of diversity and carry out empirical analysis into the effect of these types of diversity on final neural machine translation model performance for low-resource English\u2194Turkish and mid-resource English\u2194Icelandic. Our findings show that generating back translation using nucleus sampling results in higher final model performance, and that this method of generation has high levels of both lexical and syntactic diversity. We also find evidence that lexical diversity is more important than syntactic for back translation performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The data augmentation technique of back translation (BT) is used in nearly every current neural machine translation (NMT) system to reach optimal performance (Edunov et al., 2020; Barrault et al., 2020; Akhbardeh et al., 2021, inter alia) . It involves creating a pseudo-parallel dataset by translating target-side monolingual data into the source language using a secondary NMT system (Sennrich et al., 2016) . In this way, it enables the incorporation of monolingual data into the NMT system. Whilst adding data in this way helps nearly all language pairs, it is particularly important for low-resource NMT where parallel data is scarce by definition.",
"cite_spans": [
{
"start": 158,
"end": 179,
"text": "(Edunov et al., 2020;",
"ref_id": "BIBREF11"
},
{
"start": 180,
"end": 202,
"text": "Barrault et al., 2020;",
"ref_id": null
},
{
"start": 203,
"end": 238,
"text": "Akhbardeh et al., 2021, inter alia)",
"ref_id": null
},
{
"start": 386,
"end": 409,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because of its ubiquity, there has been extensive research into how to improve BT (Burlot and Yvon, 2018; Hoang et al., 2018; Fadaee and Monz, 2018; Caswell et al., 2019) , especially in ways which increase the 'diversity' of the back-translated dataset (Edunov et al., 2018; Soto et al., 2020) . Previous work (Gimpel et al., 2013; Vanmassenhove et al., 2019) has found that machine translations lack the diversity of human productions. This is because most translation systems use some form of maximum a-posteriori (MAP) estimation, meaning that they will always favour the most probable output. Edunov et al. (2018) and Soto et al. (2020) argue that this makes standard BT data worse training data since it lacks 'richness' or diversity.",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "(Burlot and Yvon, 2018;",
"ref_id": "BIBREF6"
},
{
"start": 106,
"end": 125,
"text": "Hoang et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 126,
"end": 148,
"text": "Fadaee and Monz, 2018;",
"ref_id": "BIBREF12"
},
{
"start": 149,
"end": 170,
"text": "Caswell et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 254,
"end": 275,
"text": "(Edunov et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 276,
"end": 294,
"text": "Soto et al., 2020)",
"ref_id": "BIBREF44"
},
{
"start": 311,
"end": 332,
"text": "(Gimpel et al., 2013;",
"ref_id": "BIBREF13"
},
{
"start": 333,
"end": 360,
"text": "Vanmassenhove et al., 2019)",
"ref_id": "BIBREF50"
},
{
"start": 598,
"end": 618,
"text": "Edunov et al. (2018)",
"ref_id": "BIBREF10"
},
{
"start": 623,
"end": 641,
"text": "Soto et al. (2020)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the focus on increasing diversity in BT, what 'diversity' actually means in the context of NMT training data is ill-defined. In fact, Tevet and Berant (2021) point out that there is no standard metric for measuring diversity. Most previous work uses the BLEU score between candidate sentences or another n-gram based metric to estimate similarity (Zhu et al., 2018; Hu et al., 2019; He et al., 2018; Shen et al., 2019; Shu et al., 2019; Holtzman et al., 2020; Thompson and Post, 2020) . However, such metrics mostly measure changes in the vocabulary or spelling. Because of this, they are likely to be less sensitive to other kinds of variety such as changes in structure.",
"cite_spans": [
{
"start": 142,
"end": 165,
"text": "Tevet and Berant (2021)",
"ref_id": "BIBREF47"
},
{
"start": 355,
"end": 373,
"text": "(Zhu et al., 2018;",
"ref_id": "BIBREF54"
},
{
"start": 374,
"end": 390,
"text": "Hu et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 391,
"end": 407,
"text": "He et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 408,
"end": 426,
"text": "Shen et al., 2019;",
"ref_id": "BIBREF42"
},
{
"start": 427,
"end": 444,
"text": "Shu et al., 2019;",
"ref_id": "BIBREF43"
},
{
"start": 445,
"end": 467,
"text": "Holtzman et al., 2020;",
"ref_id": "BIBREF19"
},
{
"start": 468,
"end": 492,
"text": "Thompson and Post, 2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We argue that quantifying 'diversity' using ngram based metrics alone is insufficient. Instead, we split diversity into two aspects: variety in the word choice and spelling, and variety in structure. We call these aspects lexical diversity and syntactic diversity respectively. Here, we follow recent work in natural language generation and particularly paraphrasing (e.g. Iyyer et al., 2018; Krishna et al., 2020; Goyal and Durrett, 2020; Huang and Chang, 2021; Hosking and Lapata, 2021) which explicitly models the meaning and form of the input separately. Of course, there are likely more kinds of diversity than this, but this distinction provides a common-sense framework to extend our understanding of the concept. To our knowledge, no other previous work in data augmentation has attempted to isolate and automatically measure syntactic and lexical diversity.",
"cite_spans": [
{
"start": 373,
"end": 392,
"text": "Iyyer et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 393,
"end": 414,
"text": "Krishna et al., 2020;",
"ref_id": "BIBREF25"
},
{
"start": 415,
"end": 439,
"text": "Goyal and Durrett, 2020;",
"ref_id": "BIBREF14"
},
{
"start": 440,
"end": 462,
"text": "Huang and Chang, 2021;",
"ref_id": "BIBREF22"
},
{
"start": 463,
"end": 488,
"text": "Hosking and Lapata, 2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Building from our definition, we introduce novel metrics aimed at measuring lexical and syntactic diversity separately. We then carry out an empirical study into what effect training data with these two kinds of diversity has on final NMT performance in the context of low-resource machine translation. We do this by creating BT datasets using different generation methods and measuring their diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We then evaluate what impact different aspects of diversity have on final model performance. We find that a high level of diversity is beneficial for final NMT performance, though lexical diversity seems more important than syntactic diversity. Importantly though there are limits to both; the data should not be so 'diverse' that it affects the adequacy of the parallel data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We summarise our contributions as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We put forward a more nuanced definition of 'diversity' in NMT training data, splitting it into lexical diversity and syntactic diversity. We present two novel metrics for measuring these different aspects of diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We carry out empirical analysis into the effect of these types of diversity on final NMT model performance for lowresource English\u2194Turkish and mid-resource English\u2194Icelandic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We find that nucleus sampling is the highestperforming method of generating BT, and it combines both lexical and syntactic diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We make our code publicly available. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We explain each method we use for creating diverse BT datasets in Section 2.1, then discuss our metrics for diversity in Section 2.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "We use four methods to generate diverse BT datasets: beam search, pure sampling, nucleus sampling, and syntax-group fine-tuning. The first three were chosen because they are in common use and so more relevant for future work. The last, syntax-group fine-tuning, aims to increase syntactic diversity specifically and so allows us to separate its effect on final NMT performance from lexical diversity. For each method, we create a diverse BT dataset by generating three candidate translations for each input sentence. This allows us to measure diversity whilst keeping the 'meaning' of the sentence as similar as possible. In this way, we measure inter-sentence diversity as a proxy for the diversity of the dataset as a whole. We discuss our datasets in detail in Section 3.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating diverse back translation",
"sec_num": "2.1"
},
{
"text": "Beam search Beam search is the most common search algorithm used to decode in NMT systems. Whilst it is generally successful in finding a highprobability output, the translations it produces tend to lack diversity since it will always default to the most likely alternative in the case of ambiguity . We use beam search to generate three datasets for each language pair, using a beam size of five and no length penalty:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating diverse back translation",
"sec_num": "2.1"
},
{
"text": "\u2022 base: three million input sentences used to generate one output per input (BT dataset length: three million)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating diverse back translation",
"sec_num": "2.1"
},
{
"text": "\u2022 beam: three million input sentences used to generate three outputs per input (BT dataset length: nine million)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating diverse back translation",
"sec_num": "2.1"
},
{
"text": "\u2022 base-big: nine million input sentences used to generate one output per output (BT dataset length: nine million)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating diverse back translation",
"sec_num": "2.1"
},
{
"text": "Pure sampling An alternative to beam search is sampling from the model distribution. At each decoding step, we sample from the learned distribution without restriction to generate output. This method means we are likely to generate a much wider range of tokens than restricting our choice to those which are most likely (as in beam search). However, it also means that the generated text is less likely to be adequate (have the same meaning as the input) as the output space does not necessary restrict itself to choices which best reflect the meaning of the input. In other words, the output may be diverse, but it may not be the kind of diversity that we want for NMT training data. We create one dataset per language pair (sampling) by generating three candidate translations for each of the three million monolingual input sentences. This results in nine-million line BT dataset. We set our beam size to five when generating.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating diverse back translation",
"sec_num": "2.1"
},
{
"text": "Nucleus sampling Nucleus or top-p sampling is another sampling-based method, introduced by Holtzman et al. (2020) . Unlike pure sampling, which samples from the entire distribution, top-p sampling only samples from the highest probability tokens whose cumulative probability mass exceeds the pre-chosen threshold p. The intuition is that when only a small number of tokens are likely, we want to limit our sampling space to those. However, when there are many likely hypotheses, we want to widen the number of tokens we might sample from. We chose this method in the hope it represents a middle ground between high-probability but repetitive beam search generations, and more diverse but potentially low-adequacy pure sampling generation. We create one dataset per language pair (nucleus) by generating three hypothesis translations for each of the three million monolingual input sentences. Each dataset is therefore nine million lines long. We set the beam size to five and p to 0.95.",
"cite_spans": [
{
"start": 91,
"end": 113,
"text": "Holtzman et al. (2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating diverse back translation",
"sec_num": "2.1"
},
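The top-p filtering step described above can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation; the 0.95 threshold matches the paper, but the function and variable names are our own:

```python
import random

def nucleus_filter(probs, p=0.95):
    """Keep the smallest set of tokens whose cumulative probability
    mass reaches p, then renormalise. `probs` maps token -> probability."""
    kept, total = {}, 0.0
    for tok, pr in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = pr
        total += pr
        if total >= p:          # the nucleus is complete
            break
    return {tok: pr / total for tok, pr in kept.items()}

def sample_token(probs, p=0.95, rng=random):
    """One decoding step: sample from the truncated, renormalised nucleus."""
    nucleus = nucleus_filter(probs, p)
    toks, weights = zip(*nucleus.items())
    return rng.choices(toks, weights=weights, k=1)[0]
```

With p close to 1 the nucleus covers almost the whole distribution (approaching pure sampling); with a small p it collapses towards greedy decoding, which is why the method sits between the two extremes discussed above.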
{
"text": "Syntax-group fine-tuning For our analysis in this paper, we want to generate diverse BT in a way which focuses on syntactic diversity over lexical diversity, so that we can separate out its effect on final NMT performance. We therefore take a fine-tuning approach for our final generation method. To do this, we generate the dependency parse of each sentence in the English side of the parallel data for each language pair using the Stanford neural network dependency parser (Chen and Manning, 2014) . We then label each pair of parallel sentences in the training data according to the first split in the corresponding syntactic parse tree. We then create three fine-tuning training datasets out of the three biggest syntactic groups. 2 Finally, we take NMT models trained on parallel data alone and restart training on each syntactic-group dataset, resulting in three NMT systems which are fine-tuned to produce a particular syntactic structure. We are only able to create models this way which translate into English, as good syntactic parsers are not available for the other languages in our study.",
"cite_spans": [
{
"start": 475,
"end": 499,
"text": "(Chen and Manning, 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating diverse back translation",
"sec_num": "2.1"
},
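The 'first split' labelling step above can be illustrated as follows. This is a hedged sketch: the bracketed-parse input is standard Penn-Treebank-style parser output, but the helper name and the exact string format of the group label are our own assumptions:

```python
import re

def first_split(parse):
    """Label a bracketed constituency parse by its top-level production,
    e.g. '(S (PP ...) (NP ...) (VP ...) (. .))' -> 'S -> PP NP VP .'."""
    toks = re.findall(r"\(|\)|[^\s()]+", parse)
    depth, root, children, after_open = 0, None, [], False
    for t in toks:
        if t == "(":
            depth += 1
            after_open = True
        elif t == ")":
            depth -= 1
            after_open = False
        else:
            if after_open:                 # first symbol after '(' is the label
                if depth == 1:
                    root = t               # root category
                elif depth == 2:
                    children.append(t)     # immediate children of the root
            after_open = False
    return f"{root} -> {' '.join(children)}"
```

Sentence pairs sharing the same label (e.g. S -> PP NP VP .) then form one fine-tuning group.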
{
"text": "To verify this method works as expected, we translated the test set for each language pair with the model trained on parallel data only. We then 2 For English-Turkish, we combine the third and fourth largest syntactic groups to create the third fine-tuning dataset, as the third-largest syntactic group alone was not large enough for successful fine-tuning. The count of the top-ten syntactic groups produced by the parallel-only Turkish\u2192English NMT model compared to the number of those productions produced by a Turkish\u2192English NMT model fine-tuned on the second-most common syntactic group (S -> PP NP VP .). The fine-tuned model produces more examples of the required syntactic group. Input data is the combined WMT test sets. translated the same test set with each fine-tuned model and checked it was producing more of the required syntactic group. We did indeed find that fine-tuning resulted in more candidate sentences from the required group. Figure 1 gives an example of the different pattern of productions between the parallel-only model and a model fine-tuned on a particular syntactic group (S -> PP NP VP .)",
"cite_spans": [],
"ref_spans": [
{
"start": 952,
"end": 960,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Generating diverse back translation",
"sec_num": "2.1"
},
{
"text": "S \u2192 N P V P . S \u2192 P P , N P V P . S \u2192 \" S , \" N P V P . S \u2192 S , C C S . S \u2192 A D V P , N P V P . S \u2192 S B A R , N P V P . S \u2192 N P A D V P V P . S \u2192 N P V P S \u2192 S , N P V P . S \u2192 N P V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating diverse back translation",
"sec_num": "2.1"
},
{
"text": "We use three primary metrics to measure lexical and syntactic diversity: i-BLEU, i-chrF, and tree kernel difference. As mentioned in Section 2.1, we generate three output sentences for each input to our BT systems and measure inter-sentence diversity as a proxy for the diversity produced by the system. Due to compute time, we calculate all inter-sentence metrics over a sample of 30,000 sentence groups rather than the whole BT dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity metrics",
"sec_num": "2.2"
},
{
"text": "i-BLEU Following previous work, we calculate the BLEU score between all sentence pairs generated from the same input (Papineni et al., 2002) , take the mean and then subtract it from one to give inter-sentence or i-BLEU (Zhu et al., 2018) . We believe that lexical diversity as we define it is the main driver of this metric, since BLEU scores are calculated based on n-gram overlap and so the biggest changes to the score will result from changes to the words used (though changes in ordering of words and their morphology will also have an effect). The higher the i-BLEU score, the higher the diversity of output. i-chrF Building from i-BLEU, we introduce i-chrF, which is generated in the same way as i-BLEU but using chrF (Popovi\u0107, 2015) . Since chrF is also based on n-gram overlap, we believe it will also mostly measure lexical diversity. However, i-chrF is based on character rather than word overlap, and so should be less affected by morphological changes to the form of words than i-BLEU. We calculate both chrF and BLEU scores using the sacreBLEU toolkit (Post, 2018) .",
"cite_spans": [
{
"start": 117,
"end": 140,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF34"
},
{
"start": 220,
"end": 238,
"text": "(Zhu et al., 2018)",
"ref_id": "BIBREF54"
},
{
"start": 726,
"end": 741,
"text": "(Popovi\u0107, 2015)",
"ref_id": "BIBREF35"
},
{
"start": 1067,
"end": 1079,
"text": "(Post, 2018)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity metrics",
"sec_num": "2.2"
},
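The pairing-and-averaging logic behind i-BLEU can be sketched as below. To stay self-contained this uses a simplified smoothed sentence-level BLEU rather than the sacreBLEU toolkit the paper uses, so absolute scores will differ; only the inter-sentence averaging scheme is the point:

```python
from collections import Counter
from itertools import combinations
import math

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_bleu(hyp, ref, max_n=4):
    """Smoothed sentence-level BLEU (add-one smoothing, brevity penalty)."""
    h, r = hyp.split(), ref.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hn, rn = ngrams(h, n), ngrams(r, n)
        match = sum((hn & rn).values())
        total = max(sum(hn.values()), 1)
        log_prec += math.log((match + 1) / (total + 1)) / max_n
    bp = math.exp(min(0.0, 1 - len(r) / max(len(h), 1)))
    return bp * math.exp(log_prec)

def i_bleu(candidates):
    """1 minus the mean pairwise BLEU over candidates from the same input."""
    pairs = list(combinations(candidates, 2))
    # score both directions, since BLEU is asymmetric
    scores = [simple_bleu(a, b) for a, b in pairs] + \
             [simple_bleu(b, a) for a, b in pairs]
    return 1 - sum(scores) / len(scores)
```

Identical candidates give i-BLEU of 0 (no diversity); the more the three translations differ, the closer the score gets to 1. An i-chrF variant simply swaps the sentence-level metric for chrF.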
{
"text": "Tree kernel difference We propose a novel metric which focuses on syntactic diversity: mean tree kernel difference. To calculate it, we first generate the dependency parse of each candidate sentence using the Stanford neural network dependency parser (Chen and Manning, 2014) . We replace all terminals with a dummy token to minimise the effect of lexical differences, then we calculate the tree kernel for each pair of parses using code from Conklin et al. (2021) , which is in turn based on Moschitti (2006) . Finally, we calculate the mean across all pairs to give the mean tree kernel difference for each set of generated sentences.",
"cite_spans": [
{
"start": 251,
"end": 275,
"text": "(Chen and Manning, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 443,
"end": 464,
"text": "Conklin et al. (2021)",
"ref_id": "BIBREF9"
},
{
"start": 493,
"end": 509,
"text": "Moschitti (2006)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity metrics",
"sec_num": "2.2"
},
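The tree kernel difference can be sketched with a Collins-and-Duffy-style subset-tree kernel. This is a minimal stand-in for the Moschitti (2006)-based code the paper actually uses; normalising the kernel before subtracting from one is our assumption, as is the nested-tuple tree encoding:

```python
from itertools import product

# A tree is (label, [children]); terminals are replaced by a dummy token
# in the paper, so leaves here are simply nodes with no children.
def nodes(t):
    yield t
    for c in t[1]:
        yield from nodes(c)

def production(n):
    return (n[0], tuple(c[0] for c in n[1]))

def delta(n1, n2):
    """Number of common subtrees rooted at this node pair (Collins & Duffy)."""
    if production(n1) != production(n2):
        return 0
    if not n1[1]:                       # matching leaves
        return 1
    out = 1
    for c1, c2 in zip(n1[1], n2[1]):    # same production => same arity
        out *= 1 + delta(c1, c2)
    return out

def kernel(t1, t2):
    return sum(delta(a, b) for a, b in product(nodes(t1), nodes(t2)))

def tree_kernel_difference(t1, t2):
    """1 minus the normalised kernel: 0 for structurally identical parses."""
    sim = kernel(t1, t2) / (kernel(t1, t1) * kernel(t2, t2)) ** 0.5
    return 1 - sim
```

Averaging `tree_kernel_difference` over all candidate pairs in a triple gives the mean tree kernel difference; because terminals are masked, lexical changes leave the score untouched while structural changes raise it.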
{
"text": "We are only able to calculate the tree kernel metric for the English datasets due to the lack of reliable parsers in Turkish and Icelandic, though this method could extend to any language with a reasonable parser available. The higher the score, the higher the diversity of the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity metrics",
"sec_num": "2.2"
},
{
"text": "We calculate mean word length, mean sentence length, and vocabulary size over the entire generated dataset as summary statistics. We use the definition of 'word' as understood by the bash wc command to calculate all metrics, since we are only interested in a rough measure to check for degenerate results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary statistics",
"sec_num": null
},
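These summary statistics amount to a few lines of counting. A sketch using the same whitespace notion of 'word' as wc (the function name and return keys are ours):

```python
def summary_stats(lines):
    """Mean word length, mean sentence length (in words), and vocabulary
    size, with 'word' defined by whitespace splitting, as in `wc`."""
    vocab, n_words, n_chars, n_sents = set(), 0, 0, 0
    for line in lines:
        words = line.split()
        n_sents += 1
        n_words += len(words)
        n_chars += sum(len(w) for w in words)
        vocab.update(words)
    return {
        "mean_word_len": n_chars / max(n_words, 1),
        "mean_sent_len": n_words / max(n_sents, 1),
        "vocab_size": len(vocab),
    }
```

A rough check like this is enough to flag degenerate output, e.g. a sampling run that inflates the vocabulary with neologisms or produces abnormally long sentences.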
{
"text": "Having discussed the methods by which we generate diverse BT datasets and the metrics with which we measure the diversity in these datasets, we now outline our experimental set up for testing the effect of training data diversity on final NMT model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We carry out our experiments on two language pairs: low-resource Turkish-English and midresource Icelandic-English. These languages are sufficiently low-resource that augmenting the training data will likely be beneficial, but wellresourced enough that we can still train a reasonable back-translation model on the available parallel data alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and preprocessing",
"sec_num": "3.1"
},
{
"text": "Data provenance The Turkish-English parallel data is from the WMT 2018 news translation task (Bojar et al., 2018) . The training data is from the SETIMES dataset, a parallel dataset of news articles in Balkan languages (Tiedemann, 2012) . We use the development set from WMT 2016 and the test sets from WMT 2016-18.",
"cite_spans": [
{
"start": 93,
"end": 113,
"text": "(Bojar et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 219,
"end": 236,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and preprocessing",
"sec_num": "3.1"
},
{
"text": "The Icelandic-English parallel data is from the WMT 2021 news translation task (Akhbardeh et al., 2021) . There are four sources of training data: ParIce (Barkarson and Steingr\u00edmsson, 2019) , filtered as described in J\u00f3nsson et al. 2020; Paracrawl (Ba\u00f1\u00f3n et al., 2020) ; WikiMatrix (Schwenk et al., 2021) ; and WikiTitles 3 . We use the development and test sets provided for WMT 2021.",
"cite_spans": [
{
"start": 79,
"end": 103,
"text": "(Akhbardeh et al., 2021)",
"ref_id": null
},
{
"start": 154,
"end": 189,
"text": "(Barkarson and Steingr\u00edmsson, 2019)",
"ref_id": "BIBREF2"
},
{
"start": 248,
"end": 268,
"text": "(Ba\u00f1\u00f3n et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 282,
"end": 304,
"text": "(Schwenk et al., 2021)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and preprocessing",
"sec_num": "3.1"
},
{
"text": "The English monolingual data is made up of news crawl data from 2016 to 2020, version 16 of news-commentary crawl, 4 and crawled news discussions from 2012 to 2019. 5 The Turkish monolingual data is news crawl data from 2016 to 2020. 6 The Icelandic monolingual data is made up of news crawl data from 2020, and part one of the Icelandic Gigaword dataset (Steingr\u00edmsson et al., 2018) .",
"cite_spans": [
{
"start": 234,
"end": 235,
"text": "6",
"ref_id": null
},
{
"start": 355,
"end": 383,
"text": "(Steingr\u00edmsson et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and preprocessing",
"sec_num": "3.1"
},
{
"text": "Data cleaning Our cleaning scripts are adapted from those provided by the Bergamot project. 7 The full data preparation procedure is provided in the repo accompanying this paper. After cleaning, the Turkish-English parallel dataset contains 202 thousand lines and the Icelandic-English parallel dataset contains 3.97 million lines. The English, Icelandic, and Turkish cleaned monolingual datasets contain 487 million, 39.9 million, and 26.1 million lines respectively. We select 9 million lines of each monolingual dataset for BT at random since all the monolingual datasets are the same domain as the test sets. Text pre-processing We learn a joint BPE model with SentencePiece using the concatenated training data for each language pair (Kudo and Richardson, 2018) . We set vocabulary size to 16,000 and character coverage to 1.0. All other settings are default. We apply this model to the training, development, and test data. We remove the BPE segmentation before calculating any metrics.",
"cite_spans": [
{
"start": 92,
"end": 93,
"text": "7",
"ref_id": null
},
{
"start": 739,
"end": 766,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and preprocessing",
"sec_num": "3.1"
},
{
"text": "Model architecture and infrastructure All NMT models in this paper are transformer models (Vaswani et al., 2017) . We give full details about hyper-parameters and infrastructure in Appendix A.2.",
"cite_spans": [
{
"start": 90,
"end": 112,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model training",
"sec_num": "3.2"
},
{
"text": "Parallel-only models for back translation For each language pair and in both directions, we train an NMT model on the cleaned parallel data alone using the relevant hyper-parameter settings in Table 5 . We measure the performance of these models by calculating the BLEU score (Papineni et al., 2002) using the sacreBLEU toolkit (Post, 2018) 8 and by evaluating the translations with COMET using the wmt20-comet-da model (Rei et al., 2020) .",
"cite_spans": [
{
"start": 276,
"end": 299,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF34"
},
{
"start": 328,
"end": 342,
"text": "(Post, 2018) 8",
"ref_id": null
},
{
"start": 420,
"end": 438,
"text": "(Rei et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 193,
"end": 200,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model training",
"sec_num": "3.2"
},
{
"text": "Generating back translation For each language pair and in each direction, we use the trained parallel-only models to generate back translation datasets as described in Section 2.1. We translate the same three million sentences of monolingual data each time for consistency, translating an additional six million lines of monolingual data for the base-big dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model training",
"sec_num": "3.2"
},
{
"text": "Training final models We train final models for each language direction on the concatenation of the parallel data and each back-translation dataset (back-translation on the source side, original monolingual data as target). We measure the final performance of these models using BLEU and COMET as before. Figures 2 and 3 show the mean BLEU and COMET scores achieved by the final models trained on the concatenation of the parallel data and the different BT datasets. In most cases, adding any BT data to the training data results in some improvement over the parallel-only baseline for both scores. However, augmenting the training data with BT produced with nucleus sampling nearly always results in the strongest performance, with mean gains of 2.88 BLEU or 0.078 COMET. This compares to mean gains of 2.24 BLEU or 0.026 COMET when using the baseline BT dataset of three million lines translated with beam search. Pure sampling tends to perform similarly but not quite as well as nucleus sampling. Based on this result, we suggest that future work generate BT with nucleus sampling rather than pure sampling.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 320,
"text": "Figures 2 and 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model training",
"sec_num": "3.2"
},
{
"text": "We give the diversity metrics for each language pair and each generated dataset in Tables 1 to 4. 9 Sentence and word lengths are comparable across the same language for all generation methods, suggesting that each method is generating tokens from roughly the right language distribution. However, the vocabulary size is much larger for nucleus compared to base or beam, and sampling is around twice that of nucleus. Examining the data, we find many neologisms (that is, 'words' which do not appear in the training data) for nucleus and more still for sampling. We note that the syntax-groups dataset has a much smaller vocabulary again; this is what we would hope if the generation method is producing syntactic rather than lexical diversity as required. We give representative examples of generated triples in Appendix A.1, along with some explanation of how the phenomena they demonstrate fit into the general trend of the dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 99,
"text": "Tables 1 to 4. 9",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Diversity metrics",
"sec_num": "4.2"
},
{
"text": "Effect on performance With respect to the intersentence diversity metrics (i-BLEU, i-chrF, and tree kernel scores), we see that the sampling dataset has the highest diversity scores, followed by nucleus, then syntax, then beam. Taken together with the performance scores and the summary statistics, this suggests that NMT data benefits from a high level of diversity, but not so high that the two halves of the parallel data no longer have the same meaning (as shown by the very high vocabulary size for sampling).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity metrics",
"sec_num": "4.2"
},
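Inter-sentence diversity metrics such as i-BLEU average a pairwise similarity over the alternative back-translations of each target sentence, with lower similarity meaning higher diversity. A simplified unigram-F1 proxy for the idea (the actual metrics use BLEU and chrF; function names are ours):

```python
from itertools import combinations

def unigram_f1(a, b):
    """F1 over unigram types of two whitespace-tokenised sentences."""
    sa, sb = set(a.split()), set(b.split())
    overlap = len(sa & sb)
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(sa), overlap / len(sb)
    return 2 * prec * rec / (prec + rec)

def inter_sentence_diversity(hypotheses):
    """Mean pairwise similarity among alternative translations of the
    same source (assumes at least two hypotheses); lower means more diverse."""
    pairs = list(combinations(hypotheses, 2))
    return sum(unigram_f1(a, b) for a, b in pairs) / len(pairs)
```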
{
"text": "Metric correlation There is a high correlation between i-BLEU, i-chrF, and tree kernel score for the beam, sampling, and nucleus datasets. This is not entirely unexpected: it is likely to be difficult if not impossible to disentangle lexical and syntactic diversity, since changing sentence structure would also affect the word choice and vice versa. This correlation is much weaker for the syntaxgroups dataset: whilst the tree-kernel scores are comparable to the sampling and nucleus datasets, there is a much smaller increase in the other (lexical) diversity scores. This suggests that this generation method encourages relatively more syntactic variation than lexical compared to the other diverse generation method, as was its original aim (see paragraph on syntax-group fine-tuning in section 2.1). The fact that the final model trained on this BT dataset has lower performance compared to other forms of diversity suggests that lexical diversity is more important than syntactic diversity when undertaking data augmentation. We leave it to future work to investigate this hypothesis further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity metrics",
"sec_num": "4.2"
},
{
"text": "The right-most cross in each quadrant of Figures 2 and 3 gives the performance of base-big, the dataset where we simply add six million more lines of new data rather than carrying out data augmentation. Interestingly, pure and nucleus sampling both often outperform base-big. This may be because the model over-fits to too much back-translated data, whereas having multiple sufficiently-diverse pseudo-source sentences for each target sentence has a regularising effect on the model. To further support this hypothesis, Figure 4 gives training perplexity for the first 50,000 steps of training for the final Icelandic\u2192English models, which are representative of the results for the other language pairs. We see that the base-big dataset has the lowest training perplexity at each step, suggesting this data is easier to model. Conversely, the model has highest training perplexity on the sampling and nucleus datasets, suggesting generating the data this way has a regularising effect. Step 10 0 6 \u00d7 10 \u22121 2 \u00d7 10 0 3 \u00d7 10 0",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 57,
"text": "Figures 2 and 3",
"ref_id": null
},
{
"start": 521,
"end": 529,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data augmentation versus more monolingual data",
"sec_num": "4.3"
},
{
"text": "Training perplexity for final English to Turkish models sampling nucleus beam syntax-groups base base-big Figure 4 : Mean training perplexity for the first 50 thousand steps of training for final English\u2192Turkish models. The model has highest training perplexity on the sampling then nucleus datasets. The lowest training perplexity is on the beam and base-big datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training perplexity",
"sec_num": null
},
{
"text": "Several studies have found that back-translated text is easier to translate than forward-translated text, and so inflates intrinsic metrics like BLEU (Edunov et al., 2020; Roberts et al., 2020) . To use a concrete example, the WMT test sets for English to Turkish are made up of half native English translated into Turkish, and half native Turkish translated into English. We want models that perform well when translating from native text (in this example: the native English side), as this is the usual direction of translation. However, half the test set is made up of translations on the source side. The translationese effect means that the model will usually get higher scores on this half of the test set, potentially inflating the score. Consequently, the intrinsic metrics could suggest choosing a model that does not actually perform well on the desired task (translating from native text).",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Edunov et al., 2020;",
"ref_id": "BIBREF11"
},
{
"start": 172,
"end": 193,
"text": "Roberts et al., 2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translationese effect",
"sec_num": "4.4"
},
{
"text": "We investigate this effect in our own work by examining the mean BLEU scores for each model on each half of the test sets, giving the results in Figure 5 . Each bar indicates the mean percentage change in BLEU scores over the parallel-only baseline model for the models trained on the different BT datasets, so a larger bar means a better performing model. The left-hand bars in each quadrant show the performance of each model on the back-translated half of the test set (to native) and the right-hand bars give the performance of each model on the forward-translated half of the test set (from native).",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 153,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Translationese effect",
"sec_num": "4.4"
},
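The bar heights in Figure 5 are mean percentage changes in BLEU over the parallel-only baseline, computed separately on each test-set half. Illustratively (the scores below are made up, not the paper's results):

```python
def pct_change(score, baseline):
    """Percentage change of a model's score over the parallel-only baseline."""
    return 100.0 * (score - baseline) / baseline

# Hypothetical BLEU scores on the two test halves (not the paper's numbers):
# 'to_native' = back-translated half, 'from_native' = forward-translated half.
baseline = {"to_native": 20.0, "from_native": 18.0}
nucleus = {"to_native": 23.0, "from_native": 19.8}
gains = {half: pct_change(nucleus[half], baseline[half]) for half in baseline}
```

A larger gain on the to-native half than the from-native half is exactly the translationese effect discussed below.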
{
"text": "We see a significant translationese effect for all models, as the percentage change in scores over the baseline are much higher when the models translate already translated text (the left-hand side bars are higher than the right-hand ones). However, it appears that the nucleus dataset is less affected by the translationese effect than the other datasets, since it shows less of a decline in performance when translating native text. This may be due to a similar regularising effect as discussed previously, as it is more difficult for the model to overfit to BT data when it is generated with nucleus sampling. A direction for future research is how to obtain the benefits of using monolingual data (as BT does) without exacerbating the translationese effect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translationese effect",
"sec_num": "4.4"
},
{
"text": "Improving back translation The original paper introducing BT by Sennrich et al. (2016) found that using a higher-quality NMT system for BT led to higher BLEU scores in the final trained system. This finding was corroborated by Burlot and Yvon (2018) , and following work has investigated further ways to improve NMT. These include iterative BT (Hoang et al., 2018) , targeting difficult words (Fadaee and Monz, 2018) , and tagged BT (Caswell et al., 2019) . Section 3.2.1 of Haddow et al. (2021) presents a comprehensive survey of BT and its variants as applied to low-resource NMT.",
"cite_spans": [
{
"start": 64,
"end": 86,
"text": "Sennrich et al. (2016)",
"ref_id": "BIBREF40"
},
{
"start": 227,
"end": 249,
"text": "Burlot and Yvon (2018)",
"ref_id": "BIBREF6"
},
{
"start": 341,
"end": 364,
"text": "BT (Hoang et al., 2018)",
"ref_id": null
},
{
"start": 393,
"end": 416,
"text": "(Fadaee and Monz, 2018)",
"ref_id": "BIBREF12"
},
{
"start": 433,
"end": 455,
"text": "(Caswell et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 475,
"end": 495,
"text": "Haddow et al. (2021)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Diversity in machine translation Most of the work on the lack of diversity in machine-translated text are in the context of automatic evaluation (Edunov et al., 2020; Roberts et al., 2020) . As for diversity in BT specifically, Edunov et al. (2018) argue that MAP prediction, as is typically used to generate BT through beam search, leads to overly-regular synthetic source sentences which do not cover the true data distribution. They propose instead generating BT with sampling or noised beam outputs, and find model performance increases for all but the lowest resource scenarios. Alternatively, Soto et al. (2020) generate diverse BT by training multiple machine-translation systems with varying architectures.",
"cite_spans": [
{
"start": 145,
"end": 166,
"text": "(Edunov et al., 2020;",
"ref_id": "BIBREF11"
},
{
"start": 167,
"end": 188,
"text": "Roberts et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 228,
"end": 248,
"text": "Edunov et al. (2018)",
"ref_id": "BIBREF10"
},
{
"start": 599,
"end": 617,
"text": "Soto et al. (2020)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Generating diversity Increasing diversity in BT is part of the broader field of diverse generation, by which we mean methods to vary the surface form of a production whilst keeping the meaning as similar as possible. Aside from generating diverse translations (Gimpel et al., 2013; He et al., 2018; Shen et al., 2019; Nguyen et al., 2020; Li et al., 2021) , it is also used in question answering systems (Sultan et al., 2020), visually-grounded generation (Vi-jayakumar et al., 2018) , conversation models (Li et al., 2016) , and particularly paraphrasing (Mallinson et al., 2017; Hu et al., 2019; Thompson and Post, 2020; Goyal and Durrett, 2020; Krishna et al., 2020) . Some recent work such as Iyyer et al. (2018) , Huang and Chang (2021), and Hosking and Lapata (2021) explicitly model the meaning and the form of the input separately. In this way, they aim to vary the syntax of the output whilst preserving the semantics so as to generate more diverse paraphrases. Unfortunately, these methods are difficult to apply to a low-resource scenario as they require external resources (e.g. accurate syntactic parsers, large-scale paraphrase data) which are not available for most of the world's languages.",
"cite_spans": [
{
"start": 260,
"end": 281,
"text": "(Gimpel et al., 2013;",
"ref_id": "BIBREF13"
},
{
"start": 282,
"end": 298,
"text": "He et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 299,
"end": 317,
"text": "Shen et al., 2019;",
"ref_id": "BIBREF42"
},
{
"start": 318,
"end": 338,
"text": "Nguyen et al., 2020;",
"ref_id": "BIBREF31"
},
{
"start": 339,
"end": 355,
"text": "Li et al., 2021)",
"ref_id": "BIBREF27"
},
{
"start": 456,
"end": 483,
"text": "(Vi-jayakumar et al., 2018)",
"ref_id": null
},
{
"start": 506,
"end": 523,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF28"
},
{
"start": 556,
"end": 580,
"text": "(Mallinson et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 581,
"end": 597,
"text": "Hu et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 598,
"end": 622,
"text": "Thompson and Post, 2020;",
"ref_id": "BIBREF48"
},
{
"start": 623,
"end": 647,
"text": "Goyal and Durrett, 2020;",
"ref_id": "BIBREF14"
},
{
"start": 648,
"end": 669,
"text": "Krishna et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 697,
"end": 716,
"text": "Iyyer et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "In this paper, we introduced a two-part framework for understanding diversity in NMT data, splitting it into lexical diversity and syntactic diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our empirical analysis suggests that whilst high amounts of both types of diversity are important in training data, lexical diversity may be more beneficial than syntactic. In addition, achieving high diversity in BT should not be at the expense of ad-equacy. We find that generating BT with nucleus sampling results in the highest final NMT model performance for our systems. Future work could investigate further the affect of high lexical diversity on BT independent of syntactic diversity. School of Informatics and School of Philosophy, Psychology & Language Sciences. It was also supported by funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825299 (GoURMET) and funding from the UK Engineering and Physical Sciences Research Council (EPSRC) fellowship grant EP/S001271/1 (MTStretch). The experiments in this paper were performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Finally, the authors would like to thank our anonymous reviewers for their time and helpful comments, and we give special thanks to Henry Conklin and Bailin Wang for their help with tree kernels and many useful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "github.com/laurieburchell/ exploring-diversity-bt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "data.statmt.org/wikititles/v3 4 data.statmt.org/news-commentary/v16 5 data.statmt.org/news-discussions/en 6 data.statmt.org/news-crawl 7 github.com/browsermt/students/tree/ master/train-student",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We omit base for reasons of space and because its different length to the other datasets makes comparison difficult (3 million lines compared to 9 million for the others).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "A.1 Representative examples from back-translated datasets (translated from Icelandic)Original \u00dej\u00f3\u00f0verjar hafa teki\u00f0 forrae\u00f0i\u00f0 og stefnt er a\u00f0 stofnun st\u00f3rr\u00edkis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "\u2022 The Germans have taken custody and are aimed at the establishment of a large state.\u2022 The Germans have taken custody and are aimed at the creation of a large state.\u2022 The Germans have taken custody and are aimed at establishing a large state.Comment: Only one or two words differ between sentences (underlined).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam",
"sec_num": null
},
{
"text": "\u2022 The Germz governmentregluru has committed suicide, intending to organise a major state.\u2022 The Germano had ensured that British commanders in France would be aides of theaerd rapidly.\u2022 And the need to defend and establish theseUCtions are all organized intomissions from Iraq\u00e9ttihe.Comment: Sentences show large variation in structure and vocabulary, but they contain many non-dictionary words (underlined) and adequacy is low.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": null
},
{
"text": "\u2022 Germany has taken custody and aimed to establish a large country.\u2022 The German government initiated a group operation, to establish capital city.\u2022 The Germany has managed to make an example of their full widowed demands.Comment: There is a moderate amount of variation between sentences in terms of syntax and vocabulary, but no non-dictionary words. Some phrases lack adequacy (underlined).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nucleus",
"sec_num": null
},
{
"text": "\u2022 The Germans have taken custody and are aimed at the establishment of a large state.\u2022 The Icelandic Institute of Natural History\u2022 As a result, the Germans have taken control of the country and are aimed at establishing a large state.Comment: The second and third sentences contain hallucinations, presumably in order to generate according to the syntactic templates (underlined).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax-groups",
"sec_num": null
},
{
"text": "All NMT models in this paper are transformer models (Vaswani et al., 2017) . We conducted a hyperparameter search for each language pair, training English\u2194Turkish and English\u2194Icelandic NMT models and using the BLEU score as the optimisation metric. We give the settings which differ to transformer-base in Table 5 . We use the same hyper-parameter settings for all models trained for the same language pair.We use the Fairseq toolkit to train all our NMT models . We train on four NVIDIA A100-SXM-80GB GPUs and use CUDA 11.1 plus a Python 3.8 Conda environment provided in the Github repo. We generate on one GPU, since to our knowledge the Fairseq toolkit does not support multi-GPU decoding. We use Weights and Biases for experiment tracking (Biewald, 2020 ",
"cite_spans": [
{
"start": 52,
"end": 74,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF51"
},
{
"start": 744,
"end": 758,
"text": "(Biewald, 2020",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 306,
"end": 313,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.2 Model architecture and infrastructure",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "and Marcos Zampieri. 2021. Findings of the 2021 conference on machine translation (WMT21). In Proceedings of the Sixth Conference on Machine Translation",
"authors": [
{
"first": "Farhad",
"middle": [],
"last": "Akhbardeh",
"suffix": ""
},
{
"first": "Arkady",
"middle": [],
"last": "Arkhangorodsky",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Biesialska",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Jussa",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Espa\u00f1a-Bonet",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Leonie",
"middle": [],
"last": "Harter",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Homan",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Kwabena",
"middle": [],
"last": "Amponsah-Kaakyire",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farhad Akhbardeh, Arkady Arkhangorodsky, Mag- dalena Biesialska, Ond\u0159ej Bojar, Rajen Chatter- jee, Vishrav Chaudhary, Marta R. Costa-jussa, Cristina Espa\u00f1a-Bonet, Angela Fan, Christian Fe- dermann, Markus Freitag, Yvette Graham, Ro- man Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Phil- ipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Au- guste Tapo, Marco Turchi, Valentin Vydrin, and Mar- cos Zampieri. 2021. Findings of the 2021 conference on machine translation (WMT21). In Proceedings of the Sixth Conference on Machine Translation, pages 1-88, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "ParaCrawl: Web-scale acquisition of parallel corpora",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Ba\u00f1\u00f3n",
"suffix": ""
},
{
"first": "Pinzhen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Miquel",
"middle": [],
"last": "Espl\u00e0-Gomis",
"suffix": ""
},
{
"first": "Mikel",
"middle": [
"L"
],
"last": "Forcada",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Kamran",
"suffix": ""
},
{
"first": "Faheem",
"middle": [],
"last": "Kirefu",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Sergio",
"middle": [
"Ortiz"
],
"last": "Rojas",
"suffix": ""
},
{
"first": "Leopoldo",
"middle": [
"Pla"
],
"last": "Sempere",
"suffix": ""
},
{
"first": "Gema",
"middle": [],
"last": "Ram\u00edrez-S\u00e1nchez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4555--4567",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.417"
]
},
"num": null,
"urls": [],
"raw_text": "Marta Ba\u00f1\u00f3n, Pinzhen Chen, Barry Haddow, Kenneth Heafield, Hieu Hoang, Miquel Espl\u00e0-Gomis, Mikel L. Forcada, Amir Kamran, Faheem Kirefu, Philipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sempere, Gema Ram\u00edrez-S\u00e1nchez, Elsa Sarr\u00edas, Marek Strelec, Brian Thompson, William Waites, Dion Wiggins, and Jaume Zaragoza. 2020. ParaCrawl: Web-scale ac- quisition of parallel corpora. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 4555-4567, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Compiling and filtering ParIce: An English-Icelandic parallel corpus",
"authors": [
{
"first": "Starka\u00f0ur",
"middle": [],
"last": "Barkarson",
"suffix": ""
},
{
"first": "Stein\u00fe\u00f3r",
"middle": [],
"last": "Steingr\u00edmsson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "140--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Starka\u00f0ur Barkarson and Stein\u00fe\u00f3r Steingr\u00edmsson. 2019. Compiling and filtering ParIce: An English-Icelandic parallel corpus. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 140- 145, Turku, Finland. Link\u00f6ping University Electronic Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20)",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Biesialska",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Joanis",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Morishita",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "1--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Barrault, Magdalena Biesialska, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Mat- thias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljube\u0161i\u0107, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine trans- lation (WMT20). In Proceedings of the Fifth Con- ference on Machine Translation, pages 1-55, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Experiment tracking with weights and biases. Software available from wandb",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Biewald",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukas Biewald. 2020. Experiment tracking with weights and biases. Software available from wandb.com.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Findings of the 2018 conference on machine translation (WMT18)",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "272--303",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6401"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on ma- chine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272-303, Belgium, Brussels. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using monolingual data in neural machine translation: a systematic study",
"authors": [
{
"first": "Franck",
"middle": [],
"last": "Burlot",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "144--155",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6315"
]
},
"num": null,
"urls": [],
"raw_text": "Franck Burlot and Fran\u00e7ois Yvon. 2018. Using monolin- gual data in neural machine translation: a systematic study. In Proceedings of the Third Conference on Ma- chine Translation: Research Papers, pages 144-155, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Tagged back-translation",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Caswell",
"suffix": ""
},
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "1",
"issue": "",
"pages": "53--63",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5206"
]
},
"num": null,
"urls": [],
"raw_text": "Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 53-63, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1082"
]
},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Meta-learning to compositionally generalize",
"authors": [
{
"first": "Henry",
"middle": [],
"last": "Conklin",
"suffix": ""
},
{
"first": "Bailin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kenny",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "3322--3335",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.258"
]
},
"num": null,
"urls": [],
"raw_text": "Henry Conklin, Bailin Wang, Kenny Smith, and Ivan Titov. 2021. Meta-learning to compositionally gen- eralize. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Nat- ural Language Processing (Volume 1: Long Papers), pages 3322-3335, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1045"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "On the evaluation of machine translation systems trained with back-translation",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2836--2846",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.253"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Marc'Aurelio Ranzato, and Michael Auli. 2020. On the evaluation of machine translation systems trained with back-translation. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2836- 2846, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Backtranslation sampling by targeting difficult words in neural machine translation",
"authors": [
{
"first": "Marzieh",
"middle": [],
"last": "Fadaee",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "436--446",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1040"
]
},
"num": null,
"urls": [],
"raw_text": "Marzieh Fadaee and Christof Monz. 2018. Back- translation sampling by targeting difficult words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 436-446, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A systematic exploration of diversity in machine translation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Shakhnarovich",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1100--1111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Dhruv Batra, Chris Dyer, and Gregory Shakhnarovich. 2013. A systematic exploration of diversity in machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Nat- ural Language Processing, pages 1100-1111, Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural syntactic preordering for controlled paraphrase generation",
"authors": [
{
"first": "Tanya",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "238--252",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.22"
]
},
"num": null,
"urls": [],
"raw_text": "Tanya Goyal and Greg Durrett. 2020. Neural syntactic preordering for controlled paraphrase generation. In Proceedings of the 58th Annual Meeting of the Associ- ation for Computational Linguistics, pages 238-252, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Assessing human-parity in machine translation on the segment level",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Eskevich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4199--4207",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.375"
]
},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Christian Federmann, Maria Eskevich, and Barry Haddow. 2020. Assessing human-parity in machine translation on the segment level. In Find- ings of the Association for Computational Linguistics: EMNLP 2020, pages 4199-4207, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Survey of low-resource machine translation",
"authors": [
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Bawden",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Valerio"
],
"last": "Miceli Barone",
"suffix": ""
},
{
"first": "Jindrich",
"middle": [],
"last": "Helcl",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barry Haddow, Rachel Bawden, Antonio Valerio Miceli Barone, Jindrich Helcl, and Alexandra Birch. 2021. Survey of low-resource machine translation. CoRR, abs/2109.00486.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Sequence to sequence mixture model for diverse machine translation",
"authors": [
{
"first": "Xuanli",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "583--592",
"other_ids": {
"DOI": [
"10.18653/v1/K18-1056"
]
},
"num": null,
"urls": [],
"raw_text": "Xuanli He, Gholamreza Haffari, and Mohammad Nor- ouzi. 2018. Sequence to sequence mixture model for diverse machine translation. In Proceedings of the 22nd Conference on Computational Natural Lan- guage Learning, pages 583-592, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Iterative backtranslation for neural machine translation",
"authors": [
{
"first": "Vu Cong Duy",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "18--24",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2703"
]
},
"num": null,
"urls": [],
"raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text de- generation. In International Conference on Learning Representations.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Factorising meaning and form for intent-preserving paraphrasing",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Hosking",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1405--1418",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.112"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Hosking and Mirella Lapata. 2021. Factorising meaning and form for intent-preserving paraphrasing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1405-1418, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Parabank: Monolingual bitext generation and sentential paraphrasing via lexically-constrained neural machine translation",
"authors": [
{
"first": "J",
"middle": [
"Edward"
],
"last": "Hu",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6521--6528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Edward Hu, Rachel Rudinger, Matt Post, and Ben- jamin Van Durme. 2019. Parabank: Monolingual bitext generation and sentential paraphrasing via lexically-constrained neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6521-6528.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Generating syntactically controlled paraphrases without using annotated parallel pairs",
"authors": [
{
"first": "Kuan-Hao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1022--1033",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.88"
]
},
"num": null,
"urls": [],
"raw_text": "Kuan-Hao Huang and Kai-Wei Chang. 2021. Gener- ating syntactically controlled paraphrases without using annotated parallel pairs. In Proceedings of the 16th Conference of the European Chapter of the Asso- ciation for Computational Linguistics: Main Volume, pages 1022-1033, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Adversarial example generation with syntactically controlled paraphrase networks",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1875--1885",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1170"
]
},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885, New Or- leans, Louisiana. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Experimenting with different machine translation models in medium-resource settings",
"authors": [
{
"first": "Haukur",
"middle": [
"P\u00e1ll"
],
"last": "J\u00f3nsson",
"suffix": ""
},
{
"first": "Haukur",
"middle": [
"Barri"
],
"last": "S\u00edmonarson",
"suffix": ""
},
{
"first": "V\u00e9steinn",
"middle": [],
"last": "Snaebjarnarson",
"suffix": ""
},
{
"first": "Stein\u00fe\u00f3r",
"middle": [],
"last": "Steingr\u00edmsson",
"suffix": ""
},
{
"first": "Hrafn",
"middle": [],
"last": "Loftsson",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Text, Speech, and Dialogue",
"volume": "",
"issue": "",
"pages": "95--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haukur P\u00e1ll J\u00f3nsson, Haukur Barri S\u00edmonarson, V\u00e9steinn Snaebjarnarson, Stein\u00fe\u00f3r Steingr\u00edmsson, and Hrafn Loftsson. 2020. Experimenting with differ- ent machine translation models in medium-resource settings. In International Conference on Text, Speech, and Dialogue, pages 95-103. Springer.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Reformulating unsupervised style transfer as paraphrase generation",
"authors": [
{
"first": "Kalpesh",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "737--762",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.55"
]
},
"num": null,
"urls": [],
"raw_text": "Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as para- phrase generation. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 737-762, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword token- izer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Mixup decoding for diverse machine translation",
"authors": [
{
"first": "Jicheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Pengzhi",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xuanfu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2021",
"volume": "",
"issue": "",
"pages": "312--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jicheng Li, Pengzhi Gao, Xuanfu Wu, Yang Feng, Zhongjun He, Hua Wu, and Haifeng Wang. 2021. Mixup decoding for diverse machine translation. In Findings of the Association for Computational Lin- guistics: EMNLP 2021, pages 312-320, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "110--119",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1014"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting ob- jective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 110-119, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Paraphrasing revisited with neural machine translation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "881--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Compu- tational Linguistics: Volume 1, Long Papers, pages 881-893, Valencia, Spain. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Making tree kernels practical for natural language learning",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "11th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti. 2006. Making tree kernels prac- tical for natural language learning. In 11th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 113-120, Trento, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Data diversification: A simple strategy for neural machine translation",
"authors": [
{
"first": "Xuan-Phi",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Kui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ai",
"middle": [
"Ti"
],
"last": "Aw",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "10018--10029",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuan-Phi Nguyen, Shafiq Joty, Kui Wu, and Ai Ti Aw. 2020. Data diversification: A simple strategy for neural machine translation. Advances in Neural In- formation Processing Systems, 33:10018-10029.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Analyzing uncertainty in neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Marc'Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "3956--3965",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In Proceedings of the 35th International Conference on Machine Learn- ing, volume 80 of Proceedings of Machine Learning Research, pages 3956-3965. PMLR.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4009"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Mi- chael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demon- strations), pages 48-53, Minneapolis, Minnesota. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "chrF: character n-gram F-score for automatic MT evaluation",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "392--395",
"other_ids": {
"DOI": [
"10.18653/v1/W15-3049"
]
},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6319"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "COMET: A neural framework for MT evaluation",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"C"
],
"last": "Farinha",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2685--2702",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.213"
]
},
"num": null,
"urls": [],
"raw_text": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 2685-2702, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Decoding and diversity in machine translation",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Davis",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Roberts, Davis Liang, Graham Neubig, and Zachary C. Lipton. 2020. Decoding and diversity in machine translation. In Advances in Neural In- formation Processing Systems, volume 33. Curran Associates, Inc.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1351--1361",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.115"
]
},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm\u00e1n. 2021. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. In Proceedings of the 16th Conference of the European Chapter of the Asso- ciation for Computational Linguistics: Main Volume, pages 1351-1361, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Mixture models for diverse machine translation: Tricks of the trade",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Marc'Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "97",
"issue": "",
"pages": "5719--5728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Ma- chine Learning Research, pages 5719-5728. PMLR.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Generating diverse translations with sentence codes",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1823--1827",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1177"
]
},
"num": null,
"urls": [],
"raw_text": "Raphael Shu, Hideki Nakayama, and Kyunghyun Cho. 2019. Generating diverse translations with sentence codes. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1823-1827, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Selecting backtranslated data from multiple sources for improved neural machine translation",
"authors": [
{
"first": "Xabier",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Shterionov",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3898--3908",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.359"
]
},
"num": null,
"urls": [],
"raw_text": "Xabier Soto, Dimitar Shterionov, Alberto Poncelas, and Andy Way. 2020. Selecting backtranslated data from multiple sources for improved neural machine trans- lation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3898-3908, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Risam\u00e1lheild: A very large Icelandic text corpus",
"authors": [
{
"first": "Stein\u00fe\u00f3r",
"middle": [],
"last": "Steingr\u00edmsson",
"suffix": ""
},
{
"first": "Sigr\u00fan",
"middle": [],
"last": "Helgad\u00f3ttir",
"suffix": ""
},
{
"first": "Eir\u00edkur",
"middle": [],
"last": "R\u00f6gnvaldsson",
"suffix": ""
},
{
"first": "Starka\u00f0ur",
"middle": [],
"last": "Barkarson",
"suffix": ""
},
{
"first": "J\u00f3n",
"middle": [],
"last": "Gu\u00f0nason",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stein\u00fe\u00f3r Steingr\u00edmsson, Sigr\u00fan Helgad\u00f3ttir, Eir\u00edkur R\u00f6gnvaldsson, Starka\u00f0ur Barkarson, and J\u00f3n Gu\u00f0nason. 2018. Risam\u00e1lheild: A very large Icelandic text corpus. In Proceedings of the El- eventh International Conference on Language Re- sources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "On the importance of diversity in question generation for QA",
"authors": [
{
"first": "Md",
"middle": [
"Arafat"
],
"last": "Sultan",
"suffix": ""
},
{
"first": "Shubham",
"middle": [],
"last": "Chandel",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [],
"last": "Fernandez Astudillo",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Castelli",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5651--5656",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.500"
]
},
"num": null,
"urls": [],
"raw_text": "Md Arafat Sultan, Shubham Chandel, Ram\u00f3n Fernan- dez Astudillo, and Vittorio Castelli. 2020. On the importance of diversity in question generation for QA. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5651-5656, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Evaluating the evaluation of diversity in natural language generation",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Tevet",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "326--346",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.25"
]
},
"num": null,
"urls": [],
"raw_text": "Guy Tevet and Jonathan Berant. 2021. Evaluating the evaluation of diversity in natural language genera- tion. In Proceedings of the 16th Conference of the European Chapter of the Association for Computa- tional Linguistics: Main Volume, pages 326-346, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Paraphrase generation as zero-shot multilingual translation: Disentangling semantic similarity from lexical and syntactic diversity",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "561--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Thompson and Matt Post. 2020. Paraphrase gen- eration as zero-shot multilingual translation: Disen- tangling semantic similarity from lexical and syn- tactic diversity. In Proceedings of the Fifth Confer- ence on Machine Translation, pages 561-570, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Parallel data, tools and interfaces in OPUS",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "2214--2218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in OPUS. In Proceedings of the Eighth In- ternational Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Lost in translation: Loss and decay of linguistic richness in machine translation",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Vanmassenhove",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Shterionov",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of Machine Translation Summit XVII: Research Track",
"volume": "",
"issue": "",
"pages": "222--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Vanmassenhove, Dimitar Shterionov, and Andy Way. 2019. Lost in translation: Loss and decay of lin- guistic richness in machine translation. In Proceed- ings of Machine Translation Summit XVII: Research Track, pages 222-232, Dublin, Ireland. European Association for Machine Translation.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Diverse beam search for improved description of complex scenes",
"authors": [
{
"first": "Ashwin",
"middle": [],
"last": "Vijayakumar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cogswell",
"suffix": ""
},
{
"first": "Ramprasaath",
"middle": [],
"last": "Selvaraju",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Crandall",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashwin Vijayakumar, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. Proceed- ings of the AAAI Conference on Artificial Intelligence, 32(1).",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1042"
]
},
"num": null,
"urls": [],
"raw_text": "John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embed- dings with millions of machine translations. In Pro- ceedings of the 56th Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Melbourne, Australia. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Texygen: A benchmarking platform for text generation models",
"authors": [
{
"first": "Yaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "The 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '18",
"volume": "",
"issue": "",
"pages": "1097--1100",
"other_ids": {
"DOI": [
"10.1145/3209978.3210080"
]
},
"num": null,
"urls": [],
"raw_text": "Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '18, page 1097-1100, New York, NY, USA. Association for Computing Machinery.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "in test set from parallel-only and fine-tuned models Type Parallel-only Finetuned on S \u2192 PP , NP VP ."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Figure 1: The count of the top-ten syntactic groups produced by the parallel-only Turkish\u2192English NMT model compared to the number of those productions produced by a Turkish\u2192English NMT model fine-tuned on the second-most common syntactic group (S -> PP NP VP .). The fine-tuned model produces more examples of the required syntactic group. Input data is the combined WMT test sets."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "BLEU|nrefs:1|case:mixed|eff:no| tok:13a|smooth:exp|version:2.0.0 p a ra ll e l b a se b e"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Figure 3: Mean COMET score on WMT test sets for English\u2194Turkish and English\u2194Icelandic models trained on different BT datasets. For English\u2194Turkish, we give the mean score on WMT 16, WMT 17, and WMT 18 test sets. For English\u2194Icelandic, we give the score on the WMT 21 test set."
},
"FIGREF7": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The mean percentage change in BLEU score for each model on the test set(s) over the parallel-only models, separated by language direction. The left-hand side (to native) has translated text on the source side and native text on the target side of the test set (back translation). The right-hand side (from native) has native text on the source side and translated text on the target side of the test set."
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"5\">: Diversity metrics for the Turkish BT datasets (original language: English) used to train the Tr\u2192En models. Inter-sentence metrics are calculated on a sample of 30k triplets. 'M' = million.</td></tr><tr><td colspan=\"2\">Dataset base-big</td><td>beam</td><td colspan=\"2\">sampling nucleus</td></tr><tr><td>Sent. len. Word len. Vocab.</td><td>15.65 6.54 1.3M</td><td>14.79 6.91 0.82M</td><td>14.92 7.33 11M</td><td>14.73 7.15 5.6M</td></tr><tr><td>i-BLEU i-chrF</td><td>--</td><td>30.89 16.09</td><td>86.41 66.06</td><td>79.67 57.83</td></tr></table>",
"num": null,
"text": ""
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Diversity metrics for the Icelandic BT datasets (original language: English) used to train the Is\u2192En models. Inter-sentence metrics are calculated on a sample of 30k triplets. 'M' = million.</td></tr></table>",
"num": null,
"text": ""
},
"TABREF6": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"6\">: Diversity metrics for the English BT datasets (original language: Turkish) used to train the En\u2192Tr models. Inter-sentence metrics are calculated on a sample of 30k triplets. 'M' = million.</td></tr><tr><td colspan=\"2\">Dataset base+</td><td>beam</td><td colspan=\"3\">sampl. nucleus syntax</td></tr><tr><td colspan=\"3\">Sent. len. Word len. Vocab. 0.66M 0.41M 20.45 22.75 5.83 5.83</td><td>21.34 6.33 12M</td><td>21.13 6.08 5.6M</td><td>18.29 5.89 0.49M</td></tr><tr><td>i-BLEU i-chrF Kernel</td><td>---</td><td>22.75 11.95 65.72</td><td>92.31 72.20 99.35</td><td>88.86 67.16 98.74</td><td>77.17 56.90 99.40</td></tr></table>",
"num": null,
"text": ""
},
"TABREF7": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Diversity metrics for the English BT data-sets (original language: Icelandic) used to train the En\u2192Is models. Inter-sentence metrics are calculated on a sample of 30k triplets. 'M' = million.</td></tr></table>",
"num": null,
"text": ""
}
}
}
}