|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:06:40.463916Z" |
|
}, |
|
"title": "Growing Together: Modeling Human Language Learning With n-Best Multi-Checkpoint Machine Translation", |
|
"authors": [ |
|
{ |

"first": "El Moatez Billah", |

"middle": [], |

"last": "Nagoudi", |

"suffix": "", |

"affiliation": { |

"laboratory": "Natural Language Processing Lab", |

"institution": "", |

"location": {} |

}, |

"email": "" |

}, |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Abdul-Mageed", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Natural Language Processing Lab", |
|
"institution": "", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hasan", |
|
"middle": [], |
|
"last": "Cavusoglu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe our submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE) (Mayhew et al., 2020). We view MT models at various training stages (i.e., checkpoints) as human learners at different levels. Hence, we employ an ensemble of multicheckpoints from the same model to generate translation sequences with various levels of fluency. From each checkpoint, for our best model, we sample n-Best sequences (n = 10) with a beam width = 100. We achieve 37.57 macro F 1 with a 6 checkpoint model ensemble on the official English to Portuguese shared task test data, outperforming a baseline Amazon translation system of 21.30 macro F 1 and ultimately demonstrating the utility of our intuitive method.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe our submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE) (Mayhew et al., 2020). We view MT models at various training stages (i.e., checkpoints) as human learners at different levels. Hence, we employ an ensemble of multicheckpoints from the same model to generate translation sequences with various levels of fluency. From each checkpoint, for our best model, we sample n-Best sequences (n = 10) with a beam width = 100. We achieve 37.57 macro F 1 with a 6 checkpoint model ensemble on the official English to Portuguese shared task test data, outperforming a baseline Amazon translation system of 21.30 macro F 1 and ultimately demonstrating the utility of our intuitive method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Machine Translation (MT) systems are usually trained to output a single translation. However, many possible translations of a given input text can be acceptable. This situation is common in online language learning applications such as Duolingo, 1 Babbel 2 , and Busuu. 3 In applications of this type, learning happens via translation-based activities while evaluation is performed by comparing learners' responses to a large set of human acceptable translations. Figure 1 shows an example of a typical situation extracted from the Duolingo application.", |
|
"cite_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 271, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 472, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The main set up of the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE 2020) (Mayhew et al., 2020) is such that one starts with a set of English sentences (prompts) and then generates highcoverage sets of plausible translations in the five target languages: Portuguese, Hungarian, Japanese, Korean, and Vietnamese. For instance, if we want to translate the English (En) sentence \"is my explanation clear?\" to Portuguese (Pt), all the translated Portuguese sentences illustrated in Table 1 would be acceptable. 4 Limited training data. One challenge for training a sufficiently effective model we faced is the limited size of the source training data released by organizers (4, 000 source English sentences coupled with 226, 466 Portuguese target sentences). We circumvent this limitation by training a model on a large dataset acquired from the OPUS corpus (as described in Section 3), which gives us a powerful MT system that we build on (see Section 4.2). We then exploit the STAPLE-provided training data in multiple ways (see Sections 4.3 and 4.4) to extend this primary model as a way to nuance the model to the shared task domain. Paraphrase via MT. In essence, the shared task is a mixture of MT and paraphrase. This poses a second challenge: there is no paraphrase dataset to train the system on. For this reason, we resort to using outputs from the MT system in place of paraphrases. This required generating multiple sentences for each source sentence. To meet this need, we generate multiple translation hypotheses (n-Best) using a wide beam search (Section 5.1), perform 'round-trip' translations exploiting these multiple outputs (Section 5.2), and employ ensembles of checkpoints (Section 5.3). Diverse outputs. A third challenge is that the target Portuguese sentences provided for training by organizers are produced by learners of English at various levels of fluency. This makes some of these Portuguese translations inarticulate (i.e., not quite fluent). MT systems are not usually trained to produce inarticulate translations (part of the time), and hence we needed to offer a solution that matches the different levels of language learners who produced the translations. Intuitively, we view MT systems trained at various stages (i.e., checkpoint) as learners with various levels of fluency. As such, we employ an ensemble of checkpoints to generate translations matching the different levels of learner fluency (Section 5.3). Ultimately, our contributions lie in alleviating the 3 challenges listed above.", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 150, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 562, |
|
"end": 563, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1082, |
|
"end": 1103, |
|
"text": "Sections 4.3 and 4.4)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 533, |
|
"end": 540, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of the paper is organized as follows: Section 2 is a brief overview of related work. In Section 3, we describe the data we use for both training and fine-tuning our models. Section 4 presents the proposed MT system. Section 5 describes our different methods. We discuss our results in Section 6, and conclude in Section 7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We focus our related work overview on the task of paraphrase generation and its intersection with machine translation. Paraphrasing is the task of expressing the same textual units (e.g. sentence) with alternative forms using different words while keeping the original meaning intact. 5 Over the last few years, MT has been the dominant approach for paraphrase generation. For instance, Barzilay and McKeown (2001) ; Pang et al. (2003) use multiple translations of the same text to train a paraphrase system. Similarly, Bannard and Callison-Burch (2005) More recently, advances in neural machine translation (NMT) have spurred interest in paraphrase generation (Sutskever et al., 2014; Aharoni et al., 2019) . For example, Prakash et al. (2016) employ a stacked residual LSTM network to learn a sequence-to-sequence model on paraphrase data. A parpahrase model with adversarial training is presented by (Li et al., 2017) . Wieting and Gimpel (2017) ; Iyyer et al. (2018) propose a translation-based paraphrasing system, which is based on NTM to translate one side of a parallel corpus. Paraphrase generation with pivot NMT is used by (Mallinson et al., 2017; Yu et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 387, |
|
"end": 414, |
|
"text": "Barzilay and McKeown (2001)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 417, |
|
"end": 435, |
|
"text": "Pang et al. (2003)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 553, |
|
"text": "Bannard and Callison-Burch (2005)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 661, |
|
"end": 685, |
|
"text": "(Sutskever et al., 2014;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 686, |
|
"end": 707, |
|
"text": "Aharoni et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 903, |
|
"end": 920, |
|
"text": "(Li et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 923, |
|
"end": 948, |
|
"text": "Wieting and Gimpel (2017)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 951, |
|
"end": 970, |
|
"text": "Iyyer et al. (2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1134, |
|
"end": 1158, |
|
"text": "(Mallinson et al., 2017;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1159, |
|
"end": 1175, |
|
"text": "Yu et al., 2018)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As part of the STAPLE 2020 shared task, only training data were released. The target training split is a total of 526, 466 of learner translations of 4, 000 input (source) English sentences. We note that the number of translations of each English sentence varies, with an average of \u223c 132 Portuguese target sentences for each English source sentence. As shared task organizers point out, this training dataset can be used as a reference/anchor points, and also serves as a strong baseline. For evaluation, a sets of 60, 294 translations (learner-crafted sentences) of 500 input English sentences were available on Colab. Test data were also made available only via Colab and comprised 500 English sentences learner-translated into 67, 865 Portuguese sentences. For all training, development, and test data, these translations are ranked and weighted according to actual learner response frequency. We refer the reader to the shared task description for more information. 6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shared task data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In order to develop efficient English-Portuguese MT models that can possibly work across different text domains, we make use of a large dataset of parallel English-Portuguese sentences extracted from the Open Parallel Corpus Project (OPUS) (Tiedemann, 2012) . OPUS 7 contains more than 2.7 billion parallel sentences in 90 languages. The specific corpus we extracted consists of data from multiple domains and sources including: ParaCrawl project (Espl\u00e0-Gomis et al., 2019), EUbookshop (Skadi\u0146\u0161 et al., 2014) , Tilde Model (Rozis and Skadin\u0161, 2017) , translation memories (DGT) (Steinberger et al., 2013) , Open-Subtitles (Creutz, 2018) , SciELO Parallel (Soares et al., 2018) , JRC-Acquis Multilingual (Steinberger et al., 2006) , Tanzil (Zarrabi-Zadeh, 2007), Eu-roparl Parallel (Koehn, 2005) , TED 2013 (Cettolo et al., 2012) , Wikipedia (Wo\u0142k and Marasek, 2014) , Tatoeba 8 , QCRI Educational Domain (Abdelali et al., 2014) , GNOME localization files, 9 Global Voices, 10 KDE4, 11 , Ubuntu, 12 and Multilingual Bible (Christodouloupoulos and Steedman, 2015) . To train our models, we extract more than 77.7M parallel (i.e., English-Portuguese) sentences from the whole collection. The extracted dataset comprises more than 1.5B English tokens and 1.4B Portuguese tokens. More details about the training dataset are given in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 257, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 508, |
|
"text": "(Skadi\u0146\u0161 et al., 2014)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 548, |
|
"text": "(Rozis and Skadin\u0161, 2017)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 578, |
|
"end": 604, |
|
"text": "(Steinberger et al., 2013)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 622, |
|
"end": 636, |
|
"text": "(Creutz, 2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 655, |
|
"end": 676, |
|
"text": "(Soares et al., 2018)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 729, |
|
"text": "(Steinberger et al., 2006)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 781, |
|
"end": 794, |
|
"text": "(Koehn, 2005)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 806, |
|
"end": 828, |
|
"text": "(Cettolo et al., 2012)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 841, |
|
"end": 865, |
|
"text": "(Wo\u0142k and Marasek, 2014)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 904, |
|
"end": 927, |
|
"text": "(Abdelali et al., 2014)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1021, |
|
"end": 1061, |
|
"text": "(Christodouloupoulos and Steedman, 2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1328, |
|
"end": 1335, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "OPUS data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Pre-processing is an important step in building any MT model as it can significantly affect the end results. We remove punctuation and tokenize all data with the Moses tokenizer (Koehn et al., 2007) . We also use joint Byte-Pair Encoding (BPE) with 60K split operations for subword segmentation (Sennrich et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 198, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 318, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-Processing", |
|
"sec_num": "3.3" |
|
}, |
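
{ |

"text": "As an illustration only (not part of the paper), the following minimal Python sketch mirrors the pre-processing pipeline described above using the sacremoses tokenizer and the subword-nmt BPE implementation; the codes-file name and the punctuation set are assumptions, and the joint BPE codes are assumed to have been learned beforehand with 60K operations:\nimport string\nfrom sacremoses import MosesTokenizer\nfrom subword_nmt.apply_bpe import BPE\n\ntok = MosesTokenizer(lang='en')\nwith open('joint.bpe.60k.codes') as f:  # hypothetical path to learned joint BPE codes\n    bpe = BPE(f)\n\ndef preprocess(line):\n    # Remove punctuation, tokenize with Moses, then apply BPE subword segmentation.\n    line = line.translate(str.maketrans('', '', string.punctuation))\n    line = tok.tokenize(line, return_str=True)\n    return bpe.process_line(line)\n\nprint(preprocess('Is my explanation clear?'))", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "Pre-Processing", |

"sec_num": "3.3" |

}, |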
|
{ |
|
"text": "In this section, we first describe the architecture of our models. We then explain the different ways we train the models on various subsets of the data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our models are mainly based on a Convolutional Neural Network (CNN) architecture (Kim, 2014; Gehring et al., 2017) . This convolutional architecture exploits BPE (Sennrich et al., 2016) . The architecture is as follows: 20 layers in the encoder and 20 layers in the decoder, a multiplicative attention in every decoder layer, a kernel width of 3 for both the encoder and the decoder, a hidden size 512, and an embedding size of 512, and 256 for the encoder and decoder layers respectively. We use a Fairseq implementation (Ott et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 92, |
|
"text": "(Kim, 2014;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 93, |
|
"end": 114, |
|
"text": "Gehring et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 185, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 540, |
|
"text": "(Ott et al., 2019)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We trained two MT models, English-to-Portuguese (En\u2192Pt) and Portuguese-to-English (Pt\u2192En), on 4 V100 GPUs, following the setup described in Ott et al. (2018) . For both models, the learning rate was set to 0.25, a dropout of 0.2, and a maximum tokens of 4, 000 for each mini-batch. We train our models on the 77.7M parallel sentences of the OPUS dataset described in Section 3. Validation is performed on the development data from STAPLE 2020 (Mayhew et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 157, |
|
"text": "Ott et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 464, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic En\u2194Pt Models", |
|
"sec_num": "4.2" |
|
}, |
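
{ |

"text": "For concreteness, the following Python sketch (an illustration under stated assumptions, not the paper's actual script) launches fairseq-train with the hyper-parameters reported above; the data-bin path and save directory are hypothetical:\nimport subprocess\n\n# Train the convolutional En->Pt model with learning rate 0.25, dropout 0.2,\n# and at most 4,000 tokens per mini-batch, as described in this subsection.\nsubprocess.run([\n    'fairseq-train', 'data-bin/opus.en-pt',\n    '--arch', 'fconv',\n    '--source-lang', 'en', '--target-lang', 'pt',\n    '--lr', '0.25', '--dropout', '0.2',\n    '--max-tokens', '4000',\n    '--save-dir', 'checkpoints/en-pt',\n], check=True)", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "Basic En\u2194Pt Models", |

"sec_num": "4.2" |

}, |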
|
{ |
|
"text": "We use the training data of the STAPLE 2020 shared task 13 to create a new En-Pt parallel dataset. More specifically, at the target side, we use all the Portuguese gold translations while duplicating the same English source sentence at the source side. This results in a new training set of 251, 442 En-Pt parallel sentences. We refer to this training dataset as STAPLE-TRAIN, or simply S-TRAIN. We then merge OPUS and S-TRAIN to train an En\u2192Pt model from scratch. We refer to this new model as the extended model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "En\u2192Pt Extended Model", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Fine-tuning with domain-specific data, from a domain of interest, can be an effective strategy when it is desirable to develop systems for such a domain (Ott et al., 2019 (Ott et al., , 2018 . Motivated by this, we experiment with using the STAPLE-based S-TRAIN parallel dataset from the previous subsection to fine-tune our En\u2192Pt basic model for 5 epochs. 14 We will refer to the model resulting from this fine-tuning process simply as the finetuned model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 170, |
|
"text": "(Ott et al., 2019", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 171, |
|
"end": 190, |
|
"text": "(Ott et al., , 2018", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "En\u2192Pt Fine-Tuned Model", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In order to enhance the 1-to-n En-Pt translation, we propose three methods based on the previously discussed MT models (see section 4). These methods are n-Best prediction, multi-checkpoint translation, and paraphrasing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Deployment Methods", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We first use our three MT models (basic, extended, and fine-tuned) with a beam search size of 100 to generate n-Best translation hypotheses. We then use the average log-likelihood to score each of these hypotheses. Finally, we select the hypothesis with the n highest score as our output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "n-Best Prediction", |
|
"sec_num": "5.1" |
|
}, |
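
{ |

"text": "A minimal sketch of this selection step (illustrative only; the hypothesis format is an assumption, not the paper's code):\ndef n_best(hypotheses, n=10):\n    # hypotheses: list of (tokens, token_logprobs) pairs from a beam of width 100.\n    # Score each hypothesis by its average log-likelihood and keep the top n.\n    scored = [(sum(lps) / len(lps), toks) for toks, lps in hypotheses]\n    scored.sort(key=lambda pair: pair[0], reverse=True)\n    return [toks for _, toks in scored[:n]]", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "n-Best Prediction", |

"sec_num": "5.1" |

}, |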
|
{ |
|
"text": "Paraphrasing is an effective data augmentation method which is commonly used in MT tasks (Poliak et al., 2018; Iyyer et al., 2018) . In order to extend the list of accepted Portuguese translations, we use both of our En\u2192Pt and Pt\u2192En models, as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 110, |
|
"text": "(Poliak et al., 2018;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 111, |
|
"end": 130, |
|
"text": "Iyyer et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "1. Translate the English sentences using the En\u2192Pt model. For instance, we generate n-Best (n = 10) Portuguese sentences for each English source sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "2. Then, we use the Pt\u2192En model to get n -Best English translations (we experiment with n = 1, 3, and 5) for each of the 10 Portuguese sentence. At this point, we would have 10 * n new English sentences (oftentimes with duplicate generations that we remove). These new sentences represent paraphrases of the original English sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "3. After de-duplication, the new English sentences are fed to the En\u2192Pt model to get the 1-Best Portuguese translation. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing", |
|
"sec_num": "5.2" |
|
}, |
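
{ |

"text": "The three steps above can be summarized in the following Python sketch, where en2pt and pt2en are hypothetical callables returning the k-best translation strings from the respective models:\ndef round_trip_paraphrase(src_en, en2pt, pt2en, n=10, n_prime=3):\n    pt_hyps = en2pt(src_en, k=n)  # step 1: n-Best Portuguese translations\n    # step 2: n'-Best back-translations, de-duplicated into a set\n    en_paras = {e for pt in pt_hyps for e in pt2en(pt, k=n_prime)}\n    en_paras.discard(src_en)\n    # step 3: 1-Best Portuguese translation of each English paraphrase\n    return {en2pt(e, k=1)[0] for e in en_paras}", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "Paraphrasing", |

"sec_num": "5.2" |

}, |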
|
{ |
|
"text": "Our third method is based on saving the models at given epochs (checkpoints) during training. We use the m last checkpoints (models) to generate the n-Best translation hypotheses (the same way as our n-Best prediction method). We then de-duplicate the outputs of all the m models and use them in evaluation. We now describe our evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Checkpoint Translation", |
|
"sec_num": "5.3" |
|
}, |
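
{ |

"text": "A minimal sketch of the multi-checkpoint procedure (illustrative only; checkpoints is assumed to be a list of translate functions returning translation strings, one function per saved epoch):\ndef multi_checkpoint_translate(src, checkpoints, n=10):\n    # Pool the n-Best outputs of the last m checkpoints, then de-duplicate\n    # while preserving first-seen order.\n    outputs = []\n    for translate in checkpoints:\n        outputs.extend(translate(src, k=n))\n    return list(dict.fromkeys(outputs))", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "Multi-Checkpoint Translation", |

"sec_num": "5.3" |

}, |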
|
{ |
|
"text": "In order to evaluate our methods, we carry out a number of experiments. First, we consider performance of each proposed method on the official training and development datasets of STA-PLE (Mayhew et al., 2020) . Our models were ultimately evaluated on the shared task test data. We now describe STAPLE evaluation metrics and baselines as provided by organizers, before report-ing on our results on training, development, and test.", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 209, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Weights of Translation. We note that each Portuguese translated sentence has a weight as provided in the gold dataset. The weights of translations correspond to user (learner) response rates. These weights are used primarily for scoring. The STAPLE 2020 shared task data takes the format illustrated in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 310, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics & Baselines", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Metrics. Performance of MT systems in the shared task is quantified and scored based on how well a model can return all human-curated acceptable translations, weighted by the likelihood that an English learner would respond with each translation (Mayhew et al., 2020) . As such, the main scoring metric is the weighted macro F 1 , with respect to the accepted translations. To compute weighted macro F 1 (see formula 6), the weighted F 1 for each English sentence (s) is calculated and the average over all the sentences in the corpus is computed. The weighted F 1 (see formula 5) is computed using the unweighted precision (see formula 1) and the weighted recall (see formulas 2, 3 and 4).", |
|
"cite_spans": [ |
|
{ |
|
"start": 246, |
|
"end": 267, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics & Baselines", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "P recision (s) = T Ps T Ps + F Ns (1) W T Ps = s\u2208T Ps weight(t) (2) W F Ns = s\u2208F Ns weight(t) (3) W eighted Recall (s) = W T Ps W T Ps + W F Ns (4) W eighted F 1(s) = 2 \u2022 P rec. (s) \u2022 W. Recall (s) P rec. (s) + W. Recall (s) (5) W eighted M acro F1 = s\u2208S W eighted F 1(s) |S| (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics & Baselines", |
|
"sec_num": "6.1" |
|
}, |
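
{ |

"text": "The following Python sketch transcribes Formulas 1-6 directly (an illustration under the assumption of exact string matching; predictions maps each prompt to a set of system translations, and gold maps each prompt to a translation-to-weight dictionary):\ndef weighted_macro_f1(predictions, gold):\n    total = 0.0\n    for s, pred in predictions.items():\n        gold_s = gold[s]\n        tp = pred & gold_s.keys()  # accepted translations returned by the system\n        precision = len(tp) / len(pred) if pred else 0.0  # Formula 1\n        wtp = sum(gold_s[t] for t in tp)  # Formula 2\n        wfn = sum(w for t, w in gold_s.items() if t not in pred)  # Formula 3\n        w_recall = wtp / (wtp + wfn) if (wtp + wfn) else 0.0  # Formula 4\n        denom = precision + w_recall\n        total += (2 * precision * w_recall / denom) if denom else 0.0  # Formula 5\n    return total / len(predictions)  # Formula 6", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "Evaluation Metrics & Baselines", |

"sec_num": "6.1" |

}, |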
|
{ |
|
"text": "Baselines. We adopt the two baselines offered by the task organizers. These are based on Amazon and Fairseq translation systems and are at 21.30% and 13.57%, respectively. More information about these baselines can be reviewed at the shared task site listed earlier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics & Baselines", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "In this section, we report the results of our 3 proposed methods, (a) n-Best prediction, (b) paraphrasing, and (c) multi-checkpoint translation using the MT models presented in section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation on TRAIN and DEV", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Evaluation on TRAIN. For (a) the n-Best prediction method, we explore the 4 different values of n in the set {5, 10, 15, 20}. For (b) the paraphrase method, we set the number of Portuguese sentences to n = {1, 3, 5}. Finally, (c) the multi-checkpoint method was tested with 4 different values for the number of checkpoints m = {2, 4, 6, 8}. For paraphrasing and multi-checkpoint translation, we fix the number of n-best translations n to 10, varying the values of n and m only when evaluating our extended model. This leads us to identifying the best evaluation values of n = 3 and m = 6, which we then use when evaluating our basic and fine-tuned models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 112, |
|
"text": "{5,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 116, |
|
"text": "10,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 117, |
|
"end": 120, |
|
"text": "15,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 133, |
|
"text": "20}. For (b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation on TRAIN and DEV", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Evaluation on DEV. For evaluation on the STAPLE development data, we adopt the same procedure followed for evaluation on the train split. Table 4 summarizes our experiments with different configurations (i.e., values of n, n , and m ) on train and development task data, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 145, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation on TRAIN and DEV", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Discussion. Results presented in Table 4 demonstrate that all the models with the different methods and configurations outperform the the official shared task baseline with macro F 1 scores between 27.41% and 40.78%. As expected, finetuning the En\u2192Pt basic model with the S-TRAIN data-set improves the results with a mean of +1.46% on the training data. We also observe that training on the concatenated OPUS and S-TRAIN data-sets from scratch leads to better results compared to the exclusive fine-tuning method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 40, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation on TRAIN and DEV", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Based on these results, we can see that the best configuration is the multi-checkpoint method used with the extended MT model. This configuration obtains the best macro F 1 score of 40.78% and 39.21% on the training and development STAPLE data splits, respectively. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation on TRAIN and DEV", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "In test phase, we submitted translations from 3 systems for the STAPLE English-Portuguese sub-task. The 3 systems are based on our multi-checkpoint translation with the extended model. The number of checkpoints used was m = {4, 6, 8}, and n is fixed to 10 (i.e., the best value of n identified on training data with our extended model). Table 5 shows the results of our 3 final submitted systems as returned by the shared task organizers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 337, |
|
"end": 344, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation on TEST", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Discussion. Our results indicate that when the multi-checkpoint method with the extended model and only two last checkpoints (m = 4) is used, the macro F 1 score reaches 37.07% (with a best precision of 60.14%). This method with m = 6 represents our best macro F 1 score 37.57% for the English-Portuguese translation sub-task. We note that with this configuration we outperform the Amazon and Fairseq translation baseline systems (at +15.92% and +23.99%, respectively) provided by the task organizers. We also observe that when m is set to 8, the macro F 1 slightly decreases to 37.21%. Ultimately, our findings show the utility of using multiple checkpoint ensembles as a way to mimic the various levels of language learners. Simple as this approach is, we find it quite intuitive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation on TEST", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "In this work, we described our contribution to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE) (Mayhew et al., 2020) . Our system targeted the English-Portuguese sub-task. Our models effectively make use of an approach based on n-Best prediction and multi-checkpoint translation. Our use of the OPUS dataset for training proved quite successful. In addition, based on our results, our intuitive deployment of a multi-checkpoint ensemble coupled with n-Best decoded translations seem to mirror leaner proficiency. As future work, we plan to explore other methods on new language pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 173, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://www.duolingo.com/ 2 https://www.babbel.com/ 3 https://www.busuu.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Examples taken from shared task description at: https: //sharedtask.duolingo.com/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://dictionary.cambridge.org/ dictionary/english/paraphrase", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://sharedtask.duolingo.com/#data. 7 http://opus.nlpl.eu/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "www.tatoeba.org 9 www.10n.gnome.org 10 www.globalvoices.org/ 11 www.i18n.kde.org 12 www.translations.launchpad.net", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://sharedtask.duolingo.com/#data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We choose the number of epochs arbitrarily, but note that it is a hyper-parameter that can be tuned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "MAM gratefully acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The amara corpus: Building parallel language resources for the educational domain", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Sajjad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "LREC", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "1044--1054", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed Abdelali, Francisco Guzman, Hassan Sajjad, and Stephan Vogel. 2014. The amara corpus: Build- ing parallel language resources for the educational domain. In LREC, volume 14, pages 1044-1054.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Massively multilingual neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Roee", |
|
"middle": [], |
|
"last": "Aharoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1903.00089" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. arXiv preprint arXiv:1903.00089.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Paraphrasing with bilingual parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Bannard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "597--604", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with bilingual parallel corpora. In Proceed- ings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 597-604. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Extracting paraphrases from a parallel corpus", |
|
"authors": [ |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 39th annual meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Regina Barzilay and Kathleen McKeown. 2001. Ex- tracting paraphrases from a parallel corpus. In Pro- ceedings of the 39th annual meeting of the Associa- tion for Computational Linguistics, pages 50-57.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Wit 3 : Web inventory of transcribed and translated talks", |
|
"authors": [ |
|
{ |
|
"first": "Mauro", |
|
"middle": [], |
|
"last": "Cettolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Girardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 16 th Conference of the European Association for Machine Translation (EAMT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "261--268", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Fed- erico. 2012. Wit 3 : Web inventory of transcribed and translated talks. In Proceedings of the 16 th Confer- ence of the European Association for Machine Trans- lation (EAMT), pages 261-268, Trento, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A massively parallel corpus: the bible in 100 languages. Language resources and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodouloupoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "49", |
|
"issue": "", |
|
"pages": "375--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christos Christodouloupoulos and Mark Steedman. 2015. A massively parallel corpus: the bible in 100 languages. Language resources and evaluation, 49(2):375-395.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Open subtitles paraphrase corpus for six languages", |
|
"authors": [ |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Creutz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1809.06142" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mathias Creutz. 2018. Open subtitles paraphrase corpus for six languages. arXiv preprint arXiv:1809.06142.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Paracrawl: Web-scale parallel corpora for the languages of the eu", |
|
"authors": [ |
|
{ |
|
"first": "Miquel", |
|
"middle": [], |
|
"last": "Espl\u00e0-Gomis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikel", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Forcada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gema", |
|
"middle": [], |
|
"last": "Ram\u00edrez-S\u00e1nchez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of Machine Translation Summit XVII", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "118--119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miquel Espl\u00e0-Gomis, Mikel L Forcada, Gema Ram\u00edrez-S\u00e1nchez, and Hieu Hoang. 2019. Paracrawl: Web-scale parallel corpora for the languages of the eu. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 118-119.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Convolutional sequence to sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Gehring", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Yarats", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann N", |
|
"middle": [], |
|
"last": "Dauphin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 34th International Conference on Machine Learning", |
|
"volume": "70", |
|
"issue": "", |
|
"pages": "1243--1252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243-1252. JMLR. org.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Adversarial example generation with syntactically controlled paraphrase networks", |
|
"authors": [ |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Wieting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.06059" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. arXiv preprint arXiv:1804.06059.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Convolutional neural networks for sentence classification", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1408.5882" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Europarl: A parallel corpus for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "MT summit", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "79--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, vol- ume 5, pages 79-86. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Open source toolkit for statistical machine translation: Factored translation models and confusion network decoding", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ondrej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Final Report of the Johns Hopkins", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Marcello Federico, Wade Shen, Nicola Bertoldi, Ondrej Bojar, Chris Callison-Burch, Brooke Cowan, Chris Dyer, Hieu Hoang, Richard Zens, et al. 2007. Open source toolkit for statisti- cal machine translation: Factored translation models and confusion network decoding. In Final Report of the Johns Hopkins 2006 Summer Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Paraphrase generation with deep reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Zichao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Shang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.00279" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2017. Paraphrase generation with deep reinforce- ment learning. arXiv preprint arXiv:1711.00279.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Stanford neural machine translation systems for spoken language domains", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the International Workshop on Spoken Language Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "76--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spo- ken language domains. In Proceedings of the In- ternational Workshop on Spoken Language Transla- tion, pages 76-79.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Effective approaches to attentionbased neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.04025" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Paraphrasing revisited with neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Mallinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "881--893", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lap- ata. 2017. Paraphrasing revisited with neural ma- chine translation. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Pa- pers, pages 881-893.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Simultaneous translation and paraphrase for language education", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Mayhew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klinton", |
|
"middle": [], |
|
"last": "Bicknell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brust", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Mcdowell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Monroe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Burr", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Mayhew, Klinton Bicknell, Chris Brust, Bill McDowell, Will Monroe, and Burr Settles. 2020. Si- multaneous translation and paraphrase for language education. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "fairseq: A fast, extensible toolkit for sequence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.01038" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensi- ble toolkit for sequence modeling. arXiv preprint arXiv:1904.01038.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Scaling neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1806.00187" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. arXiv preprint arXiv:1806.00187.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Syntax-based alignment of multiple translations: Extracting paraphrases and generating new sentences", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "102--109", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Pang, Kevin Knight, and Daniel Marcu. 2003. Syntax-based alignment of multiple translations: Ex- tracting paraphrases and generating new sentences. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology- Volume 1, pages 102-109. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "On the evaluation of semantic phenomena in neural machine translation using natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Poliak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.09779" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Poliak, Yonatan Belinkov, James Glass, and Benjamin Van Durme. 2018. On the evaluation of semantic phenomena in neural machine translation using natural language inference. arXiv preprint arXiv:1804.09779.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Neural paraphrase generation with stacked residual lstm networks", |
|
"authors": [ |
|
{ |
|
"first": "Aaditya", |
|
"middle": [], |
|
"last": "Prakash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sadid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathy", |
|
"middle": [], |
|
"last": "Hasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashequl", |
|
"middle": [], |
|
"last": "Datla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joey", |
|
"middle": [], |
|
"last": "Qadir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oladimeji", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Farri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1610.03098" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aaditya Prakash, Sadid A Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual lstm networks. arXiv preprint arXiv:1610.03098.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Tilde model-multilingual open data for eu languages", |
|
"authors": [ |
|
{ |
|
"first": "Roberts", |
|
"middle": [], |
|
"last": "Rozis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raivis", |
|
"middle": [], |
|
"last": "Skadin\u0161", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics", |
|
"volume": "131", |
|
"issue": "", |
|
"pages": "263--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roberts Rozis and Raivis Skadin\u0161. 2017. Tilde model-multilingual open data for eu languages. In Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017, Gothenburg, Sweden, 131, pages 263-265. Link\u00f6ping University Electronic Press.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Billions of parallel words for free: Building and using the eu bookshop corpus", |
|
"authors": [ |
|
{ |
|
"first": "Raivis", |
|
"middle": [], |
|
"last": "Skadi\u0146\u0161", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberts", |
|
"middle": [], |
|
"last": "Rozis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daiga", |
|
"middle": [], |
|
"last": "Deksne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raivis Skadi\u0146\u0161, J\u00f6rg Tiedemann, Roberts Rozis, and Daiga Deksne. 2014. Billions of parallel words for free: Building and using the eu bookshop corpus. In Proceedings of LREC.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A large parallel corpus of full-text scientific articles", |
|
"authors": [ |
|
{ |
|
"first": "Felipe", |
|
"middle": [], |
|
"last": "Soares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viviane", |
|
"middle": [], |
|
"last": "Moreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karin", |
|
"middle": [], |
|
"last": "Becker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felipe Soares, Viviane Moreira, and Karin Becker. 2018. A large parallel corpus of full-text scientific articles. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC-2018).", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Dgttm: A freely available translation memory in 22 languages", |
|
"authors": [ |
|
{ |
|
"first": "Ralf", |
|
"middle": [], |
|
"last": "Steinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Eisele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Szymon", |
|
"middle": [], |
|
"last": "Klocek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Spyridon", |
|
"middle": [], |
|
"last": "Pilos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Schl\u00fcter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1309.5226" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralf Steinberger, Andreas Eisele, Szymon Klocek, Spyridon Pilos, and Patrick Schl\u00fcter. 2013. Dgt- tm: A freely available translation memory in 22 lan- guages. arXiv preprint arXiv:1309.5226.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The jrc-acquis: A multilingual aligned parallel corpus with 20+ languages", |
|
"authors": [ |
|
{ |
|
"first": "Ralf", |
|
"middle": [], |
|
"last": "Steinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Pouliquen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Widiger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Camelia", |
|
"middle": [], |
|
"last": "Ignat", |
|
"suffix": "" |
|
}, |

{ |

"first": "Tomaz", |

"middle": [], |

"last": "Erjavec", |

"suffix": "" |

}, |

{ |

"first": "Dan", |

"middle": [], |

"last": "Tufis", |

"suffix": "" |

}, |

{ |

"first": "D\u00e1niel", |

"middle": [], |

"last": "Varga", |

"suffix": "" |

} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Tomaz Erjavec, Dan Tufis, and D\u00e1niel Varga. 2006. The jrc-acquis: A multilingual aligned parallel corpus with 20+ languages. arXiv preprint cs/0609058.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104-3112.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Parallel data, tools and interfaces in opus", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "2012", |
|
"issue": "", |
|
"pages": "2214--2218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. 2012:2214-2218.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Wieting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.05732" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Wieting and Kevin Gimpel. 2017. Paranmt-50m: Pushing the limits of paraphrastic sentence embed- dings with millions of machine translations. arXiv preprint arXiv:1711.05732.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Building subject-aligned comparable corpora and mining it for truly parallel sentence pairs", |
|
"authors": [ |
|
{ |
|
"first": "Krzysztof", |
|
"middle": [], |
|
"last": "Wo\u0142k", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krzysztof", |
|
"middle": [], |
|
"last": "Marasek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Procedia Technology", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "126--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Krzysztof Wo\u0142k and Krzysztof Marasek. 2014. Build- ing subject-aligned comparable corpora and mining it for truly parallel sentence pairs. Procedia Technol- ogy, 18:126-132.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Qanet: Combining local convolution with global self-attention for reading comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Adams", |
|
"middle": [ |
|
"Wei" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Dohan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.09541" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehen- sion. arXiv preprint arXiv:1804.09541.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Tanzil project", |
|
"authors": [ |
|
{ |
|
"first": "Hamid", |
|
"middle": [], |
|
"last": "Zarrabi-Zadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hamid Zarrabi-Zadeh. 2007. Tanzil project. URL: http://tanzil. net/wiki/Tanzil Project.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Translations proposed by English language learners at various levels of fluency, from diverse backgrounds. Our multi-checkpoint ensemble models mimic learner fluency. 4", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "An illustration of our proposed models and methods: (a) n-Best prediction method with n = 10 resulting in the En\u2192Pt basic model; (b) paraphrasing method with n = 10 and n = 3 used in the En\u2192Pt fine-tuning and the En\u2194Pt basic models, (c) multi-checkpoint method used with n = 10 and m = 4 for the En\u2192Pt extended model.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"text": "use an MT phrase table to mapping an English sentences to various non-English sentences.", |
|
"content": "<table><tr><td colspan=\"2\">English sentence is my explanation clear?</td></tr><tr><td/><td>-minha explica\u00e7\u00e3o est\u00e1 clara?</td></tr><tr><td>Accepted</td><td>-minha explica\u00e7\u00e3o\u00e9 clara?</td></tr><tr><td>Portuguese</td><td>-a minha explica\u00e7\u00e3o\u00e9 clara?</td></tr><tr><td>Translations</td><td>-est\u00e1 clara minha explica\u00e7\u00e3o?</td></tr><tr><td/><td>-minha explana\u00e7\u00e3o est\u00e1 clara?</td></tr><tr><td/><td>-\u00e9 clara minha explica\u00e7\u00e3o?</td></tr><tr><td colspan=\"2\">English sentence you look so pretty!</td></tr><tr><td/><td>-voc\u00ea est\u00e1 t\u00e3o linda!</td></tr><tr><td>Accepted</td><td>-voc\u00ea est\u00e1 t\u00e3o bonita!</td></tr><tr><td>Portuguese</td><td>-voc\u00ea est\u00e1 muito linda!</td></tr><tr><td>Translations</td><td>-voc\u00ea est\u00e1 muito bonita!</td></tr><tr><td/><td>-voc\u00ea parece t\u00e3o linda!</td></tr><tr><td/><td>-voc\u00ea parece t\u00e3o bonita!</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"content": "<table><tr><td>: English sentences with their Portuguese trans-</td></tr><tr><td>lation samples from shared task training split.</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"text": "English sentences with their Portuguese translation and Weights samples from shared task train data.", |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"text": "Performance on the STAPLE 2020 Train and Dev data splits.", |
|
"content": "<table><tr><td/><td colspan=\"2\">Extended Model</td><td/></tr><tr><td>Method</td><td colspan=\"3\">m Prec. W. Recall W. F1</td></tr><tr><td>Aws Baseline</td><td>-87.80</td><td>13.98</td><td>21.29</td></tr><tr><td>Fairseq Baseline</td><td>-28.25</td><td>11.70</td><td>13.57</td></tr><tr><td/><td>4 60.14</td><td>33.14</td><td>37.06</td></tr><tr><td>Multi-Checkpoint</td><td>6 53.83</td><td>36.50</td><td>37.57</td></tr><tr><td/><td>8 49.94</td><td>38.27</td><td>37.21</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"text": "Results on STAPLE 2020 Test Data.", |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |