|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:34:28.504249Z" |
|
}, |
|
"title": "Log-Linear Reformulation of the Noisy Channel Model for Document-Level Neural Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "S\u00e9bastien", |
|
"middle": [], |
|
"last": "Jean", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We seek to maximally use various data sources, such as parallel and monolingual data, to build an effective and efficient documentlevel translation system. In particular, we start by considering a noisy channel approach (Yu et al., 2020) that combines a target-to-source translation model and a language model. By applying Bayes' rule strategically, we reformulate this approach as a log-linear combination of translation, sentence-level and documentlevel language model probabilities. In addition to using static coefficients for each term, this formulation alternatively allows for the learning of dynamic per-token weights to more finely control the impact of the language models. Using both static or dynamic coefficients leads to improvements over a context-agnostic baseline and a context-aware concatenation model.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We seek to maximally use various data sources, such as parallel and monolingual data, to build an effective and efficient documentlevel translation system. In particular, we start by considering a noisy channel approach (Yu et al., 2020) that combines a target-to-source translation model and a language model. By applying Bayes' rule strategically, we reformulate this approach as a log-linear combination of translation, sentence-level and documentlevel language model probabilities. In addition to using static coefficients for each term, this formulation alternatively allows for the learning of dynamic per-token weights to more finely control the impact of the language models. Using both static or dynamic coefficients leads to improvements over a context-agnostic baseline and a context-aware concatenation model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Neural machine translation (NMT) Bahdanau et al., 2015) has been reported to reach near human-level performance on sentence-by-sentence translation (L\u00e4ubli et al., 2018) . Going beyond sentence-level, documentlevel NMT aims to translate sentences by taking into account neighboring source or target sentences in order to produce a more cohesive output (Jean et al., 2017; Wang et al., 2017; Maruf et al., 2019) . These approaches often train new models from scratch using parallel data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 55, |
|
"text": "Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 169, |
|
"text": "(L\u00e4ubli et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 371, |
|
"text": "(Jean et al., 2017;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 390, |
|
"text": "Wang et al., 2017;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 410, |
|
"text": "Maruf et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, in a similar spirit to Voita et al. (2019a) ; , we seek a document-level approach that maximally uses various available corpora, such as parallel and monolingual data, leveraging models trained at the sentence and document levels, while also striving for computational efficiency. We start from the noisy channel model which combines a target-to-source translation model and a document-level language model. By applying Bayes' rule, we reformulate this approach into a log-linear model. It consists of a translation model, as well as sentence and document-level language models. This reformulation admits an auto-regressive expression of tokenby-token target document probabilities, facilitating the use of existing inference algorithms such as beam search. In this log-linear model, there are coefficients modulating the impact of the language models. We first consider static coefficients and, for more fine-grained control, we train a merging module that dynamically adjusts the LM weights.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 58, |
|
"text": "Voita et al. (2019a)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With either static or dynamic coefficients, we observe improvements over a context-agnostic baseline, as well as a context-aware concatenation model (Tiedemann and Scherrer, 2017) . Similarly to the noisy channel model, our approach reuses off-the-shelf models and benefits from future translation or language modelling improvements.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 179, |
|
"text": "(Tiedemann and Scherrer, 2017)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Given the availability of various heterogeneous data sources that could be used for document-level translation, we seek a strategy to maximally use them. These sources include parallel data, at either the sentence or document level, as well as more broadly available monolingual data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-linear reformulation of the noisy channel model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As the starting point, we consider the noisy channel approach proposed by . Given a source document (X (1) , . . . , X (N ) ) and its translation (Y (1) , . . . , Y (N ) ), they assume a generation process where target sentences are produced from left to right, and where each source sentence is translated only from the corresponding target sentence. Under these assumptions, the probability of a source-target document pair is given by", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 123, |
|
"text": "(N )", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-linear reformulation of the noisy channel model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "P (X (1) , . . . ,X (N ) , Y (1) , . . . , Y (N ) ) = N n=1 P (X (n) |Y (n) )P (Y (n) |Y (<n) )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-linear reformulation of the noisy channel model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As such, the conditional probability of the target document given the source is expressed by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-linear reformulation of the noisy channel model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "P (Y (1) , . . . , Y (N ) |X (1) , . . . , X (N ) ) \u221d N n=1 P (X (n) |Y (n) )P (Y (n) |Y (<n) ) = N n=1 P (Y (n) |X (n) ) P (Y (n) |Y (<n) ) P (Y (n) ) \u221dP (Y (n) |X (n) ,Y (<n) )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-linear reformulation of the noisy channel model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We therefore generate context-aware translations by combining a translation model (TM) P (Y (n) |X (n) ) with both sentence-level P (Y (n) ) and document-level P (Y (n) |Y (<n) ) language models (LM). To calibrate the generation process, we introduce coefficients \u03b1 \u2208 R and \u03b2 \u2208 R to control the contribution of each language model, which are tuned on a validation set:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-linear reformulation of the noisy channel model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "log P (Y (n) |X (n) , Y (<n) ) (1) = Ln i=1 log P (y (n) i |y (n) <i , X (n) ) + \u03b1 log P (y (n) i |y (n) <i , Y (<n) ) \u2212 \u03b2 log P (y (n) i |y (n) <i ) + C (n) i , where C (n) i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-linear reformulation of the noisy channel model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "is a normalization constant and L n is the target sentence length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-linear reformulation of the noisy channel model", |
|
"sec_num": "2" |
|
}, |
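
{

"text": "As a rough illustration of Eq. 1, the per-token log-linear combination can be sketched as follows (a minimal sketch rather than the authors' released code; the logits tensors, their shapes and the static coefficient values are assumptions):\n\nimport torch\n\ndef combined_step_log_probs(tm_logits, doc_lm_logits, sent_lm_logits, alpha=0.5, beta=0.5):\n    # Each tensor holds the logits of one decoding step, with shape (beam, vocab).\n    log_p_tm = torch.log_softmax(tm_logits, dim=-1)        # log P(y_i | y_<i, X^(n))\n    log_p_doc = torch.log_softmax(doc_lm_logits, dim=-1)   # log P(y_i | y_<i, Y^(<n))\n    log_p_sent = torch.log_softmax(sent_lm_logits, dim=-1)  # log P(y_i | y_<i)\n    # Log-linear combination of Eq. 1; the per-step normalization constant C_i^(n)\n    # is omitted here, as is common in log-linear decoding.\n    return log_p_tm + alpha * log_p_doc - beta * log_p_sent",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Log-linear reformulation of the noisy channel model",

"sec_num": "2"

},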
|
{ |
|
"text": "Similarly to the noisy channel approach , we use off-the-shelf translation and language models. As such, future improvements to either translation or language modelling can easily be leveraged. Our reformulation however admits a more efficient search procedure, unlike that by .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-linear reformulation of the noisy channel model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The translation model is implemented as any auto-regressive neural translation model. We use the Transformer encoder-decoder architecture (Vaswani et al., 2017) . Given a source sentence x 1 , . . . , x L , each token and its position are projected into a continuous embedding s 0,1 , . . . , s 0,L . These representations are passed through a sequence of M encoder layers that each comprise self-attention and feed-forward modules, resulting in the final representations s M,1 , . . . , s M,L . The decoder updates target embeddings through similar layers, which additionally attend to the encoder output, to obtain final hidden states t M,1 , . . . , t M,L . Token probabilities may be obtained by projecting these representations and applying softmax normalization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 160, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model parameterization", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Language models are implemented as Transformer decoders without cross-attention. We use a single language model trained on sequences of consecutive sentences to obtain both sentence-level and document-level probabilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model parameterization", |
|
"sec_num": "2.1" |
|
}, |
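
{

"text": "A minimal sketch of how a single left-to-right language model can supply both probabilities (the lm.token_log_probs scoring helper and the token id lists are illustrative assumptions, not part of the paper): the sentence-level score conditions only on the current sentence, while the document-level score additionally prefixes the preceding target sentences.\n\ndef lm_scores(lm, prev_sentence_ids, sentence_ids):\n    # lm.token_log_probs is assumed to return one log-probability per input token.\n    sent_level = lm.token_log_probs(sentence_ids)  # log P(y_i | y_<i)\n    doc_level = lm.token_log_probs(prev_sentence_ids + sentence_ids)[len(prev_sentence_ids):]  # log P(y_i | y_<i, Y^(<n))\n    return sent_level, doc_level",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model parameterization",

"sec_num": "2.1"

},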
|
{ |
|
"text": "As extra-sentential information is not uniformly useful for translation, we propose dynamic coefficients for the different models by generalizing Eq. 1:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic merging", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "L = \u2212 N n=1 Ln i=1 log P (y (n) i |y (n) <i , X (n) ) + \u03b1 (n) i log P (y (n) i |y (n) <i , Y (<n) ) \u2212 \u03b2 (n) i log P (y (n) i |y (n) <i ) + C (n) i . (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic merging", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "With the translation and language models kept fixed, the coefficients \u03b1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic merging", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(n) i and \u03b2 (n) i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic merging", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "are computed by an auxiliary neural network which uses Y (<n) , Y (n) and X (n) . We call this network a merging module and implement it as a feed-forward network on top of the translation and language models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 79, |
|
"text": "(n)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic merging", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For every token, the corresponding last hidden states of the translation model, sentence-level LM and document-level LM are concatenated. Each non-final layer (k = 1, . . . , K \u2212 1) is a feedforward block", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic coefficient computation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h k = LN(h k\u22121 +drop(W k,2 (ReLU(W k,1 h k\u22121 ))),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic coefficient computation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where LN and drop respectively denote layer normalization and dropout (Ba et al., 2016; Srivastava et al., 2014) . The final layer is similar, but there is no residual connection (and no dropout) as the final linear transformation projects the result to 2 dimensions, so that (\u03b1, \u03b2) = W K,2 (ReLU(W K,1 h K\u22121 )).", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 87, |
|
"text": "(Ba et al., 2016;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 88, |
|
"end": 112, |
|
"text": "Srivastava et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic coefficient computation", |
|
"sec_num": "3.1" |
|
}, |
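
{

"text": "A sketch of the merging module described above (dimensions and module names are assumptions; with d_model = 512 the concatenated input has width 1536): residual feed-forward blocks over the concatenated hidden states, followed by a final projection to the two coefficients.\n\nimport torch\nimport torch.nn as nn\n\nclass MergingModule(nn.Module):\n    # Predicts per-token (alpha, beta) from the concatenated last hidden states of\n    # the translation model, the document-level LM run and the sentence-level LM run.\n    def __init__(self, d_model=512, d_ff=1536, num_blocks=1, dropout=0.1):\n        super().__init__()\n        d_in = 3 * d_model\n        self.blocks = nn.ModuleList([\n            nn.ModuleDict({\n                'w1': nn.Linear(d_in, d_ff),\n                'w2': nn.Linear(d_ff, d_in),\n                'norm': nn.LayerNorm(d_in),\n            }) for _ in range(num_blocks)\n        ])\n        self.dropout = nn.Dropout(dropout)\n        self.out_w1 = nn.Linear(d_in, d_ff)\n        self.out_w2 = nn.Linear(d_ff, 2)  # final projection to (alpha, beta)\n\n    def forward(self, tm_state, doc_lm_state, sent_lm_state):\n        # All inputs have shape (batch, length, d_model); the output is (batch, length, 2).\n        h = torch.cat([tm_state, doc_lm_state, sent_lm_state], dim=-1)\n        for blk in self.blocks:\n            h = blk['norm'](h + self.dropout(blk['w2'](torch.relu(blk['w1'](h)))))\n        return self.out_w2(torch.relu(self.out_w1(h)))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dynamic coefficient computation",

"sec_num": "3.1"

},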
|
{ |
|
"text": "Data We run experiments on English-Russian data from OpenSubtitles (Lison et al., 2018) , which was used in many recent studies on document-level translation (Voita et al., 2019b,a; Mansimov et al., 2020; Jean et al., 2019) . Language models are trained on approximately 30M sequences of 4 consecutive sentences (Voita et al., 2019a) .The parallel data was originally preprocessed by Voita et al. (2019b) , yielding 6M examples. For 1.5M of these data points, the 3 preceding source and target sentences are provided. We use this subset to train the merging module that predicts the per-token coefficients for each model. We uniformly set the number of contextual sentences between 1 and 3 to match the test condition.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 87, |
|
"text": "(Lison et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 181, |
|
"text": "(Voita et al., 2019b,a;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 204, |
|
"text": "Mansimov et al., 2020;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 223, |
|
"text": "Jean et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 312, |
|
"end": 333, |
|
"text": "(Voita et al., 2019a)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 404, |
|
"text": "Voita et al. (2019b)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We apply byte-pair encoding (BPE) (Sennrich et al., 2016) , with a total of 32k merge operations, separately on each language pair, as Russian and English use different sets of alphabets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 57, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Models Translation models are standard Transformers in their base configuration (Vaswani et al., 2017 ). The language model is implemented as a Transformer decoder of the same size, except for a smaller feed-forward dimension d f f = 1024. The merging module has 2 layers, with d f f = 1536.", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 101, |
|
"text": "(Vaswani et al., 2017", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Learning The translation and language models, as well as the merging module, are trained with label smoothing set to 10%. The TM is trained with 20% dropout, while it is set to 10% for the LMs and merging module.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Evaluation Translation quality is evaluated with tokenized BLEU on lowercased data, using beam search with its width set to 5. We average 5 checkpoints for the translation models. Sentences are generated from left to right, and the beam is reset for every sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "4.1" |
|
}, |
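
{

"text": "Checkpoint averaging, as used for the translation models above, can be sketched as follows (a generic PyTorch snippet rather than the authors' script; it assumes each file stores a flat parameter state dict):\n\nimport torch\n\ndef average_checkpoints(paths):\n    # Element-wise average of the parameters stored in several checkpoint files.\n    avg = None\n    for path in paths:\n        state = torch.load(path, map_location='cpu')\n        if avg is None:\n            avg = {k: v.clone().float() for k, v in state.items()}\n        else:\n            for k, v in state.items():\n                avg[k] += v.float()\n    return {k: v / len(paths) for k, v in avg.items()}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Settings",

"sec_num": "4.1"

},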
|
{ |
|
"text": "With our approach, using static coefficients, we reach a BLEU score of 34.31, which is a modest gain of 0.21 BLEU over the baseline and 0.8 over a model trained on concatenated sentences ( DocRepair (Voita et al., 2019a) , a two-pass method that post-edits the output of a baseline system, obtains a slightly higher BLEU score of 34.60. Both approaches could be combined by instead post-editing the output of our models, which we leave for future investigation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 220, |
|
"text": "(Voita et al., 2019a)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We observe limited correlation between BLEU and reference NLL (Och, 2003; Lee et al., 2020) . On the validation set, the per-token baseline loss (with label smoothing) is 13.09. Using static coefficients, it actually increases to 13.23, while it decreases to 12.86 with dynamic coefficients. Table 2 presents the BLEU scores on the validation set using greedy validation for different static values of \u03b1 and \u03b2. Only using the document-level LM (\u03b1 > 0, \u03b2 = 0) leads to worse performance than the baseline. It is critical to counter-balance the document-level LM with the sentence-level LM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 73, |
|
"text": "(Och, 2003;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 74, |
|
"end": 91, |
|
"text": "Lee et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 292, |
|
"end": 299, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BLEU-NLL correlation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Dynamic coefficients The dynamic coefficients \u03b1 and \u03b2 predicted by the merging module are highly correlated (Figure 1 (left) ). As a conjecture, this high correlation may be explained by the use of the same language model to obtain both sentence and document-level scores. Figure 1 (right) shows the average value of the dynamic coefficient \u03b1 for frequent words within the validation reference set. In particular, \u0422\u044b and \u0412\u044b, which are translations of you that depend on plurality and formality, are assigned high weights.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 124, |
|
"text": "(Figure 1 (left)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 289, |
|
"text": "Figure 1 (right)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contribution of each language model (static)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Challenge sets While static and dynamic coefficients lead to similar BLEU, using dynamic coefficients often results in better performance on multiple-choice scoring-based challenge sets targeting specific translation phenomena (Table 3) (Voita et al., 2019b) . 1 We conjecture this likely happens because dynamic coefficients can more narrowly focus on particular subsets of target sentences that benefit from document-level context.", |
|
"cite_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 258, |
|
"text": "(Voita et al., 2019b)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 262, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 236, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contribution of each language model (static)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Document-level NMT Neural machine translation may be extended to include extra-sentential information in many ways, as surveyed by Maruf et al. (2019) . The model architecture may be modified, for example by encoding previous source sentences or generated translations and attending to them (Jean et al., 2017; Wang et al., 2017; Voita et al., 2018; Miculicich et al., 2018; Maruf and Haffari, 2018; Tu et al., 2018) . Otherwise, by simply concatenating multiple sentences together as input, existing model architectures may be used without additional changes (Tiedemann and Scherrer, 2017; Junczys-Dowmunt, 2019) . Voita et al. (2019b) and Voita et al. (2019a) propose refining the output of a context-agnostic baseline, using a new model trained from either document-level parallel data or from round-trip translated monolingual data. The noisy channel approach similarly uses large-scale monolingual data to refine translations, while using arbitrary, and potentially pre-trained, translation or language models, as discussed in Sec. 2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 150, |
|
"text": "Maruf et al. (2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 310, |
|
"text": "(Jean et al., 2017;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 329, |
|
"text": "Wang et al., 2017;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 349, |
|
"text": "Voita et al., 2018;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 374, |
|
"text": "Miculicich et al., 2018;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 399, |
|
"text": "Maruf and Haffari, 2018;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 416, |
|
"text": "Tu et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 560, |
|
"end": 590, |
|
"text": "(Tiedemann and Scherrer, 2017;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 613, |
|
"text": "Junczys-Dowmunt, 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 616, |
|
"end": 636, |
|
"text": "Voita et al. (2019b)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 641, |
|
"end": 661, |
|
"text": "Voita et al. (2019a)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our approach shares many similarities with the above, but admits a more straightforward generation process. If desired, we could still rerank the beam search output with a channel model, which might improve general translation quality for reasons not necessarily related to context. Language modelling Language model probabilities have been used to rerank NMT hypotheses (see, e.g., Stahlberg et al., 2019) . Additionally, direct integration of a language model into a translation model, using various fusion techniques, improves generation quality and admits the use of single-pass search algorithms (Gulcehre et al., 2015) . To promote diversity in dialogue systems, model scores may be adjusted by negatively weighing a language model (Li et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 383, |
|
"end": 406, |
|
"text": "Stahlberg et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 601, |
|
"end": 624, |
|
"text": "(Gulcehre et al., 2015)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 738, |
|
"end": 755, |
|
"text": "(Li et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, we set to use heterogeneous data sources in an effective and efficient manner for document-level NMT. We reformulated the noisy channel approach and end up with a left-to-right log-linear model combining a baseline machine translation model with sentence-level and document-level language models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "To modulate the impact of the language models, we dynamically adapt their coefficients at each time step with a merging module taking into account the translation and language models. We observe improvements over a context-agnostic baseline and using dynamic coefficients helps capture documentlevel linguistic phenomena better.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Future directions include combining our approach with MT models trained on back-translated documents, exploring its applicability to other modalities such as vision and speech, and considering deeper fusion of the models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The conditional probability of the target document given the source is expressed by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Expanded derivation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "P (Y (1) , ...Y (N ) |X (1) , ..., X (N ) ) = N n=1 P (X (n) |Y (n) )P (Y (n) |Y (<n) ) P (X (1) , ..., X (N ) ) = N n=1 P (Y (n) |X (n) )P (X (n) ) P (Y (n) ) P (Y (n) |Y (<n) ) P (X (1) , ..., X (N ) ) = C(X) N n=1 P (Y (n) |X (n) ) P (Y (n) |Y (<n) ) P (Y (n) ) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Expanded derivation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where (N ) ) does not affect the optimal target sentences given a source document.", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 10, |
|
"text": "(N )", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Expanded derivation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Translation model We validate models with greedy search. We use the base transformer configuration (Vaswani et al., 2017) . We use effective batches of approximately 31500 source tokens and optimize models with Adam (Kingma and Ba, 2014) . We follow a learning rate schedule similar to Vaswani et al. (2017) , with 16,000 warmup steps and scaled by 4. We experimented with 10% and 20% dropout, obtaining higher validation BLEU with the latter. We use pre-LN transformer layers (Xiong et al., 2020).", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 121, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 237, |
|
"text": "Ba, 2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 307, |
|
"text": "Vaswani et al. (2017)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Hyper-parameters", |
|
"sec_num": null |
|
}, |
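
{

"text": "For reference, the schedule above can be written as a function of the update step (a sketch assuming the usual inverse-square-root schedule of Vaswani et al. (2017), with d_model = 512, 16,000 warmup steps and a constant factor of 4):\n\ndef learning_rate(step, d_model=512, warmup=16000, scale=4.0):\n    # Inverse-square-root decay with linear warmup, multiplied by a constant factor.\n    step = max(step, 1)\n    return scale * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "B Hyper-parameters",

"sec_num": null

},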
|
{ |
|
"text": "We use a similar configuration to the translation model, except with 64,000 warmup steps and post-LN transformer layers (Xiong et al., 2020).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We evaluate greedy validation BLEU with a grid search over (\u03b1, \u03b2) \u2208 {0, 0.1, . . . , 1} \u00d7 {0, 0.1, . . . , 1}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Static coefficients", |
|
"sec_num": null |
|
}, |
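
{

"text": "A sketch of this grid search (the evaluate_bleu helper is an assumption; it would decode the validation set greedily with the given static coefficients and return BLEU):\n\ndef grid_search(evaluate_bleu):\n    grid = [i / 10 for i in range(11)]  # {0, 0.1, ..., 1}\n    # Keep the (BLEU, alpha, beta) triple with the highest validation BLEU.\n    return max((evaluate_bleu(a, b), a, b) for a in grid for b in grid)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Static coefficients",

"sec_num": null

},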
|
{ |
|
"text": "We varied the number of layers between 1 and 3. We also considered adding cross-attention within the merging module, but we did not observe improvements in preliminary experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic coefficients", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If we train the merging module without label smoothing (instead of 10%), greedy validation BLEU drops by approximately 1 BLEU point. We also observe much higher variability in the coefficients, which may be caused by the unbounded optimal value of \u03b1 when a target token is the most likely according to the document-level LM. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Label smoothing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We train models with PyTorch 1.2.0 (Paszke et al., 2019). We use a single NVIDIA 1080 Ti or 2080 Ti, running CUDA 10.2 on CentOS Linux 7 (Core).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "F Computing infrastructure", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Data: https://box.com/shared/static/ qmad0j3e6qknas9nwznyw1w0l5vgpdf4. zip multi_bleu.perl: https://raw.githubusercontent. com/moses-smt/mosesdecoder/ master/scripts/generic/ multi-bleu.perl", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "G Links", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Using the difference of language models scores gives higher accuracy, but they cannot be used in isolation to generate relevant translations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI), Samsung Research (Improving Deep Learning using Latent Structure) and NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In International Con- ference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "On using monolingual corpora in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kelvin", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Loic", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huei-Chi", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1503.03535" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On us- ing monolingual corpora in neural machine transla- tion. arXiv preprint arXiv:1503.03535.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Fill in the blanks: Imputing missing sentences for larger-context neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "S\u00e9bastien", |
|
"middle": [], |
|
"last": "Jean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Bapna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.14075" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S\u00e9bastien Jean, Ankur Bapna, and Orhan Firat. 2019. Fill in the blanks: Imputing missing sentences for larger-context neural machine translation. arXiv preprint arXiv:1910.14075.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Does neural machine translation benefit from larger context? arXiv preprint", |
|
"authors": [ |
|
{ |
|
"first": "Sebastien", |
|
"middle": [], |
|
"last": "Jean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stanislas", |
|
"middle": [], |
|
"last": "Lauly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.05135" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine trans- lation benefit from larger context? arXiv preprint arXiv:1704.05135.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Microsoft translator at wmt 2019: Towards large-scale document-level neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "225--233", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Junczys-Dowmunt. 2019. Microsoft translator at wmt 2019: Towards large-scale document-level neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 225-233, Flo- rence, Italy. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Has machine translation achieved human parity? a case for document-level evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "L\u00e4ubli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Volk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4791--4796", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel L\u00e4ubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? a case for document-level evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4791-4796.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "On the discrepancy between density estimation and sequence generation", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dustin", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2002.07233" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Lee, Dustin Tran, Orhan Firat, and Kyunghyun Cho. 2020. On the discrepancy between density es- timation and sequence generation. arXiv preprint arXiv:2002.07233.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A diversity-promoting objective function for neural conversation models", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1510.03055" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. arXiv preprint arXiv:1510.03055.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Opensubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Lison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milen", |
|
"middle": [], |
|
"last": "Kouylekov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pierre Lison, J\u00f6rg Tiedemann, and Milen Kouylekov. 2018. Opensubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel cor- pora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Capturing document context inside sentence-level neural machine translation models with self-training", |
|
"authors": [ |
|
{ |
|
"first": "Elman", |
|
"middle": [], |
|
"last": "Mansimov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e1bor", |
|
"middle": [], |
|
"last": "Melis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2003.05259" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elman Mansimov, G\u00e1bor Melis, and Lei Yu. 2020. Capturing document context inside sentence-level neural machine translation models with self-training. arXiv preprint arXiv:2003.05259.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Document context neural machine translation with memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Sameen", |
|
"middle": [], |
|
"last": "Maruf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1275--1284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameen Maruf and Gholamreza Haffari. 2018. Docu- ment context neural machine translation with mem- ory networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), volume 1, pages 1275-1284.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A survey on document-level machine translation: Methods and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Sameen", |
|
"middle": [], |
|
"last": "Maruf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fahimeh", |
|
"middle": [], |
|
"last": "Saleh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1912.08494" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameen Maruf, Fahimeh Saleh, and Gholamreza Haf- fari. 2019. A survey on document-level machine translation: Methods and evaluation. arXiv preprint arXiv:1912.08494.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Document-level neural machine translation with hierarchical attention networks", |
|
"authors": [ |
|
{ |
|
"first": "Lesly", |
|
"middle": [], |
|
"last": "Miculicich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhananjay", |
|
"middle": [], |
|
"last": "Ram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikolaos", |
|
"middle": [], |
|
"last": "Pappas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2947--2954", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention net- works. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2947-2954.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Minimum error rate training in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Compu- tational Linguistics-Volume 1, pages 160-167. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), volume 1, pages 1715-1725.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Cued@ wmt19: Ewc&lms", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Stahlberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danielle", |
|
"middle": [], |
|
"last": "Saunders", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adri\u00e0", |
|
"middle": [], |
|
"last": "De Gispert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "364--373", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Stahlberg, Danielle Saunders, Adri\u00e0 de Gispert, and Bill Byrne. 2019. Cued@ wmt19: Ewc&lms. In Proceedings of the Fourth Conference on Ma- chine Translation (Volume 2: Shared Task Papers, Day 1), pages 364-373.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Neural machine translation with extended context", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Scherrer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Third Workshop on Discourse in Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "82--92", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann and Yves Scherrer. 2017. Neural ma- chine translation with extended context. In Proceed- ings of the Third Workshop on Discourse in Machine Translation, pages 82-92.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Learning to remember translation history with a continuous cache", |
|
"authors": [ |
|
{ |
|
"first": "Zhaopeng", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuming", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Transactions of the Association of Computational Linguistics", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "407--420", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to remember translation history with a continuous cache. Transactions of the Association of Computational Linguistics, 6:407-420.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Context-aware monolingual repair for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Voita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Voita, Rico Sennrich, and Ivan Titov. 2019a. Context-aware monolingual repair for neural ma- chine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Lan- guage Processing and 9th International Joint Con- ference on Natural Language Processing, Hong Kong, China. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Voita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Voita, Rico Sennrich, and Ivan Titov. 2019b. When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Context-aware neural machine translation learns anaphora resolution", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Voita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Serdyukov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1264--1274", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine transla- tion learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1264-1274.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Exploiting cross-sentence context for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Longyue", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhaopeng", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2826--2831", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2826-2831.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Putting machine translation in context with the noisy channel model", |
|
"authors": [ |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Sartran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Stokowiec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lingpeng", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, and Chris Dyer. 2020. Putting machine translation in context with the noisy channel model. TACL.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Improving the transformer translation model with document-level context", |
|
"authors": [ |
|
{ |
|
"first": "Jiacheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huanbo", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feifei", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfang", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "533--542", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 533-542.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"content": "<table><tr><td>).</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"content": "<table><tr><td>: Deixis (D), lexical cohesion (LC), inflection</td></tr><tr><td>ellipsis (I) and VP ellipsis (VP) accuracy (%). Best</td></tr><tr><td>scores from translation models only are highlighted.</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |