|
{ |
|
"paper_id": "P16-1010", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:57:38.241948Z" |
|
}, |
|
"title": "Graph-Based Translation Via Graph Segmentation", |
|
"authors": [ |
|
{ |
|
"first": "Liangyou", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "One major drawback of phrase-based translation is that it segments an input sentence into continuous phrases. To support linguistically informed source discontinuity, in this paper we construct graphs which combine bigram and dependency relations and propose a graph-based translation model. The model segments an input graph into connected subgraphs, each of which may cover a discontinuous phrase. We use beam search to combine translations of each subgraph left-to-right to produce a complete translation. Experiments on Chinese-English and German-English tasks show that our system is significantly better than the phrase-based model by up to +1.5/+0.5 BLEU scores. By explicitly modeling the graph segmentation, our system obtains further improvement, especially on German-English.", |
|
"pdf_parse": { |
|
"paper_id": "P16-1010", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "One major drawback of phrase-based translation is that it segments an input sentence into continuous phrases. To support linguistically informed source discontinuity, in this paper we construct graphs which combine bigram and dependency relations and propose a graph-based translation model. The model segments an input graph into connected subgraphs, each of which may cover a discontinuous phrase. We use beam search to combine translations of each subgraph left-to-right to produce a complete translation. Experiments on Chinese-English and German-English tasks show that our system is significantly better than the phrase-based model by up to +1.5/+0.5 BLEU scores. By explicitly modeling the graph segmentation, our system obtains further improvement, especially on German-English.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Statistical machine translation (SMT) starts from sequence-based models. The well-known phrasebased (PB) translation model (Koehn et al., 2003) has significantly advanced the progress of SMT by extending translation units from single words to phrases. By using phrases, PB models can capture local phenomena, such as word order, word deletion, and word insertion. However, one of the significant weaknesses in conventional PB models is that only continuous phrases are used, so generalizations such as French ne . . . pas to English not cannot be learned. To solve this, syntax-based models (Galley et al., 2004; Chiang, 2005; Marcu et al., 2006) take tree structures into consideration to learn translation patterns by using non-terminals for generalization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 143, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 612, |
|
"text": "(Galley et al., 2004;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 626, |
|
"text": "Chiang, 2005;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 627, |
|
"end": 646, |
|
"text": "Marcu et al., 2006)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "Table 1: Comparison between our work and previous work in terms of three aspects: keeping continuous phrases (C), allowing discontinuous phrases (D), and input structures (S). (Koehn et al., 2003): C, sequence; (Galley and Manning, 2010): C and D, sequence; treelet models: D, tree; this work: C and D, graph.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": null

},
|
{ |
|
"text": "However, the expressiveness of these models is confined by hierarchical constraints of the grammars used (Galley and Manning, 2010 ) since these patterns still cover continuous spans of an input sentence. By contrast, , and Xiong et al. (2007) take treelets from dependency trees as the basic translation units. These treelets are connected and may cover discontinuous phrases. However, their models lack the ability to handle continuous phrases which are not connected in trees but could in fact be extremely important to system performance (Koehn et al., 2003) . Galley and Manning (2010) directly extract discontinuous phrases from input sequences. However, without imposing additional restrictions on discontinuity, the amount of extracted rules can be very large and unreliable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 130, |
|
"text": "(Galley and Manning, 2010", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 243, |
|
"text": "Xiong et al. (2007)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 542, |
|
"end": 562, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 565, |
|
"end": 590, |
|
"text": "Galley and Manning (2010)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Different from previous work (as shown in Table 1), in this paper we use graphs as input structures and propose a graph-based translation model to translate a graph into a target string. The basic translation unit in this model is a connected subgraph which may cover discontinuous phrases. The main contributions of this work are summarized as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We propose to use a graph structure to combine a sequence and a tree (Section 3.1). The graph contains both local relations between words from the sequence and long-distance relations from the tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We present a translation model to translate a graph (Section 3). The model segments the graph into subgraphs and uses beam search to generate a complete translation from left to right by combining translation options of each subgraph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We present a set of sparse features to explicitly model the graph segmentation (Section 4). These features are based on edges in the input graph, each of which is either inside a subgraph or connects the subgraph with a previous subgraph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Experiments (Section 5) on Chinese-English and German-English tasks show that our model is significantly better than the PB model. After incorporating the segmentation model, our system achieves still further improvement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We first review the basic PB translation approach, which will be extended to our graph-based translation model. Given a pair of sentences S, T , the conventional PB model is defined as Equation (1):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Review: Phrase-based Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "p(t I 1 | s I 1 ) = I i=1 p(t i |s a i )d(s a i , s a i\u22121 ) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Review: Phrase-based Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The target sentence T is broken into I phrases", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Review: Phrase-based Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "t 1 \u2022 \u2022 \u2022 t I ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Review: Phrase-based Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "each of which is a translation of a source phrase s a i . d is a distance-based reordering model. Note that in the basic PB model, the phrase segmentation is not explicitly modeled which means that different segmentations are treated equally (Koehn, 2010) . The performance of PB translation relies on the quality of phrase pairs in a translation table. Conventionally, a phrase pair s, t has two properties: (i) s and t are continuous phrases. (ii) s, t is consistent with a word alignment A (Och and Ney, 2004):", |
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 255, |
|
"text": "(Koehn, 2010)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Review: Phrase-based Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2200(i, j) \u2208 A, s i \u2208 s \u21d4 t j \u2208 t and \u2203s i \u2208 s, t j \u2208 t, (i, j) \u2208 A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Review: Phrase-based Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "PB decoders generate hypotheses (partial translations) from left to right. Each hypothesis maintains a coverage vector to indicate which source words have been translated so far. A hypothesis can be extended on the right by translating an 0 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Review: Phrase-based Translation", |
|
"sec_num": "2" |
|
}, |
|
|
{ |
|
"text": "Figure 1: Beam search for phrase-based MT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Review: Phrase-based Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 denotes a covered source position while indicates an uncovered position (Liu and Huang, 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 95, |
|
"text": "(Liu and Huang, 2014)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Review: Phrase-based Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "uncovered source phrase. The translation process ends when all source words have been translated. Beam search (as in Figure 1 ) is taken as an approximate search strategy to reduce the size of the decoding space. Hypotheses which cover the same number of source words are grouped in a stack. Hypotheses can be pruned according to their partial translation cost and an estimated future cost.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 125, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Review: Phrase-based Translation", |
|
"sec_num": "2" |
|
}, |
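
{

"text": "To make the stack-based beam search of Figure 1 concrete, here is a minimal sketch (our illustration, not the authors' code; the translation-option format is an assumption): hypotheses covering k source words live in stack k, each stack is pruned to the beam size before expansion, and a hypothesis is extended by translating an uncovered continuous phrase. Reordering and future-cost terms are omitted for brevity.\n\ndef beam_search(n_words, options, beam_size=10):\n    # options: dict mapping a source span (i, j) to a list of\n    # (target_phrase, cost) pairs -- an assumed input format.\n    # stacks[k] holds hypotheses (coverage, translation, cost)\n    # covering exactly k source words.\n    stacks = [[] for _ in range(n_words + 1)]\n    stacks[0].append((frozenset(), '', 0.0))\n    for k in range(n_words):\n        stacks[k].sort(key=lambda h: h[2])\n        del stacks[k][beam_size:]  # histogram pruning per stack\n        for coverage, translation, cost in stacks[k]:\n            for (i, j), candidates in options.items():\n                span = frozenset(range(i, j))\n                if span & coverage:\n                    continue  # phrase overlaps already-translated words\n                for phrase, c in candidates:\n                    stacks[k + len(span)].append(\n                        (coverage | span, translation + ' ' + phrase, cost + c))\n    done = stacks[n_words]\n    return min(done, key=lambda h: h[2]) if done else None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Review: Phrase-based Translation",

"sec_num": "2"

},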
|
{ |
|
"text": "Our graph-based translation model extends PB translation by translating an input graph rather than a sequence to a target string. The graph is segmented into a sequence of connected subgraphs, each of which corresponds to a target phrase, as in Equation (2):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph-Based Translation", |
|
"sec_num": "3" |
|
}, |
|
|
{ |
|
"text": "p(t I 1 | G(s I 1 )) = I i=1 p(t i |G(s a i ))d(G(s a i ), G(s a i\u22121 )) \u2248 I i=1 p(t i |G(s a i ))d(s a i ,s a i\u22121 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph-Based Translation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where G(s i ) denotes a connected source subgraph which covers a (discontinuous) phrases i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph-Based Translation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As a more powerful and natural structure for sentence modeling, a graph can model various kinds of word-relations together in a unified representation. In this paper, we use graphs to combine two commonly used relations: bigram relations and dependency relations. Figure 2 shows an example of a graph. Each edge in the graph denotes either a dependency relation or a bigram relation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 272, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Building Graphs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Note that the graph we use in this paper is directed, connected, node-labeled and may contain cycles. (Hanneman and Lavie, 2009) . By contrast, dependency relations come from dependency structures which model syntactic and semantic relations between words. Phrases whose words are connected by dependency relations (also known as treelets) are linguistic-motivated and thus more reliable .", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 128, |
|
"text": "(Hanneman and Lavie, 2009)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building Graphs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "By combining these two relations together in graphs, we can make use of both continuous and linguistic-informed discontinuous phrases as long as they are connected subgraphs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building Graphs", |
|
"sec_num": "3.1" |
|
}, |
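
{

"text": "As a concrete illustration of this construction (our sketch, not the authors' code; the dep_heads input format is an assumption), the following builds a directed graph over word positions whose edge set is the union of bigram edges between adjacent words and dependency edges from head to dependent:\n\ndef build_graph(words, dep_heads):\n    # dep_heads[i] is the head position of word i, or -1 for the root\n    edges = set()\n    for i in range(len(words) - 1):\n        edges.add((i, i + 1))     # bigram relation between adjacent words\n    for i, head in enumerate(dep_heads):\n        if head >= 0:\n            edges.add((head, i))  # dependency relation: head -> dependent\n    return {'nodes': list(enumerate(words)), 'edges': edges}\n\n# Example with the sentence of Figure 2 (heads are invented for illustration):\ng = build_graph(['FIFA', 'Shijiebei', 'Zai', 'Nanfei', 'Chenggong', 'Juxing'],\n                [1, 5, 5, 2, 5, -1])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Building Graphs",

"sec_num": "3.1"

},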
|
{ |
|
"text": "Different from PB translation, the basic translation units in our model are subgraphs. Thus, during training, we extract subgraph-phrase pairs instead of phrase pairs on parallel graph-string sentences associated with word alignments. 1 An example of a translation rule is as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "FIFA Shijiebei Juxing FIFA World Cup was held Note that the source side of a rule in our model is a graph which can be used to cover either a continuous phrase or a discontinuous phrase according to its match in an input graph during decoding. The algorithm for extracting translation rules is shown in Algorithm 1. This algorithm traverses each phrase pair s, t , which is within a length limit and consistent with a given word alignment 1 Different from translation rules in conventional syntaxbased MT, rules in our model are not learned based on synchronous grammars and so non-terminals are disallowed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Algorithm 1: Algorithm for extracting translation rules from a graph-string pair.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Data: A word-aligned graph-string pair", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(G(S), T, A) Result: A set of translation pairs R 1 for each phrase t in T : | t |\u2264 L do 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "find the minimal (may be discontinuous) phrases in S so that |s |\u2264 L and s, t is consistent with A ; (lines 1-2), and outputs G(s), t ifs is covered by a connected subgraph G(s) (lines 6-8). A source phrase can be extended with unaligned source words which are adjacent to the phrase (lines 9-14). We use a queue Q to store all phrases which are consistently aligned to the same target phrase (line 3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
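
{

"text": "A hedged sketch of the two tests that drive rule extraction (helper names are ours): a phrase pair is kept only if it is consistent with the word alignment, and the source side is kept only if it is covered by a connected subgraph of the input graph (connectivity is checked here ignoring edge direction):\n\ndef consistent(src_set, tgt_set, alignment):\n    # alignment: set of (i, j) links between source and target positions\n    linked = False\n    for i, j in alignment:\n        if (i in src_set) != (j in tgt_set):\n            return False   # a link crosses the phrase-pair boundary\n        if i in src_set:\n            linked = True  # at least one link inside the pair\n    return linked\n\ndef connected(node_set, edges):\n    # BFS over undirected edges, restricted to node_set\n    node_set = set(node_set)\n    seen, stack = set(), [next(iter(node_set))]\n    while stack:\n        n = stack.pop()\n        if n in seen:\n            continue\n        seen.add(n)\n        for a, b in edges:\n            if a == n and b in node_set:\n                stack.append(b)\n            elif b == n and a in node_set:\n                stack.append(a)\n    return seen == node_set",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training",

"sec_num": "3.2"

},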
|
{ |
|
"text": "We define our model in the log-linear framework (Och and Ney, 2002) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 67, |
|
"text": "(Och and Ney, 2002)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Decoding", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "over a derivation D = r 1 r 2 \u2022 \u2022 \u2022 r N , as in Equation (3): p(D) \u221d i \u03c6 i (D) \u03bb i (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Decoding", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where r i are translation rules, \u03c6 i are features defined on derivations and \u03bb i are feature weights.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Decoding", |
|
"sec_num": "3.3" |
|
}, |
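
{

"text": "In log space, Equation (3) is simply a weighted sum of log feature values; a minimal sketch of scoring a derivation (feature names and containers are illustrative, not from the paper):\n\nimport math\n\ndef derivation_score(feature_values, weights):\n    # log p(D) = sum_i lambda_i * log phi_i(D), up to a normalizing constant\n    return sum(weights[name] * math.log(value)\n               for name, value in feature_values.items())\n\nscore = derivation_score({'p_trans': 0.2, 'p_lm': 0.01},\n                         {'p_trans': 0.3, 'p_lm': 0.5})",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model and Decoding",

"sec_num": "3.3"

},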
|
{ |
|
"text": "In our experiments, we use the standard 9 features: two translation probabilities p(G(s)|t) and p(t|G(s)), two lexical translation probabilities p lex (s|t) and p lex (t|s), a language model lm(t) over a translation t, a rule penalty, a word penalty, an unknown word penalty and a distortion feature d for distance-based reordering. The calculation of the distortion feature d in our S s2 s1 s2 s3 1 2 3 4 5 6 7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Decoding", |
|
"sec_num": "3.3" |
|
}, |
|
|
{ |
|
"text": "Figure 3: Distortion calculation for both continuous and discontinuous phrases in a derivation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Decoding", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": ". model is different from the one used in conventional PB models, as we need to take discontinuity into consideration. In this paper, we use a distortion function defined in Galley and Manning (2010) to penalize discontinuous phrases that have relatively long gaps. Figure 3 shows an example of calculating distortion for discontinuous phrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 199, |
|
"text": "Galley and Manning (2010)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 274, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model and Decoding", |
|
"sec_num": "3.3" |
|
}, |
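
{

"text": "The exact distortion function is the one of Galley and Manning (2010); we do not reproduce its definition here, but a common reading, consistent with the running totals shown in Figure 3, charges each contiguous piece of a phrase for its jump from the previously translated source position, so long gaps cost more. A sketch under that assumption:\n\ndef contiguous_runs(positions):\n    # split sorted source positions into maximal contiguous runs\n    runs, start = [], positions[0]\n    for a, b in zip(positions, positions[1:]):\n        if b != a + 1:\n            runs.append((start, a))\n            start = b\n    runs.append((start, positions[-1]))\n    return runs\n\ndef distortion(prev_last, positions):\n    # prev_last: last source position of the previous phrase;\n    # positions: sorted positions covered by the current phrase,\n    # possibly with gaps (a discontinuous phrase)\n    cost, last = 0, prev_last\n    for start, end in contiguous_runs(positions):\n        cost += abs(start - last - 1)  # jump to this piece\n        last = end\n    return cost",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model and Decoding",

"sec_num": "3.3"

},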
|
{ |
|
"text": "Our graph-based decoder is very similar to the PB decoder except that, in our decoder, each hypothesis is extended by translating an uncovered subgraph instead of a phrase. Positions covered by the subgraph are then marked as translated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Decoding", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Each derivation in our graph-based translation model implies a sequence of subgraphs (also called a segmentation). By default, similar to PB translation, our model treats each segmentation equally as shown in Equation (2). However, previous work on PB translation has suggested that such segmentations provide useful information which can improve translation performance. For example, boundary information in a phrase segmentation can be used for reordering models (Xiong et al., 2006; Cherry, 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 465, |
|
"end": 485, |
|
"text": "(Xiong et al., 2006;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 499, |
|
"text": "Cherry, 2013)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Segmentation Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this paper, we are interested in directly modeling the segmentation using information from graphs. By making the assumption that each subgraph is only dependent on previous subgraphs, we define a generative process over a graph segmentation as in Equation 4:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Segmentation Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "(4) p(G(s 1 ) \u2022 \u2022 \u2022 G(s I )) = I i=1 P (G(s i )|G(s 1 ) \u2022 \u2022 \u2022 G(s i\u22121 ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Segmentation Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Instead of training a stand-alone discriminative segmentation model to assign each subgraph a probability given previous subgraphs, we implement the model via sparse features, each of which is extracted at run-time during decoding and then directly added to the log-linear framework, so that these features can be tuned jointly with other features (of Section 3.3) to directly maximize the translation quality. Since a segmentation is obtained by breaking up the connectivity of an input graph, it is intuitive to use edges to model the segmentation. According to Equation (4), for a current subgraph G i , we only consider those edges which are either inside G i or connect G i with a previous subgraph. Based on these edges, we extract sparse features for each node in the subgraph. The set of sparse features is defined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Segmentation Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "n.w n.c \u00d7 n .w n .c \u00d7 \uf8f1 \uf8f2 \uf8f3 C P H \uf8fc \uf8fd \uf8fe \u00d7 in out", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Segmentation Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "where n.w and n.c are the word and class of the current node n, and n .w and n .c are the word and class of a node n connected to n. C, P , and H denote that the node n is in the current subgraph G i or the adjacent previous subgraph G i\u22121 or other previous subgraphs, respectively. Note that we treat the adjacent previous subgraph differently from others since information from the last previous unit is quite useful (Xiong et al., 2006; Cherry, 2013) . in and out denote that the edge is an incoming edge or outgoing edge for the current node n. Figure 4 shows an example of extracting sparse features for a subgraph. Inspired by success in using sparse features in SMT (Cherry, 2013) , in this paper we lexicalize only on the top-100 most frequent words. In addition, we group source words into 50 classes by using mkcls which should provide useful generalization (Cherry, 2013) for our model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 439, |
|
"text": "(Xiong et al., 2006;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 440, |
|
"end": 453, |
|
"text": "Cherry, 2013)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 673, |
|
"end": 687, |
|
"text": "(Cherry, 2013)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 882, |
|
"text": "(Cherry, 2013)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 549, |
|
"end": 557, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Graph Segmentation Model", |
|
"sec_num": "4" |
|
}, |
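
{

"text": "To make the feature template concrete, this sketch (our illustration; the node and segment data structures are assumptions) emits, for each node of the current subgraph, one feature string per incident edge, crossing the node's word or class with the neighbour's word or class, the neighbour's segment label (C, P, or H), and the edge direction, in the 'C:4 W:Nanfei C in' style of Figure 4:\n\ndef node_features(node, edges, segment_of, current, previous):\n    # segment_of maps a node to its subgraph index; current and previous\n    # are the indices of G_i and G_{i-1}; nodes carry .w (word) and .c (class)\n    feats = []\n    for src, tgt in edges:\n        if node not in (src, tgt):\n            continue\n        other = tgt if src == node else src\n        seg = segment_of.get(other)\n        if seg == current:\n            label = 'C'\n        elif seg == previous:\n            label = 'P'\n        elif seg is not None:\n            label = 'H'\n        else:\n            continue   # neighbour not yet covered by any subgraph\n        direction = 'in' if tgt == node else 'out'\n        for a in ('W:' + node.w, 'C:' + str(node.c)):\n            for b in ('W:' + other.w, 'C:' + str(other.c)):\n                feats.append(a + ' ' + b + ' ' + label + ' ' + direction)\n    return feats",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Graph Segmentation Model",

"sec_num": "4"

},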
|
{ |
|
"text": "We conduct experiments on Chinese-English (ZH-EN) and German-English (DE-EN) translation tasks. C:4 W:Nanfei C in C:4 W:Nanfei C out C:4 W:Shijiebei P out C:4 W:Juxing P in C:5 W:Zai C in C:5 W:Zai C out C:5 W:Chenggong C in C:6 W:Nanfei C out C:6 W:Juxing P in W:Zai C:5 C in W:Zai C:5 C out W:Zai C:3 P out W:Zai C:7 P in W:Nanfei C:4 C in W:Nanfei C:4 C out W:Nanfei C:6 C in W:Chenggong C:5 C out W:Chenggong C:7 P in C:4 C:5 C in C:4 C:5 C out C:4 C:3 P out C:4 C:7 P in C:5 C:4 C in C:5 C:4 C out C:5 C:6 C in C:6 C:5 C out C:6 C:7 P in Figure 4 : An illustration of extracting sparse features for each node in a subgraph during decoding. The decoder segments the graph in Figure 2 into three subgraphs (solid rectangles) and produces a complete translation by combining translations of each subgraph (dashed rectangles). In this figure, the class of a word is randomly assigned.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 543, |
|
"end": 551, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 679, |
|
"end": 687, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "2004 (MT04) and NIST 2005 (MT05) are two test sets used to evaluate the systems. The Stanford Chinese word segmenter (Chang et al., 2008 ) is used to segment Chinese sentences. The Stanford dependency parser (Chang et al., 2009 ) parses a Chinese sentence into a projective dependency tree which is then converted to a graph by adding bigram relations. The DE-EN training corpus is from WMT 2014, including Europarl V7 and News Commentary. News-Test 2011 (WMT11) is taken as a development set while News-Test 2012 (WMT12) and News-Test 2013 (WMT13) are test sets. We use mate-tools 2 to perform morphological analysis and parse German sentences (Bohnet, 2010) . Then, MaltParser 3 converts a parse result into a projective dependency tree (Nivre and Nilsson, 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 136, |
|
"text": "(Chang et al., 2008", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 227, |
|
"text": "(Chang et al., 2009", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 645, |
|
"end": 659, |
|
"text": "(Bohnet, 2010)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 739, |
|
"end": 764, |
|
"text": "(Nivre and Nilsson, 2005)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, we mainly report results from five systems under the same configuration. PBMT is built by the PB model in Moses (Koehn et al., 2007) . Treelet extends PBMT by taking treelets as the basic translation units . We implement a Treelet model in Moses which produces translations from left to right and uses beam search for decoding. DTU extends the PB model by allowing discontinuous phrases (Galley and Manning, 2010) . We implement DTU with source discontinuity in Moses. 4 GBMT is our basic graph-based translation system while GSM adds the graph segmentation model into GBMT. Both systems are implemented in Moses.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 147, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 428, |
|
"text": "(Galley and Manning, 2010)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 485, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Word alignment is performed by GIZA++ (Och and Ney, 2003) with the heuristic function growdiag-final-and. We use SRILM (Stolcke, 2002) to train a 5-gram language model on the Xinhua portion of the English Gigaword corpus 5th edition with modified Kneser-Ney discounting (Chen and Goodman, 1996) . Batch MIRA (Cherry and Foster, 2012) is used to tune weights. BLEU (Papineni et al., 2002) , METEOR (Denkowski and Lavie, 2011) , and TER (Snover et al., 2006) are used for evaluation. Each score is an average over three MIRA runs (Clark et al., 2011) . * means a system is significantly better than PBMT at p \u2264 0.01. Bold figures mean a system is significantly better than Treelet at p \u2264 0.01. + means a system is significantly better than DTU at p \u2264 0.01. In this table, we mark a system by comparing it with previous ones. Table 3 shows our evaluation results. We find that our GBMT system is significantly better than PBMT as measured by all three metrics across all test sets. Specifically, the improvements are up to +1.5/+0.5 BLEU, +0.3/+0.2 METEOR, and -0.8/-0.4 TER on ZH-EN and DE-EN, respectively. This improvement is reasonable as our system allows discontinuous phrases which can reduce data sparsity and handle long-distance relations (Galley and Manning, 2010). Another argument for discontinuous phrases is that they allow the decoder to use larger translation units which tend to produce better translations (Galley and Manning, 2010) . However, this argument was only verified on ZH-EN. Therefore, we are interested in seeing whether we have the same observation in our experiments on both language pairs. We count the used translation rules in MT02 and WMT11 based on different target lengths. The results are shown in Figure 5 . We find that both DTU and GBMT indeed tend to use larger translation units on ZH-EN. However, more smaller translation units are used on DE-EN. 5 We presume this is because long-distance reordering is performed more often on ZH-EN than on DE-EN. Based on the fact that the distortion function d measures the reordering distance, we find that the average distortion value in PB on ZH-EN MT02 is 18.4 and 5 We have the same finding on all test sets. 3.5 on DE-EN WMT11. Our observations suggest that the argument that discontinuous phrases allow decoders to use larger translation units should be considered with caution when we explain the benefit of discontinuity on different language pairs. Compared to PBMT, the Treelet system does not show consistent improvements. Our system achieves significantly better BLEU and METEOR scores than Treelet on both ZH-EN and DE-EN, and a better TER score on DE-EN. This suggests that continuous phrases are essential for system robustness since it helps to improve phrase coverage (Hanneman and Lavie, 2009) . Lower phrase coverage in Treelet results in more short phrases being used, as shown in Figure 5 . In addition, we find that both DTU and our systems do not achieve consistent improvements over Treelet in terms of TER. We observed that both DTU and our systems tend to produce longer translations than Treelet, which might cause unreliable TER evaluation in our experiments as TER favours shorter sentences (He and Way, 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 57, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 134, |
|
"text": "(Stolcke, 2002)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 294, |
|
"text": "(Chen and Goodman, 1996)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 333, |
|
"text": "(Cherry and Foster, 2012)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 387, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 424, |
|
"text": "(Denkowski and Lavie, 2011)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 456, |
|
"text": "(Snover et al., 2006)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 528, |
|
"end": 548, |
|
"text": "(Clark et al., 2011)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1422, |
|
"end": 1448, |
|
"text": "(Galley and Manning, 2010)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1890, |
|
"end": 1891, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2149, |
|
"end": 2150, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2766, |
|
"end": 2792, |
|
"text": "(Hanneman and Lavie, 2009)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 3201, |
|
"end": 3219, |
|
"text": "(He and Way, 2010)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 823, |
|
"end": 830, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 1735, |
|
"end": 1743, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 2882, |
|
"end": 2890, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Since discontinuous phrases produced by using syntactic information are fewer in number but more reliable (Koehn et al., 2003) , our GBMT system achieves comparable performance with DTU but uses significantly fewer rules, as shown in Table 4. After integrating the graph segmentation model to help subgraph selection, GBMT is further improved and the resulted system G2S has significantly better evaluation scores than DTU on both language pairs. However, our segmentation model is more helpful on DE-EN than ZH-EN. We find that the number of features learned on ZH-EN (25K+) is much less than on DE-EN (49K+). This may result in a lower feature coverage during decoding. The lower number of features in ZH-EN could be caused by the fact that the development set MT02 has many fewer sentences than WMT11. Accordingly, we suggest to use a larger development set during tuning to achieve better translation performance when the segmentation model is integrated. Our current model is more akin to addressing problems in phrase-based and treelet-based models by segmenting graphs into pieces rather than extracting a recursive grammar. Therefore, similar to those models, our model is weak at phrase reordering as well. However, we are interesting in the potential power of our model by incorporating lexical reordering (LR) models and comparing it with syntax-based models. Table 5 shows BLEU scores of the hierarchical phrase-based (HPB) system (Chiang, 2005) Table 5 : BLEU scores of a Moses hierarchical phrase-based system (HPB) and our system (GBMT) with a word-based lexical reordering model (LR).", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 126, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1443, |
|
"end": 1457, |
|
"text": "(Chiang, 2005)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1371, |
|
"end": 1378, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1458, |
|
"end": 1465, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "LR model (Koehn et al., 2005) . We find that the LR model significantly improves our system. GBMT+LR is comparable with the Moses HPB model on Chinese-English and better than HPB on German-English. Figure 6 shows three examples from MT04 to better explain the differences of each system. Example 1 shows that systems which allow discontinuous phrases (namely Treelet, DTU, GBMT, and GSM) successfully translate a Chinese collocation \"Yu . . . Wuguan\" to \"have nothing to do with\" while PBMT fails to catch the generalization since it only allows continuous phrases. In Example 2, Treelet translates a discontinuous phrase \"Dui . . . Zuofa\" (to . . . practice) only as \"to\" where an important target word \"practice\" is dropped. By contrast, bigram relations allow our systems (GBMT and GSM) to find a better phrase to translate: \"De Zuofa\" to \"of practice\". In addition, DTU translates a discontinuous phrase \"De Zuofa . . . Buman\" to \"dissatisfaction with the approach of\". However, the phrase is actually not Example 1 PBMT: the united states has indicated that the united states and north korea delegation has visited Treelet: the united states has indicated that it has nothing to do with the us delegation visited the north korea DTU: the united states has indicated that it has nothing to do with the us delegation visited north korea GBMT: the united states has indicated that it has nothing to do with the us delegation visited north korea GSM: the united states has indicated that it has nothing to do with the us delegation visited north korea PBMT: the united states government to brazil has repeatedly expressed its dissatisfaction .", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 29, |
|
"text": "(Koehn et al., 2005)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 206, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Treelet: the government of brazil to the united states has on many occasions expressed their discontent .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "DTU: the united states has repeatedly expressed its dissatisfaction with the approach of the government to brazil .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "GBMT: the us government has repeatedly expressed dissatisfaction with the practice of brazil .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "GSM: the us government has repeatedly expressed dissatisfaction with the practice of brazil . Figure 6 : Translation examples from MT04 produced by different systems. Each source sentence is annotated by dependency relations and additional bigram relations (dotted red edges). We also annotate phrase alignments produced by our system GSM.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 102, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Examples", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "linguistically motivated and could be unreliable. By disallowing phrases which are not connected in the input graph, GBMT and GSM produce better translations. Example 3 illustrates that our graph segmentation model helps to select better subgraphs. After obtaining a partial translation \"the government must\", GSM chooses to translate a subgraph which covers a discontinuous phrase \"Jixu . . . Zuo\" to \"continue to make\" while GBMT translates \"Jixu Yu\" (continue . . . with) to \"continue to work together with\". By selecting the proper subgraph to translate, GSM performs a better reordering on the translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Starting from sequence-based models, SMT has been benefiting increasingly from complex structures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Sequence-based MT: Since the breakthrough made by IBM on word-based models in the 1990s (Brown et al., 1993) , SMT has developed rapidly. The PB model (Koehn et al., 2003) advanced the state-of-the-art by translating multi-word units, which makes it better able to capture local phenomena. However, a major drawback in PBMT is that only continuous phrases are considered. Galley and Manning (2010) extend PBMT by allowing discontinuity. However, without linguistic structure information such as syntax trees, sequence-based models can learn a large amount of phrases which may be unreliable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 108, |
|
"text": "(Brown et al., 1993)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 171, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 397, |
|
"text": "Galley and Manning (2010)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Tree-based MT: Compared to sequences, trees provide recursive structures over sentences and can handle long-distance relations. Typically, trees used in SMT are either phrasal structures (Galley et al., 2004; Marcu et al., 2006) or dependency structures Xiong et al., 2007; Xie et al., 2011; Li et al., 2014) . However, conventional treebased models only use linguistically well-formed phrases. Although they are more reliable in theory, discarding all phrase pairs which are not linguistically motivated is an overly harsh decision. Therefore, exploring more translation rules usually can significantly improve translation performance (Marcu et al., 2006; DeNeefe et al., 2007; Mi et al., 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 208, |
|
"text": "(Galley et al., 2004;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 228, |
|
"text": "Marcu et al., 2006)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 273, |
|
"text": "Xiong et al., 2007;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 291, |
|
"text": "Xie et al., 2011;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 308, |
|
"text": "Li et al., 2014)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 656, |
|
"text": "(Marcu et al., 2006;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 657, |
|
"end": 678, |
|
"text": "DeNeefe et al., 2007;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 679, |
|
"end": 695, |
|
"text": "Mi et al., 2008)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Graph-based MT: Compared to sequences and trees, graphs are more general and can represent more relations between words. In recent years, graphs have been drawing quite a lot of attention from researchers. Jones et al. (2012) propose a hypergraph-based translation model where hypergraphs are taken as a meaning representation of sentences. However, large corpora with annotated hypergraphs are not readily available for MT. Li et al. (2015) use an edge replacement grammar to translate dependency graphs which are converted from dependency trees by labeling edges. However, their model only focuses on subgraphs which cover continuous phrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 225, |
|
"text": "Jones et al. (2012)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 441, |
|
"text": "Li et al. (2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this paper, we extend the conventional phrasebased translation model by allowing discontinuous phrases. We use graphs which combine bigram and dependency relations together as inputs and present a graph-based translation model. Experiments on Chinese-English and German-English show our model to be significantly better than the phrase-based model as well as other more sophisticated models. In addition, we present a graph segmentation model to explicitly guide the selection of subgraphs. In experiments, this model further improves our system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In the future, we will extend this model to allow discontinuity on target sides and explore the possibility of directly encoding reordering information in translation rules. We are also interested in using graphs for neural machine translation to see how it can translate and benefit from graphs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "http://code.google.com/p/mate-tools/ 3 http://www.maltparser.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The re-implementation of DTU in Moses makes it easier to meaningfully compare systems under the same settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For a fairer comparison, we disallow target discontinuity in HPB rules. This means that a non-terminal on the target side is either the first symbol or the last symbol.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research has received funding from the People Programme (Marie Curie Actions) of the European Union's Framework Programme (FP7/2007-2013) under REA grant agreement n o 317471 and the European Union's Horizon 2020 research and innovation programme under grant agreement n o 645452 (QT21). The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. The authors thank all anonymous reviewers for their insightful comments and suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Very High Accuracy and Fast Dependency Parsing is Not a Contradiction", |
|
"authors": [ |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Bohnet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--97", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernd Bohnet. 2010. Very High Accuracy and Fast Dependency Parsing is Not a Contradiction. In Pro- ceedings of the 23rd International Conference on Computational Linguistics, pages 89-97, Beijing, China, August.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation", |
|
"authors": [ |
|
{

"first": "Peter",

"middle": [

"F"

],

"last": "Brown",

"suffix": ""

},

{

"first": "Vincent",

"middle": [

"J",

"Della"

],

"last": "Pietra",

"suffix": ""

},

{

"first": "Stephen",

"middle": [

"A",

"Della"

],

"last": "Pietra",

"suffix": ""

},

{

"first": "Robert",

"middle": [

"L"

],

"last": "Mercer",

"suffix": ""

}
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The Mathe- matics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263- 311.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Optimizing Chinese Word Segmentation for Machine Translation Performance", |
|
"authors": [ |
|
{ |
|
"first": "Pi-Chuan", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Third Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "224--232", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese Word Seg- mentation for Machine Translation Performance. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 224-232, Columbus, Ohio, June.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Discriminative Reordering with Chinese Grammatical Relations Features", |
|
"authors": [ |
|
{ |
|
"first": "Pi-Chuan", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huihsin", |
|
"middle": [], |
|
"last": "Tseng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pi-Chuan Chang, Huihsin Tseng, Dan Jurafsky, and Christopher D. Manning. 2009. Discriminative Re- ordering with Chinese Grammatical Relations Fea- tures. In Proceedings of the Third Workshop on Syn- tax and Structure in Statistical Translation, pages 51-59, Boulder, Colorado, June.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "An Empirical Study of Smoothing Techniques for Language Modeling", |
|
"authors": [ |
|
{

"first": "Stanley",

"middle": [

"F"

],

"last": "Chen",

"suffix": ""

},

{

"first": "Joshua",

"middle": [],

"last": "Goodman",

"suffix": ""

}
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 34th Annual Meeting on Association for Computational Linguistics, ACL '96", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "310--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stanley F. Chen and Joshua Goodman. 1996. An Empirical Study of Smoothing Techniques for Lan- guage Modeling. In Proceedings of the 34th Annual Meeting on Association for Computational Linguis- tics, ACL '96, pages 310-318, Santa Cruz, Califor- nia, June.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Batch Tuning Strategies for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "427--436", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Cherry and George Foster. 2012. Batch Tun- ing Strategies for Statistical Machine Translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 427-436, Montreal, Canada, June.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Improved Reordering for Phrase-Based Translation using Sparse Features", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "22--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Cherry. 2013. Improved Reordering for Phrase- Based Translation using Sparse Features. In Pro- ceedings of the 2013 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 22-31, Atlanta, Georgia, June.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Hierarchical Phrase-based Model for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "263--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang. 2005. A Hierarchical Phrase-based Model for Statistical Machine Translation. In Pro- ceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 263-270, Ann Arbor, Michigan, June.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "176--181", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better Hypothesis Testing for Statistical Machine Translation: Controlling for Op- timizer Instability. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers -Volume 2, pages 176-181, Portland, Ore- gon, June.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "What Can Syntax-Based MT Learn from Phrase-Based MT?", |
|
"authors": [ |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Deneefe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steve DeNeefe, Kevin Knight, Wei Wang, and Daniel Marcu. 2007. What Can Syntax-Based MT Learn from Phrase-Based MT? In Proceedings of the 2007", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "755--763", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 755- 763, Prague, Czech Republic, June.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Denkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "85--91", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Denkowski and Alon Lavie. 2011. Me- teor 1.3: Automatic Metric for Reliable Optimiza- tion and Evaluation of Machine Translation Sys- tems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85-91, Ed- inburgh, Scotland, July.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Accurate Non-hierarchical Phrase-Based Translation", |
|
"authors": [ |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
 |
"first": "Christopher", |
 |
"middle": [ |
 |
"D" |
 |
], |
 |
"last": "Manning", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michel Galley and Christopher D. Manning. 2010. Accurate Non-hierarchical Phrase-Based Transla- tion. In Human Language Technologies: The 2010", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "966--974", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 966-974, Los Angeles, California, June.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "What's in a Translation Rule?", |
|
"authors": [ |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Hopkins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a Translation Rule? In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT- NAACL 2004, page 273280, Boston, Massachusetts, USA, May.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Decoding with Syntactic and Non-syntactic Phrases in a Syntaxbased Machine Translation System", |
|
"authors": [ |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Hanneman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greg Hanneman and Alon Lavie. 2009. Decoding with Syntactic and Non-syntactic Phrases in a Syntax- based Machine Translation System. In Proceed- ings of the Third Workshop on Syntax and Structure in Statistical Translation, pages 1-9, Boulder, Col- orado, June.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Metric and reference factors in minimum error rate training. Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Yifan", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "24", |
|
"issue": "", |
|
"pages": "27--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yifan He and Andy Way. 2010. Metric and refer- ence factors in minimum error rate training. Ma- chine Translation, 24(1):27-38.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Semantics-Based Machine Translation with Hyperedge Replacement Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Bevan", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Andreas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [ |
|
"Moritz" |
|
], |
|
"last": "Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1359--1376", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-Based Machine Translation with Hyper- edge Replacement Grammars. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Tech- nical Papers, pages 1359-1376, Mumbai, India, December.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Statistical Phrase-Based Translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "48--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Pro- ceedings of the 2003 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics on Human Language Technology -Vol- ume 1, pages 48-54, Edmonton, Canada, July.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amittai", |
|
"middle": [], |
|
"last": "Axelrod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Talbot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the International Workshop on Spoken Language Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "68--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Amittai Axelrod, Alexandra Birch, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. In Proceedings of the International Workshop on Spo- ken Language Translation 2005, pages 68-75, Pitts- burgh, PA, USA, October.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Moses: Open Source Toolkit for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ondej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Constantin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Herbst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondej Bojar, Alexan- dra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Trans- lation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177-180, Prague, Czech Republic, June.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press, New York, NY, USA, 1st edition.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Transformation and Decomposition for Efficiently Implementing and Improving Dependency-to-String Model In Moses", |
|
"authors": [ |
|
{ |
|
"first": "Liangyou", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liangyou Li, Jun Xie, Andy Way, and Qun Liu. 2014. Transformation and Decomposition for Efficiently Implementing and Improving Dependency-to-String Model In Moses. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, October.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Dependency Graph-to-String Translation", |
|
"authors": [ |
|
{ |
|
"first": "Liangyou", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liangyou Li, Andy Way, and Qun Liu. 2015. De- pendency Graph-to-String Translation. In Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, September.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Search-Aware Tuning for Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Lemao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1942--1952", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lemao Liu and Liang Huang. 2014. Search-Aware Tuning for Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1942- 1952, Doha, Qatar, October.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Treeto-string Alignment Template for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shouxun", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "609--616", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree- to-string Alignment Template for Statistical Ma- chine Translation. In Proceedings of the 21st Inter- national Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 609-616, Sydney, Australia, July.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "SPMT: Statistical Machine Translation with Syntactified Target Language Phrases", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdessamad", |
|
"middle": [], |
|
"last": "Echihabi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "44--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. SPMT: Statistical Ma- chine Translation with Syntactified Target Language Phrases. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Process- ing, pages 44-52, Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Dependency Treelet Translation: The Convergence of Statistical and Example-Based Machine-translation?", |
|
"authors": [ |
|
{ |
|
"first": "Arul", |
|
"middle": [], |
|
"last": "Menezes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Workshop on Example-based Machine Translation at MT Summit X", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arul Menezes and Chris Quirk. 2005. Dependency Treelet Translation: The Convergence of Statistical and Example-Based Machine-translation? In Pro- ceedings of the Workshop on Example-based Ma- chine Translation at MT Summit X, September.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Forest-Based Translation", |
|
"authors": [ |
|
{ |
|
"first": "Haitao", |
|
"middle": [], |
|
"last": "Mi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "192--199", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haitao Mi, Liang Huang, and Qun Liu. 2008. Forest- Based Translation. In Proceedings of the 46th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 192-199, Columbus, Ohio, USA, June.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Pseudo-Projective Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Nilsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "99--106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre and Jens Nilsson. 2005. Pseudo- Projective Dependency Parsing. In Proceedings of the 43rd Annual Meeting on Association for Com- putational Linguistics, pages 99-106, Ann Arbor, Michigan, June.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Discriminative Training and Maximum Entropy Models for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
 |
"first": "Franz", |
 |
"middle": [ |
 |
"Josef" |
 |
], |
 |
"last": "Och", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Hermann", |
 |
"middle": [], |
 |
"last": "Ney", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "295--302", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimina- tive Training and Maximum Entropy Models for Sta- tistical Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics, pages 295-302, Philadelphia, PA, USA, July.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A Systematic Comparison of Various Statistical Alignment Models", |
|
"authors": [ |
|
{ |
 |
"first": "Franz", |
 |
"middle": [ |
 |
"Josef" |
 |
], |
 |
"last": "Och", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Hermann", |
 |
"middle": [], |
 |
"last": "Ney", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "19--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A System- atic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19-51, March.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "The Alignment Template Approach to Statistical Machine Translation", |
|
"authors": [ |
|
{ |
 |
"first": "Franz", |
 |
"middle": [ |
 |
"Josef" |
 |
], |
 |
"last": "Och", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Hermann", |
 |
"middle": [], |
 |
"last": "Ney", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2004, |
|
"venue": "Computational Linguistics", |
|
"volume": "30", |
|
"issue": "4", |
|
"pages": "417--449", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2004. The Align- ment Template Approach to Statistical Machine Translation. Computational Linguistics, 30(4):417- 449, December.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "BLEU: A Method for Automatic Evaluation of Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, July.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Dependency Treelet Translation: Syntactically Informed Phrasal SMT", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arul", |
|
"middle": [], |
|
"last": "Menezes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "271--279", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency Treelet Translation: Syntactically In- formed Phrasal SMT. In Proceedings of the 43rd Annual Meeting of the Association for Computa- tional Linguistics (ACL'05), pages 271-279, Ann Arbor, Michigan, June.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "A Study of Translation Edit Rate with Targeted Human Annotation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Snover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Micciulla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Makhoul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of Association for Machine Translation in the Americas", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "223--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of Association for Machine Translation in the Amer- icas, pages 223-231, Cambridge, Massachusetts, USA, August.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "SRILM An Extensible Language Modeling Toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the International Conference Spoken Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "901--904", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Stolcke. 2002. SRILM An Extensible Lan- guage Modeling Toolkit. In Proceedings of the In- ternational Conference Spoken Language Process- ing, pages 901-904, Denver, CO.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Binarizing Syntax Trees to Improve Syntax-Based Machine Translation Accuracy", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "746--754", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Wang, Kevin Knight, and Daniel Marcu. 2007. Binarizing Syntax Trees to Improve Syntax-Based Machine Translation Accuracy. In Proceedings of the 2007 Joint Conference on Empirical Meth- ods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP- CoNLL), pages 746-754, Prague, Czech Republic, June.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "A Novel Dependency-to-string Model for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haitao", |
|
"middle": [], |
|
"last": "Mi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "216--226", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jun Xie, Haitao Mi, and Qun Liu. 2011. A Novel Dependency-to-string Model for Statistical Machine Translation. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing, pages 216-226, Edinburgh, United Kingdom, July.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Deyi", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shouxun", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "521--528", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Max- imum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the As- sociation for Computational Linguistics, pages 521- 528, Sydney, Australia, July.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "A Dependency Treelet String Correspondence Model for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Deyi", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shouxun", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deyi Xiong, Qun Liu, and Shouxun Lin. 2007. A De- pendency Treelet String Correspondence Model for Statistical Machine Translation. In Proceedings of the Second Workshop on Statistical Machine Trans- lation, pages 40-47, Prague, June.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "word s i adjacent tos do 11s = extends with s i ;", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Phrase Length Histogram for MT02 and WMT11.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"text": "The number of sentences in our corpora.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td/><td/><td/><td/><td>held</td></tr><tr><td/><td/><td/><td/><td>Juxing</td></tr><tr><td colspan=\"2\">FIFA</td><td>World Cup</td><td>in</td><td>successfully</td></tr><tr><td colspan=\"2\">FIFA</td><td>Shijiebei</td><td>Zai</td><td>Chenggong</td></tr><tr><td>2010</td><td/><td/><td colspan=\"2\">South Africa</td></tr><tr><td>2010Nian</td><td/><td>r2</td><td/><td>Nanfei</td></tr><tr><td>r1</td><td/><td/><td/><td>r3</td></tr><tr><td>2010</td><td colspan=\"4\">FIFA World Cup was held</td><td>successfully in South Africa</td></tr><tr><td>Sparse features for r3:</td><td/><td/><td/></tr><tr><td>W:Zai W:Nanfei C in</td><td/><td/><td/></tr><tr><td>W:Zai W:Nanfei C out</td><td/><td/><td/></tr><tr><td>W:Zai W:Shijiebei P out</td><td/><td/><td/></tr><tr><td>W:Zai W:Juxing P in</td><td/><td/><td/></tr><tr><td>W:Nanfei W:Zai C in</td><td/><td/><td/></tr><tr><td>W:Nanfei W:Zai C out</td><td/><td/><td/></tr><tr><td colspan=\"2\">W:Nanfei W:Chenggong C in</td><td/><td/></tr><tr><td colspan=\"2\">W:Chenggong W:Nanfei C out</td><td/><td/></tr><tr><td colspan=\"2\">W:Chenggong W:Juxing P in</td><td/><td/></tr></table>", |
|
"text": "provides a summary of our corpra. Our ZH-EN training corpus contains 1.5M+ sentences from LDC. NIST 2002 (MT02) is taken as a development set to tune weights, and NIST", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"content": "<table/>", |
|
"text": "The number of rules in DTU and GBMT.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"content": "<table><tr><td colspan=\"2\">american government</td><td>said</td><td colspan=\"2\">with</td><td>visit</td><td colspan=\"2\">north korea</td><td>of</td><td>american</td><td>delegation</td><td>no tie</td></tr><tr><td>Meiguo</td><td>Zhengfu</td><td>Biaoshi</td><td>Yu</td><td/><td>Zoufang</td><td>BeiHan</td><td/><td>De</td><td>Meiguo</td><td>Daibiaotuan Wuguan</td></tr><tr><td colspan=\"4\">the united states has indicated that it</td><td colspan=\"3\">has nothing to do with</td><td colspan=\"3\">the us delegation</td><td>visited</td><td>north korea</td></tr><tr><td>Example 2</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"text": "REF:the american government said that it has nothing to do with the american delegation to visit north korea", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF10": { |
|
"content": "<table><tr><td>US</td><td colspan=\"2\">government</td><td>to</td><td>Brazil</td><td>of</td><td colspan=\"7\">practice already many times express dissatisfaction .</td></tr><tr><td>Meiguo</td><td>Zhengfu</td><td/><td>Dui</td><td>Baxi</td><td>De</td><td>Zuofa</td><td>Yijing</td><td>Duo</td><td>Ci</td><td>Biaoshi</td><td/><td>Buman</td><td>.</td></tr><tr><td colspan=\"3\">the us government</td><td colspan=\"2\">has repeatedly</td><td colspan=\"4\">expressed dissatisfaction with</td><td colspan=\"2\">the practice of</td><td>brazil</td><td>.</td></tr><tr><td>Example 3</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">PBMT: the govern-</td><td colspan=\"3\">Treelet: the govern-</td><td/><td colspan=\"2\">DTU: the govern-</td><td colspan=\"3\">GBMT: the govern-</td><td colspan=\"2\">GSM: the govern-</td></tr><tr><td colspan=\"2\">ment and all sectors</td><td colspan=\"3\">ment must continue</td><td/><td colspan=\"2\">ment must continue</td><td colspan=\"3\">ment must continue</td><td colspan=\"2\">ment must continue</td></tr><tr><td colspan=\"2\">of society should</td><td colspan=\"4\">to make in-depth dis-</td><td colspan=\"2\">to work together with</td><td colspan=\"3\">to work together with</td><td colspan=\"2\">to make in-depth dis-</td></tr><tr><td colspan=\"2\">continue to explore</td><td colspan=\"4\">cussions with various</td><td colspan=\"2\">various sectors of the</td><td colspan=\"3\">various sectors of the</td><td colspan=\"2\">cussions with various</td></tr><tr><td colspan=\"2\">in depth and draw on</td><td colspan=\"3\">sectors of the com-</td><td/><td colspan=\"2\">community to make</td><td colspan=\"3\">community in-depth</td><td colspan=\"2\">sectors of the com-</td></tr><tr><td colspan=\"2\">collective wisdom .</td><td colspan=\"3\">munity and the col-</td><td/><td colspan=\"2\">an in-depth study and</td><td colspan=\"3\">study and draw on</td><td colspan=\"2\">munity and draw on</td></tr><tr><td/><td/><td colspan=\"3\">lective wisdom .</td><td/><td colspan=\"2\">draw on collective</td><td colspan=\"3\">collective wisdom .</td><td colspan=\"2\">collective wisdom .</td></tr><tr><td/><td/><td/><td/><td/><td/><td>wisdom .</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"3\">Zhengfu Wubi Jixu</td><td colspan=\"3\">Yu Shehui Ge</td><td>Jie</td><td colspan=\"4\">Zuo Shengru Taolun , Jisi</td><td/><td>Guangyi</td><td>.</td></tr><tr><td colspan=\"9\">the government must continue to make in-depth discussions with</td><td colspan=\"2\">various sectors of the community</td><td colspan=\"2\">and draw on collective wisdom .</td></tr></table>", |
|
"text": "REF:the us government has expressed their resentment against this practice of brazil on many occasions .REF:the government must continue to hold thorough discussions with all walks of life to pool the wisdom of the masses . government must continue with society each community make in-depth discussion , draw collective wisdom .", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |