|
{ |
|
"paper_id": "C10-1009", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:59:10.909654Z" |
|
}, |
|
"title": "Fluency Constraints for Minimum Bayes-Risk Decoding of Statistical Machine Translation Lattices", |
|
"authors": [ |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Blackwood", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Adri\u00e0", |
|
"middle": [], |
|
"last": "De Gispert", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "A novel and robust approach to improving statistical machine translation fluency is developed within a minimum Bayesrisk decoding framework. By segmenting translation lattices according to confidence measures over the maximum likelihood translation hypothesis we are able to focus on regions with potential translation errors. Hypothesis space constraints based on monolingual coverage are applied to the low confidence regions to improve overall translation fluency.", |
|
"pdf_parse": { |
|
"paper_id": "C10-1009", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "A novel and robust approach to improving statistical machine translation fluency is developed within a minimum Bayesrisk decoding framework. By segmenting translation lattices according to confidence measures over the maximum likelihood translation hypothesis we are able to focus on regions with potential translation errors. Hypothesis space constraints based on monolingual coverage are applied to the low confidence regions to improve overall translation fluency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Translation quality is often described in terms of fluency and adequacy. Fluency reflects the 'nativeness' of the translation while adequacy indicates how well a translation captures the meaning of the original text (Ma and Cieri, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 236, |
|
"text": "(Ma and Cieri, 2006)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "From a purely utilitarian view, adequacy should be more important than fluency. But fluency and adequacy are subjective and not easy to tease apart (Callison-Burch et al., 2009; Vilar et al., 2007) . There is a human tendency to rate less fluent translations as less adequate. One explanation is that errors in grammar cause readers to be more critical. A related phenomenon is that the nature of translation errors changes as fluency improves so that any errors in fluent translations must be relatively subtle. It is therefore not enough to focus solely on adequacy. SMT systems must also be fluent if they are to be accepted and trusted. It is possible that the reliance on automatic metrics may have led SMT researchers to pay insufficient attention to fluency: BLEU (Papineni et al., 2002) , TER (Snover et al., 2006) , and METEOR (Lavie and Denkowski, 2009) show broad correlation with human rankings of MT quality, but are incapable of fine distinctions between fluency and adequacy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 177, |
|
"text": "(Callison-Burch et al., 2009;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 197, |
|
"text": "Vilar et al., 2007)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 771, |
|
"end": 794, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 801, |
|
"end": 822, |
|
"text": "(Snover et al., 2006)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 836, |
|
"end": 863, |
|
"text": "(Lavie and Denkowski, 2009)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There is concern that the fluency of current SMT is inadequate (Knight, 2007b) . SMT is robust, in that a translation is nearly always produced. But unlike translators who should be skilled in at least one of the languages, SMT systems are limited in both source and target language competence. Fluency and accuracy therefore tend to suffer together as translation quality degrades. This should not be the case. Ideally, an SMT system should never be any less fluent than the best stochastic text generation system available in the target language (Oberlander and Brew, 2000) . What is needed is a good way to enhance the fluency of SMT hypotheses.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 78, |
|
"text": "(Knight, 2007b)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 575, |
|
"text": "(Oberlander and Brew, 2000)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The maximum likelihood (ML) formulation (Brown et al., 1990) of translation of source language sentence F to target language sentence\u00ca", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 60, |
|
"text": "(Brown et al., 1990)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E = argmax E P (F |E)P (E)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "makes it clear why improving SMT fluency is a difficult modelling problem. The language model P (E), the closest thing to a 'fluency component' in the original formulation, only affects candidates likely under the translation model P (F |E). Given the weakness of current translation models this is a severe limitation. It often happens that SMT systems assign P (F |\u0112) = 0 to a correct reference translation\u0112 of F (see the discussion in Section 9). The problem is that in ML decoding the language model can only encourage the production of fluent translations; it cannot easily enforce constraints on fluency or introduce new hypotheses. In Hiero (Chiang, 2007) and syntax-based SMT (Knight and Graehl, 2005; Knight, 2007a) , the primary role of syntax is to drive the translation process. Translations produced by these systems respect the syntax of their translation models, but this does not force them to be grammatical in the way that a typical human sentence is grammatical; they produce many translations which are not fluent. The problem is robustness. Generating fluent translations demands a tightly constraining target language grammar but such a grammar is at odds with broad-coverage parsing needed for robust translation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 648, |
|
"end": 662, |
|
"text": "(Chiang, 2007)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 684, |
|
"end": 709, |
|
"text": "(Knight and Graehl, 2005;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 710, |
|
"end": 724, |
|
"text": "Knight, 2007a)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We have described two problems in translation fluency: (1) SMT may fail to generate fluent hypotheses and there is no simple way to introduce them into the search; (2) SMT produces many translations which are not fluent but enforcing constraints to improve fluency can hurt robustness. Both problems are rooted in the ML decoding framework in which robustness and fluency are conflicting objectives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose a novel framework to improve the fluency of any SMT system, whether syntactic or phrase-based. We will perform Minimum Bayesrisk search (Kumar and Byrne, 2004) over a space of fluent hypotheses H:", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 170, |
|
"text": "(Kumar and Byrne, 2004)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E MBR = argmin E \u2032 \u2208H E\u2208E L(E, E \u2032 )P (E|F )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
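To make the decision rule in Equation (2) concrete, the following is a minimal sketch, assuming the evidence space is available as a k-best list of (hypothesis, posterior) pairs and the hypothesis space is a plain list of candidates; the loss here is word-level edit distance, a stand-in for the linearised BLEU loss the paper actually uses, and all names and values are illustrative only.

```python
# Sketch only: k-best approximation of Equation (2); the paper operates on lattices.

def edit_distance(a, b):
    """Word-level Levenshtein distance between token lists a and b (the loss L)."""
    d = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, wb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (wa != wb))
    return d[len(b)]

def mbr_decode(hypothesis_space, evidence_space):
    """Return the candidate minimising expected loss under the evidence posteriors."""
    def expected_loss(candidate):
        return sum(p * edit_distance(candidate, e) for e, p in evidence_space)
    return min(hypothesis_space, key=expected_loss)

# Toy usage: the hypothesis space here simply reuses the evidence strings.
evidence = [("the reactor produces plutonium".split(), 0.6),
            ("the reactor produce plutonium".split(), 0.4)]
print(" ".join(mbr_decode([e for e, _ in evidence], evidence)))
```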
|
{ |
|
"text": "In this approach the MBR evidence space E is generated by an SMT system as a k-best list or lattice. The system runs in its best possible configuration, ensuring both translation robustness and good baselines. Rather than decoding in the output of the baseline SMT system, translations will be sought among a collection of fluent sentences that are close to the top SMT hypotheses as determined by the loss function L(E, E \u2032 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Decoupling the MBR hypothesis space from first-pass translation offers great flexibility. Hypotheses in H may be arbitrarily constrained according to lexical, syntactic, semantic, or other considerations, with no effect on translation robustness. This is because constraints on fluency do not affect the production of the evidence space by the baseline system. Robustness and fluency are no longer conflicting objectives. This framework also allows the MBR hypothesis space to be augmented with hypotheses produced by an NLG system, although this is beyond the scope of the present paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper focuses on searching out fluent strings amongst the vast number of hypotheses encoded in SMT lattices. Oracle BLEU scores computed over k-best lists (Och et al., 2004) show that many high quality hypotheses are produced by first-pass SMT decoding. We propose reducing the difficulty of enhancing the fluency of complete hypotheses by first identifying regions of highconfidence in the ML translations and using these to guide the fluency refinement process. This has two advantages: (1) we keep portions of the baseline hypotheses that we trust and search for alternatives elsewhere, and (2) the task is made much easier since the fluency of sentence fragments can be refined in context.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 178, |
|
"text": "(Och et al., 2004)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In what follows, we use posterior probabilities over SMT lattices to identify useful subsequences in the ML translations (Sections 2 & 3). These subsequences drive the segmentation and transformation of lattices into smaller subproblems (Sections 4 & 5). Subproblems are mined for fluent strings (Section 6), resulting in improved translation fluency (Sections 7 & 8). Our results show that, when guided by the careful selection of subproblems, fluency can be improved with no real degradation of the BLEU score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The formulation of the MBR decoder in Equation (2) separates the hypothesis space from the evidence space. We apply the linearised lattice MBR decision rule (Tromble et al., 2008) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 179, |
|
"text": "(Tromble et al., 2008)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lattice MBR Decoding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "E LMBR = argmax E \u2032 \u2208H \u03b8 0 |E \u2032 |+ u\u2208N \u03b8 u # u (E \u2032 )p(u|E) , (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lattice MBR Decoding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where H is the hypothesis space, E is the evidence space, N is the set of all n-grams in H (typically, n = 1 . . . 4), and \u03b8 are constants estimated on held-out data. The quantity p(u|E) is the path posterior probability of n-gram u", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lattice MBR Decoding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(u|E) = E\u2208Eu P (E|F ),", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Lattice MBR Decoding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where E u = {E \u2208 E : # u (E) > 0} is the subset of paths containing n-gram u at least once. The path posterior probabilities p(u|E) of Equation (4) can be efficiently calculated (Blackwood et al., 2010) using general purpose WFST operations (Mohri et al., 2002) . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 202, |
|
"text": "(Blackwood et al., 2010)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 261, |
|
"text": "(Mohri et al., 2002)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lattice MBR Decoding", |
|
"sec_num": "2" |
|
}, |
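A small sketch of how Equations (3) and (4) fit together, assuming the lattice is approximated by a k-best list with normalised posteriors; the paper computes p(u|E) exactly on the lattice with WFST operations (Blackwood et al., 2010), and the theta values below are placeholders rather than the tuned factors of Tromble et al. (2008).

```python
from collections import Counter
from itertools import chain

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_posteriors(evidence, max_order=4):
    """p(u|E): total posterior mass of paths containing n-gram u at least once."""
    post = Counter()
    for tokens, p in evidence:
        seen = set(chain.from_iterable(ngrams(tokens, n) for n in range(1, max_order + 1)))
        for u in seen:
            post[u] += p
    return post

def lmbr_score(candidate, post, theta):
    """Linearised gain: theta_0*|E'| + sum over n-grams u of theta_|u| * #_u(E') * p(u|E)."""
    score = theta[0] * len(candidate)
    for n in range(1, len(theta)):
        for u, count in Counter(ngrams(candidate, n)).items():
            score += theta[n] * count * post.get(u, 0.0)
    return score

# Toy usage with an invented 3-best list; theta values are illustrative only.
evidence = [("police arrested the suspect".split(), 0.5),
            ("police arrest the suspect".split(), 0.3),
            ("the police arrested suspect".split(), 0.2)]
post = ngram_posteriors(evidence)
theta = [-0.1, 0.3, 0.2, 0.15, 0.1]
best = max((e for e, _ in evidence), key=lambda c: lmbr_score(c, post, theta))
print(" ".join(best))
```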
|
{ |
|
"text": "In the formulation of Equations (3) and (4) the path posterior n-gram probabilities play a crucial role. MBR decoding under the linear approximation to BLEU is driven mainly by the presence of high posterior n-grams in the lattice; the low posterior n-grams contribute relatively little to the MBR decision criterion. Here we investigate the predictive power of these statistics. We will show that the n-gram posterior is a good predictor as to whether or not an n-gram is to be found in a set of reference translations. Let N n denote the set of n-grams of order n in the ML hypothesis\u00ca, and let R n denote the set of n-grams of order n in the union of the references. For confidence threshold \u03b2, let N n,\u03b2 = {u \u2208 N n : p(u|E) \u2265 \u03b2} denote the n-grams in N n with posterior probability greater than or equal to \u03b2, where p(u|E) is computed using Equation (4). This is equivalent to identifying all substrings of length n in the translation hypotheses for which the system assigns a posterior probability of \u03b2 or higher. The precision at order n for threshold \u03b2 is the proportion of n-grams in N n,\u03b2 also present in the references:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Probability Confidence Measures", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P n,\u03b2 = |R n \u2229 N n,\u03b2 | |N n,\u03b2 |", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Posterior Probability Confidence Measures", |
|
"sec_num": "3" |
|
}, |
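The statistic in Equation (5) is straightforward to compute per sentence; a sketch follows, assuming posteriors is a dictionary of p(u|E) values keyed by n-gram tuples (as in the previous sketch) and references is a list of tokenised reference translations.

```python
def precision_at_threshold(hyp_tokens, posteriors, references, n, beta):
    """|R_n intersect N_{n,beta}| / |N_{n,beta}| for one sentence."""
    hyp_ngrams = {tuple(hyp_tokens[i:i + n]) for i in range(len(hyp_tokens) - n + 1)}
    high_conf = {u for u in hyp_ngrams if posteriors.get(u, 0.0) >= beta}
    if not high_conf:
        return None                      # undefined when N_{n,beta} is empty
    ref_ngrams = set()
    for ref in references:
        ref_ngrams.update(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    return len(high_conf & ref_ngrams) / len(high_conf)
```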
|
{ |
|
"text": "The left plot in Figure 1 shows average persentence n-gram precisions P n,\u03b2 at orders 1. . .4 for an Arabic\u2192English translation task at a range of thresholds 0 \u2264 \u03b2 \u2264 1. Sentence start and end tokens are ignored when computing unigram precisions. We note that precision at all orders improves as the threshold \u03b2 increases. This confirms that these intrinsic measures of translation confidence have strong predictive power.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 25, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Posterior Probability Confidence Measures", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The right-hand side of the figure shows the average number of n-grams per sentence for the same range of \u03b2. We see that for high \u03b2, there are few n-grams with p(u|E) \u2265 \u03b2; this is as expected. However, even at a high threshold of \u03b2 = 0.9 there are still on average three 4-grams per sentence with posterior probabilities that exceed \u03b2. Even at this very high confidence level, high posterior n-grams occur frequently enough that we can expect them to be useful.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Probability Confidence Measures", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "These precision results motivate our use of path posterior n-gram probabilities as a confidence measure. We assign confidence p(\u00ca j i |E) to sub-sequences\u00ca i . . .\u00ca j of the ML hypothesis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Probability Confidence Measures", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Prior work focuses on word-level confidence extracted from k-best lists and lattices (Ueffing and Ney, 2007) , while Zens and Ney (2006) rescore k-best lists with n-gram posterior probabilities. Similar experiments with a slightly different motivation are reported by DeNero et al. (2009) ; they show that expected n-gram counts in a lattice can be used to predict which n-grams appear in the references.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 108, |
|
"text": "(Ueffing and Ney, 2007)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 117, |
|
"end": 136, |
|
"text": "Zens and Ney (2006)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 268, |
|
"end": 288, |
|
"text": "DeNero et al. (2009)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Probability Confidence Measures", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We have shown that current SMT systems, although flawed, can identify with confidence par-the newspaper \" constitution \" quoted brigadier abdullah krishan , the chief of police in karak governorate ( 521 km south @-@ west of amman ) as saying that the seizure took place after police received information that there were attempts by the group to sell for more than $ 100 thousand dollars , the police rushed to the arrest in possession . tial hypotheses that can be trusted. We wish to constrain MBR decoding to include these trusted partial hypotheses but allow decoding to consider alternatives in regions of low confidence. In this way we aim to improve the best possible output of the best available systems. We use the path posterior n-gram probabilities of Equation (4) to segment lattice E into regions of high and low confidence. As shown in the example of Figure 2 , the lattice segmentation process is performed relative to the ML hypothesis\u00ca, i.e. relative to the best path through E.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 865, |
|
"end": 873, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Lattice Segmentation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For confidence threshold \u03b2, we find all 4-grams u =\u00ca i , . . . ,\u00ca i+3 in the ML translation for which p(u|E) > \u03b2. We then segment\u00ca into regions of high and low confidence where the high confidence regions are identified by consecutive, overlapping high confidence 4-grams. The high confidence regions are contiguous strings of words for which there is consensus amongst the translations in the lattice. If we trust the path posterior n-gram probabilities, any hypothesised translation should include these high confidence substrings. This approach differs from simple posterior-based pruning in that we discard paths, rather than words or n-grams, which are not consistent with highconfidence regions of the ML hypothesis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lattice Segmentation", |
|
"sec_num": "4" |
|
}, |
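A sketch of the segmentation step just described, assuming 4-gram posteriors p(u|E) are available as a dictionary: words covered by any 4-gram whose posterior exceeds beta form the high confidence regions, and the uncovered spans become the low confidence regions handed to the sublattice extraction.

```python
def segment_hypothesis(tokens, posteriors, beta, order=4):
    """Return a list of (is_high_confidence, (start, end)) spans covering tokens."""
    covered = [False] * len(tokens)
    for i in range(len(tokens) - order + 1):
        u = tuple(tokens[i:i + order])
        if posteriors.get(u, 0.0) > beta:
            for k in range(i, i + order):       # overlapping high confidence 4-grams merge
                covered[k] = True
    spans, start = [], 0
    for i in range(1, len(tokens) + 1):
        if i == len(tokens) or covered[i] != covered[start]:
            spans.append((covered[start], (start, i)))
            start = i
    return spans
```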
|
{ |
|
"text": "The hypothesis string\u00ca is in this way segmented into R alternating subsequences of high and low confidence. The segment boundaries are i r and j r so that\u00ca jr ir is either a high confidence or a low confidence subsequence. Each subsequence is associated with an unweighted subspace H r ; this subspace has the form of a string for high confidence regions and the form of a lattice for low confidence regions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lattice Segmentation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "If the r th segment is a high confidence region then H r accepts only the string\u00ca jr ir . If the r th segment is a region of low confidence, then H r is built to accept relevant substrings from E. It is constructed as follows. The r th low confidence region\u00ca jr ir has a high confidence left context\u00ea r\u22121 and a high confidence right context\u00ea r+1 formed from subsequences of the ML translation hypoth-esis\u00ca as\u00ea", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lattice Segmentation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "r\u22121 =\u00ca j r\u22121 i r\u22121 ,\u00ea r+1 =\u00ca j r+1 i r+1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lattice Segmentation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Note that when r = 1 the left context\u00ea r\u22121 is the empty string and when r = R the right context e r+1 is the empty string. We build a transducer T r for the regular expression /. * \u00ea r\u22121 (. * )\u00ea r+1 . * /\\1/. 1 Composition with E yields H r = E \u2022T r , so that H r contains all the reasonable alternatives t\u00f4 E jr ir in E consistent with the high confidence left and right contexts\u00ea r\u22121 and\u00ea r+1 . If H r is aligned to a high confidence subsequence of\u00ca, we call it a string region since it contains a single path; if it is aligned to a low confidence region it is a lattice and we call it a sublattice region. The series of high and low confidence subspace regions H 1 , . . . , H R defines the lattice segmentation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lattice Segmentation", |
|
"sec_num": "4" |
|
}, |
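The transducer T_r implements a context-anchored extraction; a rough k-best analogue is sketched below, assuming both contexts are non-empty strings (the first and last regions of a sentence would use an empty context instead), with the capture group playing the role of the \1 rewrite. The example strings are taken from the Figure 4 fragments quoted later in the paper.

```python
import re

def sublattice_region(kbest, left_context, right_context):
    """Collect the alternatives found between the high confidence contexts."""
    pattern = re.compile(
        ".*" + re.escape(left_context) + "(.*)" + re.escape(right_context) + ".*")
    alternatives = set()
    for hyp in kbest:
        m = pattern.match(hyp)
        if m:
            alternatives.add(m.group(1).strip())
    return alternatives

kbest = ["revision of the constitution of the japanese public , which dates back",
         "revision of the constitution of japan , which dates back"]
print(sublattice_region(kbest, "revision of the constitution", ", which dates back"))
```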
|
{ |
|
"text": "We now describe a general framework for improving the fluency of the MBR hypothesis space.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Space Construction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The segmentation of the lattice described in Section 4 considerably simplifies the problem of improving the fluency of its hypotheses since each region of low confidence may be considered independently. The low confidence regions can be transformed one-by-one and then reassembled to form a new MBR hypothesis space.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Space Construction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In order to transform the hypothesis region H r it is important to know the context in which it occurs, i.e. the sequences of words that form its prefix and suffix. Some transformations might need only a short context; others may need a sentencelevel context, i.e. the full sequence of ML word\u015d E j r\u22121 1 and\u00ca N i r+1 to the left and right of the region H r that is to be transformed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Space Construction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To put this formally, each low confidence sublattice region is transformed by the application of some function \u03a8:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Space Construction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "H r \u2190 \u03a8(\u00ca j r\u22121 1 , H r ,\u00ca N i r+1 )", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Hypothesis Space Construction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The hypothesis space is then constructed from the concatenation of high confidence string and transformed low confidence sublattice regions", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Space Construction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "H = E \u2022 1\u2264r\u2264R H r", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Hypothesis Space Construction", |
|
"sec_num": "5" |
|
}, |
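A sketch of Equation (7) over strings, assuming each region H_r is represented as a set of alternative substrings (a singleton set for high confidence string regions); intersecting the concatenations with the original hypotheses mimics the composition with the lattice E that discards newly created paths. The region contents reuse the earlier sublattice sketch.

```python
from itertools import product

def build_hypothesis_space(regions, original_hypotheses):
    """regions: list of sets of strings, in sentence order."""
    candidates = {" ".join(filter(None, parts)) for parts in product(*regions)}
    return candidates & set(original_hypotheses)   # stand-in for composition with E

regions = [{"revision of the constitution"},            # high confidence string region
           {"of the japanese public", "of japan"},      # low confidence sublattice region
           {", which dates back"}]                      # high confidence string region
originals = ["revision of the constitution of the japanese public , which dates back",
             "revision of the constitution of japan , which dates back"]
print(build_hypothesis_space(regions, originals))
```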
|
{ |
|
"text": "The composition with the original lattice E discards any new hypotheses that might be created via the unconstrained concatenation of strings from the H r . It may be that in some circumstances the introduction of new paths is good, but in what follows we test the ability to improve fluency by searching among existing hypotheses, and this ensures that nothing new is introduced.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Space Construction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "If no new hypotheses are introduced by the operations \u03a8, the size of the hypothesis space H is determined by the posterior probability threshold \u03b2. Only the ML hypothesis remains at \u03b2 = 0, since all its subsequences are of high confidence, i.e. can be covered by n-grams with non-zero path posterior probability. At the other extreme, for \u03b2 = 1, it follows that H = E and no paths are removed, since any string regions created are formed from subsequences that occur on every path in E. We can therefore use \u03b2 to tighten or relax constraints on the LMBR hypothesis space. At \u03b2 = 0, LMBR returns only the ML hypothesis; at \u03b2 = 1, LMBR is done over the full translation lattice. This is shown in Table 1 , where the BLEU score approaches the BLEU score of unconstrained LMBR as \u03b2 increases.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 694, |
|
"end": 701, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Size of the Hypothesis Space", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note also that the size of the resulting hypothesis space is the product of the number of sequences in the sublattice regions. For Figure 2 at \u03b2 = 0.8, this product is \u223c5.4 billion hypotheses. Even for fairly aggressive constraints on the hypothesis space, many hypotheses remain.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 139, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Size of the Hypothesis Space", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This section describes one implementation of the transformation function \u03a8 that we will show leads to improved fluency of machine translation output. This transformation is based on n-gram coverage in a large target language text collection: where possible, we filter the sublattice regions so that they contain only long-span n-grams observed in the text. Our motivation is that large monolingual text collections are good guides to fluency. If a hypothesis is composed entirely of previously seen high order n-grams, it is likely to be fluent and should be favoured.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Monolingual Coverage Constraints", |
|
"sec_num": "6" |
|
}, |
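A sketch of the coverage constraint, assuming S is a set of order-n word tuples harvested from the monolingual collection; the paper realises the same test as a coverage acceptor C_n composed with the context-padded sublattice X_r, keeping only zero-cost (fully covered) paths and leaving the region untouched when none exist.

```python
def coverage_filter(candidates, left_ctx, right_ctx, S, n=5):
    """Keep candidates fully covered by order-n n-grams from S; else keep all."""
    def fully_covered(text):
        # Pad with n-1 words of high confidence context, as X_r = L_r . H_r . R_r does.
        tokens = left_ctx.split()[-(n - 1):] + text.split() + right_ctx.split()[:n - 1]
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        return all(g in S for g in grams)
    covered = [c for c in candidates if fully_covered(c)]
    return covered if covered else list(candidates)   # Psi returns H_r unchanged if no coverage
```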
|
{ |
|
"text": "Initial attempts to identify fluent hypotheses in sublattice regions by ranking according to n-gram LM scores were ineffective. Figure 3 shows the difficulties. We see that both the 4-gram Kneser-Ney and 5-gram stupid-backoff language models Figure 4 : Improved fluency through the application of monolingual coverage constraints to the hypothesis space in MBR decoding of NIST MT 08 Arabic\u2192English newswire lattices. of the odd numbered sentences of the MT02-MT05 testsets; the even numbered sentences form test. MT08 performance on nw08 (newswire) and ng08 (newsgroup) data is also reported.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 136, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 250, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Monolingual Coverage Constraints", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "First-pass translation is performed using HiFST (Iglesias et al., 2009) , a hierarchical phrase-based decoder. The first-pass LM is a modified Kneser-Ney (Kneser and Ney, 1995) 4-gram estimated over the English side of the parallel text and an 881M word subset of the English GigaWord 3rd Edition. Prior to LMBR, the first-pass lattices are rescored with zero-cutoff stupid-backoff 5-gram language models (Brants et al., 2007) estimated over more than 6B words of English text. The LMBR factors \u03b8 0 , . . . , \u03b8 4 are set as in Tromble et al. (2008) using unigram precision p = 0.85 and recall ratio r = 0.74.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 71, |
|
"text": "(Iglesias et al., 2009)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 176, |
|
"text": "Kneser-Ney (Kneser and Ney, 1995)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 426, |
|
"text": "(Brants et al., 2007)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 527, |
|
"end": 548, |
|
"text": "Tromble et al. (2008)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Monolingual Coverage Constraints", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The effect of performing LMBR over the segmented hypothesis space is shown in Table 1 . The hypothesis subspaces H r are constructed at various confidence thresholds as described in Section 4 with H formed via Equation (7); no coverage constraints are applied yet. Constraining the search space using \u03b2 = 0.6 leads to little degradation in LMBR performance under BLEU. This shows lattice segmentation works as intended.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 85, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Monolingual Coverage Constraints", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We next investigate the effect of monolingual coverage constraints on BLEU. We build acceptors C n as described in Section 6 with S consisting of all n-grams in the English GigaWord. At \u03b2 = 0.6 we found 181 sentences with sublattices H r spanned by maximum order n-grams from S, i.e. for which X r \u2022 C n have paths with cost 0; these are filtered as described. LMBR over these coverage-constrained sublattices is denoted LMBR+CC. On nw08 the BLEU score for LMBR+CC is 52.0 which is +0.7 over the ML decoder and only -0. has little impact on BLEU. At this value of \u03b2, 116 of the 813 nw08 sentences have a low confidence region (1) completely covered by 5-grams, and (2) within which the ML hypothesis and the LMBR+CC hypothesis differ. It is these regions which we will inspect for improved fluency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Monolingual Coverage Constraints", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We asked 17 native speakers to judge the fluency of sentence fragments from nw08. We compared hypotheses from the ML and the LMBR+CC decoders. Each fragment consisted of the partial translation hypothesis from a low confidence region together with its left and right high confidence contexts (examples given in Figure 4 ). For each sample, judges were asked: \"Could this fragment occur in a fluent sentence?\"", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 319, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Fluency Evaluation", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The results are shown in Table 2 . Most of the time, the ML and LMBR+CC sentence fragments were both judged to be fluent; it often happened that they differed by only a single noun or verb substitution which didn't affect fluency. In a small number of cases, both ML and LMBR+CC were judged to be disfluent. We are most interested in the 'off-diagonal' cases. In cases when one system was judged to be fluent and the other was not, LMBR+CC was preferred about twice as often as the ML baseline (26.9% to 9.7%). In other words, the monolingual fluency constraints were judged to have improved the fluency of the low confidence region more than twice as often as a fluent hypothesis was made disfluent. Some examples of improved fluency are shown in Figure 4 . Although both the ML and unconstrained LMBR hypotheses might satisfy adequacy, they lack the fluency of the LMBR+CC hypotheses generated using monolingual fluency constraints.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 32, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 748, |
|
"end": 756, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Fluency Evaluation", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We have described a general framework for improving SMT fluency. Decoupling the hypothesis space from the evidence space allows for much greater flexibility in lattice MBR search.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary and Discussion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We have shown that high path posterior probability n-grams in the ML translation can be used to guide the segmentation of a lattice into regions of high and low confidence. Segmenting the lattice simplifies the process of refining the hypothesis space since low confidence regions can be refined in the context of their high confidence neighbours. This can be done independently before reassembling the refined regions. Lattice segmentation facilitates the application of post-processing and rescoring techniques targeted to address particular deficiencies in ML decoding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary and Discussion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "The techniques we presented are related to consensus decoding and system combination for SMT (Matusov et al., 2006; Sim et al., 2007) , and to segmental MBR for automatic speech recognition (Goel et al., 2004) . Mohit et al. (2009) describe an alternative approach to improving specific portions of translation hypotheses. They use an SVM classifier to identify a single phrase in each source language sentence that is \"difficult to translate\"; such phrases are then translated using an adapted language model estimated from parallel data. In contrast to their approach, our approach is able to exploit large collections of monolingual data to refine multiple low confidence regions using posterior probabilities obtained from a high-quality evidence space of first-pass translations. Reachability tune 2075 15% test 2040 14% nw08 813 11% ng08 547 9% Table 3 : Arabic\u2192English reference reachability.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 115, |
|
"text": "(Matusov et al., 2006;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 116, |
|
"end": 133, |
|
"text": "Sim et al., 2007)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 209, |
|
"text": "(Goel et al., 2004)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 231, |
|
"text": "Mohit et al. (2009)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 785, |
|
"end": 853, |
|
"text": "Reachability tune 2075 15% test 2040 14% nw08 813 11% ng08", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 861, |
|
"end": 868, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Summary and Discussion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We applied hypothesis space constraints based on monolingual coverage to low confidence regions resulting in improved fluency with no real degradation in BLEU score relative to unconstrained LMBR decoding. This approach is limited by the coverage of sublattices using monolingual text. We expect this to improve with larger text collections or in tightly focused scenarios where in-domain text is less diverse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Testset Sentences", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "However, fluency will be best improved by integrating more sophisticated natural language generation. NLG systems capable of generating sentence fragments in context can be incorporated directly into this framework. If the MBR hypothesis space H contains a generated hypothesis\u0112 for which P (F |\u0112) = 0,\u0112 could still be produced as a translation, since it can be 'voted for' by nearby hypotheses produced by the underlying system. Table 3 shows the proportion of NIST testset sentences that can be aligned to any of the reference translations using our high quality baseline hierarchical decoder with a powerful grammar. The low level of reachability suggests that NLG may be required to achieve high levels of translation quality and fluency. Other rescoring approaches (Kumar et al., 2009; Li et al., 2009) may also benefit from NLG when the baseline is incapable of generating the reference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 770, |
|
"end": 790, |
|
"text": "(Kumar et al., 2009;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 791, |
|
"end": 807, |
|
"text": "Li et al., 2009)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 430, |
|
"end": 437, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Testset Sentences", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We note that our approach could also be used to improve the fluency of ASR, OCR and other language processing tasks where the goal is to produce fluent natural language output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Testset Sentences", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this notation parentheses indicate string matches so that /. * y(a * )w. * /\\1/ applied to xyaaawzz yields aaa.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Matt Gibson and the human judges who participated in the evaluation. This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-C-0022 and the European Union Seventh Framework Programme (FP7-ICT-2009-4) under Grant Agreement No. 247762.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Translation hypothesis E and n-gram orders used by the LM to score each word Score 4g <s>1 the2 reactor3 produces3 plutonium2 needed2 to3 manufacture4 atomic3 bomb2 .3 </s>4 -22.59 <s>1 the2 reactor3 produces3 plutonium2 needed2 to3 manufacture4 the4 atomic2 bomb3 .4 </s>4 -23.61 5g<s>1 the2 reactor3 produces4 plutonium5 needed3 to3 manufacture4 atomic5 bomb2 .3 </s>4 -16.04 <s>1 the2 reactor3 produces4 plutonium5 needed3 to3 manufacture4 the4 atomic4 bomb5 .4 </s>5 -17.96Figure 3: Scores and n-gram orders for hypotheses using 4-gram Kneser-Ney and 5-gram stupidbackoff (estimated from 1.1B and 6.6B tokens, resp.) LMs. Low confidence regions are in italics.favour the shorter but disfluent hypothesis; normalising by length was not effective. However, the stupid-backoff LM has better coverage and the backing-off behaviour is a clue to the presence of disfluency. Similar cues have been observed in ASR analysis (Chase, 1997) . The shorter hypothesis backs off to a bigram for \"atomic bomb\", whereas the longer hypothesis covers the same words with 4-grams and 5-grams. We therefore disregard the language model scores and focus on n-gram coverage. This is an example where robustness and fluency are at odds. The n-gram models are robust, but often favour less fluent hypotheses.Let S denote the set of all n-grams in the monolingual training data. To identify partial hypotheses in sublattice regions that have complete monolingual coverage at the maximum order n, we build a coverage acceptor C n with a similar form to the WFST representation of an n-gram backoff language model (Allauzen et al., 2003) . C n assigns a penalty to every n-gram not found in S. In C n word arcs have no cost and backoff arcs are assigned a fixed cost of 1. Firstly, arcs from the start state are added for each unigram w \u2208 N 1 :and target word w n , arcs are addedif u has order n and h + = w n 1 if u has order less than n. Backoff arcs are added for each u aswhere h \u2212 = w n\u22121 2 if u has order > 2, and bigrams backoff to the null history start state \u2205.For each sublattice region H r , we wish to penalise each path proportionally to the number of its n-grams not found in the monolingual text collection S. We wish to do this in context, so that we include the effect of the neighbouring high confidence regions H r\u22121 and H r+1 . Given that we are counting n-grams at order n we form the left context machine L r which accepts the last n \u2212 1 words in H r\u22121 ; similarly, R r accepts the first n \u2212 1 words of H r+1 . The concatenation X r = L r \u2297 H r \u2297 R r represents the partial translation hypotheses in H r padded with n \u2212 1 words of left and right context from the neighbouring high confidence regions. Composing X r \u2022 C n assigns each partial hypothesis a cost equal to the number of times it was necessary to back off to lower order n-grams while reading the string. Partial hypotheses with cost 0 did not back off at all and contain only maximum order n-grams.In the following experiments, we look at each X n \u2022 C n and if there are paths with cost 0, only these are kept and all others discarded. We introduce this as a constraint on the hypothesis space which we will evaluate for improvement on fluency. Here the transformation function \u03a8 returns H r as X r \u2022 C n after pruning. 
If X r \u2022 C n has no zero cost paths, the transformation function \u03a8 returns H r as we find it, since there is not enough monolingual coverage to guide the selection of fluent hypotheses. After applying monolingual coverage constraints to each region, the modified hypothesis space used for MBR search is formed by concatenation using Equation (7).We note that C n is a simplistic NLG system. It generates strings by concatenating n-grams found in S. We do not allow it to run 'open loop' in these experiments, but instead use it to find the strings in X r with good n-gram coverage.", |
|
"cite_spans": [ |
|
{ |
|
"start": 920, |
|
"end": 933, |
|
"text": "(Chase, 1997)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1591, |
|
"end": 1614, |
|
"text": "(Allauzen et al., 2003)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LM", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The effect of fluency constraints on LMBR decoding is evaluated in the context of the NIST Arabic\u2192English MT task. The set tune consists", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LMBR Over Segmented Lattices", |
|
"sec_num": "7" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "view , especially with the open chinese economy to the world and", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ml", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "ML ... view , especially with the open chinese economy to the world and ...", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "view , especially with the open chinese economy to the world and", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "+lmbr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "+LMBR ... view , especially with the open chinese economy to the world and ...", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "view , especially with the opening of the chinese economy to the world and", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "+lmbr+cc", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "+LMBR+CC ... view , especially with the opening of the chinese economy to the world and ...", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "revision of the constitution of the japanese public , which dates back", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ml", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "ML ... revision of the constitution of the japanese public , which dates back ...", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "revision of the constitution of the japanese public , which dates back", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "+lmbr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "+LMBR ... revision of the constitution of the japanese public , which dates back ...", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "revision of the constitution of japan , which dates back", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "+lmbr+cc", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "+LMBR+CC ... revision of the constitution of japan , which dates back ...", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Generalized algorithms for constructing statistical language models", |
|
"authors": [ |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "References Allauzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehryar", |
|
"middle": [], |
|
"last": "Mohri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "References Allauzen, Cyril, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing statistical lan- guage models. In Proceedings of ACL 2003.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Efficient path counting transducers for minimum Bayes-risk decoding of statistical machine translation lattices", |
|
"authors": [ |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Blackwood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adri\u00e0", |
|
"middle": [], |
|
"last": "De Gispert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blackwood, Graeme, Adri\u00e0 de Gispert, and William Byrne. 2010. Efficient path counting transducers for minimum Bayes-risk decoding of statistical machine translation lat- tices. In Proceedings of ACL 2010.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Large language models in machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Brants", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashok", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Popat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brants, Thorsten, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in ma- chine translation. In Proceedings of the EMNLP 2007.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A statistical approach to machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Cocke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"A Della" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"J Della" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fredrick", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Roossin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Computational Linguistics", |
|
"volume": "16", |
|
"issue": "2", |
|
"pages": "79--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brown, Peter F., John Cocke, Stephen A. Della Pietra, Vin- cent J. Della Pietra, Fredrick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A sta- tistical approach to machine translation. Computational Linguistics, 16(2):79-85.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Findings of the 2009 Workshop on Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josh", |
|
"middle": [], |
|
"last": "Schroeder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Callison-Burch, Chris, Philipp Koehn, Christof Monz, and Josh Schroeder. 2009. Findings of the 2009 Workshop on Statistical Machine Translation. In WMT 2009.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Error-responsive feedback mechanisms for speech recognizers", |
|
"authors": [ |
|
{ |
|
"first": "Lin", |
|
"middle": [], |
|
"last": "Chase", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lawrance", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chase, Lin Lawrance. 1997. Error-responsive feed- back mechanisms for speech recognizers, Ph.D. Thesis, Carnegie Mellon University.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Hierarchical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "2", |
|
"pages": "201--228", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chiang, David. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Fast consensus decoding over translation forests", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Denero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of ACL-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "DeNero, John, David Chiang, and Kevin Knight. 2009. Fast consensus decoding over translation forests. In Proceed- ings of ACL-IJCNLP 2009.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Segmental minimum Bayes-risk decoding for automatic speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Goel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "IEEE Transactions on Speech and Audio Processing", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "234--249", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goel, V., S. Kumar, and W. Byrne. 2004. Segmental mini- mum Bayes-risk decoding for automatic speech recogni- tion. IEEE Transactions on Speech and Audio Process- ing, 12:234-249.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Hierarchical phrase-based translation with weighted finite state transducers", |
|
"authors": [ |
|
{ |
|
"first": "Gonzalo", |
|
"middle": [], |
|
"last": "Iglesias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adri\u00e0", |
|
"middle": [], |
|
"last": "De Gispert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduardo", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Banga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Annual Conference of the NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iglesias, Gonzalo, Adri\u00e0 de Gispert, Eduardo R. Banga, and William Byrne. 2009. Hierarchical phrase-based trans- lation with weighted finite state transducers. In Proceed- ings of the 2009 Annual Conference of the NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Improved backing-off for m-gram language modeling", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kneser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Acoustics, Speech, and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kneser, R. and H. Ney. 1995. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "An overview of probabilistic tree transducers for natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Graehl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of CICLING 2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Knight, K and J Graehl. 2005. An overview of probabilis- tic tree transducers for natural language processing. In Proceedings of CICLING 2005.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Capturing practical natural language transformations. Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Knight, K. 2007a. Capturing practical natural language transformations. Machine Translation, 21(2).", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Automatic language translation generation help needs badly", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "MT Summit XI Workshop on Using Corpora for NLG: Keynote Address", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Knight, Kevin. 2007b. Automatic language translation gen- eration help needs badly. In MT Summit XI Workshop on Using Corpora for NLG: Keynote Address.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Minimum Bayes-risk decoding for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Shankar", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kumar, Shankar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In NAACL 2004.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Efficient minimum error rate training and minimum bayes-risk decoding for translation hypergraphs and lattices", |
|
"authors": [ |
|
{ |
|
"first": "Shankar", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of ACL-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kumar, Shankar, Wolfgang Macherey, Chris Dyer, and Franz Och. 2009. Efficient minimum error rate training and minimum bayes-risk decoding for translation hypergraphs and lattices. In Proceedings of ACL-IJCNLP 2009.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The ME-TEOR metric for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Denkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Machine Translation Journal", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lavie, Alon and Michael J. Denkowski. 2009. The ME- TEOR metric for automatic evaluation of machine trans- lation. Machine Translation Journal.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Variational decoding for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Zhifei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Khudanpur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of ACL-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li, Zhifei, Jason Eisner, and Sanjeev Khudanpur. 2009. Variational decoding for statistical machine translation. In Proceedings of ACL-IJCNLP 2009.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Corpus support for machine translation at LDC", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoyi", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Cieri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ma, Xiaoyi and Christopher Cieri. 2006. Corpus support for machine translation at LDC. In LREC 2006.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Computing consensus translation from multiple machine translation systems using enhanced hypotheses alignment", |
|
"authors": [ |
|
{ |
|
"first": "Evgeny", |
|
"middle": [], |
|
"last": "Matusov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Ueffing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "11th Conference of the EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matusov, Evgeny, Nicola Ueffing, and Hermann Ney. 2006. Computing consensus translation from multiple machine translation systems using enhanced hypotheses align- ment. In 11th Conference of the EACL.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Language model adaptation for difficult-to-translate phrases", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Mohit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Liberato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Hwa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 13th Annual Conference of the EAMT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohit, B., F. Liberato, and R. Hwa. 2009. Language model adaptation for difficult-to-translate phrases. In Proceed- ings of the 13th Annual Conference of the EAMT.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Weighted finite-state transducers in speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Mehryar", |
|
"middle": [], |
|
"last": "Mohri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Riley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "CSL", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "69--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohri, Mehryar, Fernando Pereira, and Michael Riley. 2002. Weighted finite-state transducers in speech recognition. In CSL, volume 16, pages 69-88.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Stochastic text generation", |
|
"authors": [ |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Oberlander", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "In Philosophical Transactions of the Royal Society", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oberlander, Jon and Chris Brew. 2000. Stochastic text gen- eration. In Philosophical Transactions of the Royal Soci- ety.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "A smorgasbord of features for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Khudanpur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Fraser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Eng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the HLT Conference of the NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Och, F., D. Gildea, S. Khudanpur, A. Sarkar, K. Yamada, A. Fraser, S. Kumar, L. Shen, D. Smith, K. Eng, V. Jain, Z. Jin, and D. Radev. 2004. A smorgasbord of features for statistical machine translation. In Proceedings of the HLT Conference of the NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "BLEU: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Consensus network decoding for statistical machine translation system combination", |
|
"authors": [ |
|
{ |
|
"first": "K.-C", |
|
"middle": [], |
|
"last": "Sim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Gales", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Sahbi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Woodland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sim, K.-C., W. Byrne, M. Gales, H. Sahbi, and P.C. Wood- land. 2007. Consensus network decoding for statisti- cal machine translation system combination. In ICASSP 2007.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "A study of translation edit rate with targeted human annotation", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Snover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linnea", |
|
"middle": [], |
|
"last": "Micciulla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Makhoul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of AMTA", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Snover, Matthew, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, , and John Makhoul. 2006. A study of trans- lation edit rate with targeted human annotation. In Pro- ceedings of AMTA.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Lattice minimum Bayes-risk decoding for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Tromble", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shankar", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 Conference on EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tromble, Roy, Shankar Kumar, Franz Och, and Wolfgang Macherey. 2008. Lattice minimum Bayes-risk decoding for statistical machine translation. In Proceedings of the 2008 Conference on EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Word-level confidence estimation for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Ueffing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "1", |
|
"pages": "9--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ueffing, Nicola and Hermann Ney. 2007. Word-level confi- dence estimation for machine translation. Computational Linguistics, 33(1):9-40.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Human evaluation of machine translation through binary system comparisons", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Vilar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Leusch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Banchs", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of WMT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vilar, D, G Leusch, H Ney, and R Banchs. 2007. Human evaluation of machine translation through binary system comparisons. In Proceedings of WMT 2007.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "N -gram posterior probabilities for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of WMT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zens, Richard and Hermann Ney. 2006. N -gram posterior probabilities for statistical machine translation. In Pro- ceedings of WMT 2006.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Average n-gram precisions (left) and counts (right) for 2075 sentences of NIST Arabic\u2192English ML translations at a range of posterior probability thresholds 0 \u2264 \u03b2 \u2264 1. The left plot shows at \u03b2 = 0 the n-gram precisions used in the BLEU score of the ML baseline system." |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "ML translation\u00ca, word lattice E, and decomposition as a sequence of four string and five sublattice regions H 1 . . . H 9 using n-gram posterior probability threshold p(u|E)\u22650.8." |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td/><td/><td>tune</td><td>test</td><td>nw08 ng08</td></tr><tr><td/><td>ML</td><td colspan=\"2\">54.2 53.8</td><td>51.3</td><td>36.3</td></tr><tr><td/><td colspan=\"3\">0.0 54.2 53.8</td><td>51.3</td><td>36.3</td></tr><tr><td/><td colspan=\"3\">0.2 54.3 53.8</td><td>51.3</td><td>36.3</td></tr><tr><td>\u03b2</td><td colspan=\"3\">0.4 54.6 54.2 0.6 54.9 54.4</td><td>51.6 52.1</td><td>36.7 36.6</td></tr><tr><td/><td colspan=\"3\">0.8 54.9 54.4</td><td>52.1</td><td>36.6</td></tr><tr><td/><td colspan=\"3\">1.0 54.9 54.4</td><td>52.2</td><td>36.7</td></tr><tr><td colspan=\"2\">LMBR</td><td colspan=\"2\">54.9 54.4</td><td>52.2</td><td>36.8</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "2 BLEU below unconstrained LMBR decoding. Done in this way, constraining hypotheses to have 5-grams from the GigaWord", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "BLEU scores for ML hypotheses and LMBR decoding in H over 0 \u2264 \u03b2 \u2264 1.", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Partial hypothesis fluency judgements.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |