diff --git "a/test.csv" "b/test.csv" --- "a/test.csv" +++ "b/test.csv" @@ -1,28 +1,28 @@ summary_id,paper_id,source_sid,target_sid,source_text,target_text,target_doc,strategy -C00-2123,C00-2123,8,194,The approach has been successfully tested on the 8 000-word Verbmobil task.,The approach has been successfully tested on the 8 000-word Verbmobil task.,"['Word Re-ordering and DP-based Search in Statistical Machine Translation', 'In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).', 'Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.', 'A search restriction especially useful for the translation direction from German to English is presented.', 'The experimental tests are carried out on the Verbmobil task (German-English, 8000-word vocabulary), which is a limited-domain spoken-language task.', 'The goal of machine translation is the translation of a text given in some source language into a target language.', 'We are given a source string f_1^J = f_1 ... f_j ... f_J of length J, which is to be translated into a target string e_1^I = e_1 ... e_i ... e_I of length I. Among all possible target strings, we will choose the string with the highest probability: ê_1^I = argmax_{e_1^I} {Pr(e_1^I | f_1^J)} = argmax_{e_1^I} {Pr(e_1^I) · Pr(f_1^J | e_1^I)} (1). The argmax operation denotes the search problem, i.e.
the generation of the output sentence in the target language.', 'Pr(e_1^I) is the language model of the target language, whereas Pr(f_1^J | e_1^I) is the translation model.', 'Our approach uses word-to-word dependencies between source and target words.', 'The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000).', 'These alignment models are similar to the concept of hidden Markov models (HMM) in speech recognition.', 'The alignment mapping is j → i = a_j from source position j to target position i = a_j. The use of this alignment model raises major problems if a source word has to be aligned to several target words, e.g. when translating German compound nouns.', 'A simple extension will be used to handle this problem.', 'In Section 2, we briefly review our approach to statistical machine translation.', 'In Section 3, we introduce our novel concept for word reordering and a DP-based search, which is especially suitable for the translation direction from German to English.', 'This approach is compared to another reordering scheme presented in (Berger et al., 1996).', 'In Section 4, we present the performance measures used and give translation results on the Verbmobil task.', 'In this section, we briefly review our translation approach.', 'In Eq. (1), Pr(e_1^I) is the language model, which is a trigram language model in this case.', 'For the translation model Pr(f_1^J | e_1^I), we go on the assumption that each source word is aligned to exactly one target word.', 'The alignment model uses two kinds of parameters: alignment probabilities p(a_j | a_{j-1}, I, J), where the probability of alignment a_j for position j depends on the previous alignment position a_{j-1} (Ney et al., 2000), and lexicon probabilities p(f_j | e_{a_j}).', 'When aligning the words in parallel texts (for language pairs like Spanish-English, French-English, Italian-German, ...), we typically observe a strong
localization effect.', 'In many cases, there is an even stronger restriction: over large portions of the source string, the alignment is monotone.', '2.1 Inverted Alignments.', 'To explicitly handle the word reordering between words in source and target language, we use the concept of the so-called inverted alignments as given in (Ney et al., 2000).', 'An inverted alignment is defined as follows: inverted alignment: i → j = b_i. Target positions i are mapped to source positions b_i.', ""What is important and is not expressed by the notation is the so-called coverage constraint: each source position j should be 'hit' exactly once by the path of the inverted alignment b_1^I = b_1 ... b_i ... b_I. Using the inverted alignments in the maximum approximation, we obtain as search criterion: max_I { p(J|I) · max_{e_1^I} [ ∏_{i=1}^I p(e_i | e_{i-1}, e_{i-2}) · max_{b_1^I} ∏_{i=1}^I p(b_i | b_{i-1}, I, J) · p(f_{b_i} | e_i) ] } = max_I { p(J|I) · max_{e_1^I, b_1^I} ∏_{i=1}^I p(e_i | e_{i-1}, e_{i-2}) · p(b_i | b_{i-1}, I, J) · p(f_{b_i} | e_i) }, where the two products over i have been merged into a single product over i. p(e_i | e_{i-1}, e_{i-2}) is the trigram language model probability."", 'The inverted alignment probability p(b_i | b_{i-1}, I, J) and the lexicon probability p(f_{b_i} | e_i) are obtained by relative frequency estimates from the Viterbi alignment path after the final training iteration.', 'The details are given in (Och and Ney, 2000).', 'The sentence length probability p(J|I) is omitted without any loss in performance.', 'For the inverted alignment probability p(b_i | b_{i-1}, I, J), we drop the dependence on the target sentence length I. 2.2 Word Joining.', ""The baseline alignment model does not permit that a source word is aligned to two or more target words, e.g.
for the translation direction from German to English, the German compound noun 'Zahnarzttermin' causes problems, because it must be translated by the two target words dentist's appointment."", 'We use a solution to this problem similar to the one presented in (Och et al., 1999), where target words are joined during training.', 'The word joining is done on the basis of a likelihood criterion.', 'An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.', ""E.g. when 'Zahnarzttermin' is aligned to dentist's, the extended lexicon model might learn that 'Zahnarzttermin' actually has to be aligned to both dentist's and appointment."", 'In the following, we assume that this word joining has been carried out.', ""Figure 1: Reordering for the German verb group (source: 'In diesem Fall kann mein Kollege am vierten Mai nicht besuchen Sie'; target: 'In this case my colleague can not visit you on the fourth of May')."", 'In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962).', 'The traveling salesman problem is an optimization problem which is defined as follows: given are a set of cities S = s_1, ..., s_n and for each pair of cities s_i, s_j the cost d_ij > 0 for traveling from city s_i to city s_j.
We are looking for the shortest tour visiting all cities exactly once while starting and ending in city s_1.', 'A straightforward way to find the shortest tour is by trying all possible permutations of the n cities.', 'The resulting algorithm has a complexity of O(n!).', 'However, dynamic programming can be used to find the shortest tour in exponential time, namely in O(n^2 · 2^n), using the algorithm by Held and Karp.', 'The approach recursively evaluates a quantity Q(C, j), where C is the set of already visited cities and s_j is the last visited city.', 'Subsets C of increasing cardinality c are processed.', 'The algorithm works due to the fact that not all permutations of cities have to be considered explicitly.', 'For a given partial hypothesis (C, j), the order in which the cities in C have been visited can be ignored (except j); only the score for the best path reaching j has to be stored.', 'This algorithm can be applied to statistical machine translation.', 'Using the concept of inverted alignments, we explicitly take care of the coverage constraint by introducing a coverage set C of source sentence positions that have already been processed.', 'The advantage is that we can recombine search hypotheses by dynamic programming.', 'The cities of the traveling salesman problem correspond to source words f_j in the input string of length J.', 'Table 1: DP algorithm for statistical machine translation. input: source string f_1 ... f_j ... f_J; initialization; for each cardinality c = 1, 2, ..., J do: for each pair (C, j), where j ∈ C and |C| = c, do: for each target word e ∈ E: Q_{e′}(e, C, j) = p(f_j | e) · max_{δ, e′′, j′ ∈ C \ {j}} { p(j | j′, J) · p(δ) · p_δ(e | e′, e′′) · Q_{e′′}(e′, C \ {j}, j′) }.
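The Held and Karp subset recursion described above can be sketched in code. A minimal Python illustration (ours, not part of the paper's tooling; the function name and the cost matrix `dist` are illustrative assumptions), evaluating Q(C, j) over subsets of increasing cardinality:

```python
from itertools import combinations

def held_karp(dist):
    """Shortest tour starting and ending at city 0, visiting every city once.

    Held-Karp dynamic program in O(n^2 * 2^n): Q[(C, j)] is the cost of the
    best path from city 0 through the visited set C, ending at city j.
    `dist` is an n x n cost matrix (an illustrative assumption).
    """
    n = len(dist)
    # Base case: paths 0 -> j visiting only {j}.
    Q = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    # Process subsets C of increasing cardinality c, as in the paper's DP.
    for c in range(2, n):
        for C in combinations(range(1, n), c):
            C = frozenset(C)
            for j in C:
                # Order of visits inside C \ {j} is irrelevant; keep best score only.
                Q[(C, j)] = min(Q[(C - {j}, k)] + dist[k][j] for k in C - {j})
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(Q[(full, j)] + dist[j][0] for j in full)
```

The key point mirrored from the text: for a partial hypothesis (C, j), only the best score reaching j is stored, so not all permutations are enumerated.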
For the final translation each source position is considered exactly once.', 'Subsets of partial hypotheses with coverage sets C of increasing cardinality c are processed.', 'For a trigram language model, the partial hypotheses are of the form (e′, e, C, j).', 'e′, e are the last two target words, C is a coverage set for the already covered source positions and j is the last position visited.', 'Each distance in the traveling salesman problem now corresponds to the negative logarithm of the product of the translation, alignment and language model probabilities.', 'The following auxiliary quantity is defined: Q_{e′}(e, C, j) := probability of the best partial hypothesis (e_1^i, b_1^i), where C = {b_k | k = 1, ..., i}, b_i = j, e_i = e and e_{i-1} = e′.', 'The type of alignment we have considered so far requires the same length for source and target sentence, i.e. I = J. Evidently, this is an unrealistic assumption, therefore we extend the concept of inverted alignments as follows: When adding a new position to the coverage set C, we might generate either δ = 0 or δ = 1 new target words.', 'For δ = 1, a new target language word is generated using the trigram language model p(e | e′, e′′).', 'For δ = 0, no new target word is generated, while an additional source sentence position is covered.', 'A modified language model probability p_δ(e | e′, e′′) is defined as follows: p_δ(e | e′, e′′) = 1.0 if δ = 0 and p(e | e′, e′′) if δ = 1. We associate a distribution p(δ) with the two cases δ = 0 and δ = 1 and set p(δ = 1) = 0.7.', 'The above auxiliary quantity satisfies the following recursive DP equation: Q_{e′}(e, C, j) = p(f_j | e) · max_{δ, e′′, j′ ∈ C \ {j}} { p(j | j′, J) · p(δ) · p_δ(e | e′, e′′) · Q_{e′′}(e′, C \ {j}, j′) }.', 'Figure 2: Order in which source positions are visited for the example given in Fig. 1 (states Initial, Skip, Verb, Final; order: 1. In, 2. diesem, 3. Fall, 4. mein, 5. Kollege, 6. kann, 7. nicht, 8. besuchen, 9. Sie, 10. am, 11. vierten, 12. Mai, 13.).', 'The DP equation is evaluated recursively for each hypothesis (e′, e, C, j).', 'The resulting algorithm is depicted in Table 1.', 'The complexity of the algorithm is O(E^3 · J^2 · 2^J), where E is the size of the target language vocabulary.', '3.1 Word Reordering with Verb Group Restrictions: Quasi-monotone Search.', 'The above search space is still too large to allow the translation of a medium-length input sentence.', 'On the other hand, only very restricted reorderings are necessary, e.g. for the translation direction from German to English the monotonicity constraint is violated mainly with respect to the German verb group.', 'Table 2: Coverage set hypothesis extensions for the IBM reordering. No. 1: ({1, ..., m} \ {l}, l′) → ({1, ..., m}, l); No. 2: ({1, ..., m} \ {l, l1}, l′) → ({1, ..., m} \ {l1}, l); No. 3: ({1, ..., m} \ {l, l1, l2}, l′) → ({1, ..., m} \ {l1, l2}, l); No. 4: ({1, ..., m-1} \ {l1, l2, l3}, l′) → ({1, ..., m} \ {l1, l2, l3}, m).', 'In German, the verb group usually consists of a left and a right verbal brace, whereas in English the words of the verb group usually form a sequence of consecutive words.', 'Our new approach, which is called quasi-monotone search, processes the source sentence monotonically, while explicitly taking into account the positions of the German verb group.', 'A typical situation is shown in Figure 1.', ""When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence."", ""Then, the German infinitive 'besuchen' and the negation particle 'nicht' are translated."", 'The translation of one position in the source sentence may be postponed for up to L = 3 source positions, and the translation of up to two source positions may be anticipated for at most R =
10 source positions.', 'To formalize the approach, we introduce four verb group states S: Initial (I): A contiguous, initial block of source positions is covered.', 'Skipped (K): The translation of up to one word may be postponed. Verb (V): The translation of up to two words may be anticipated.', 'Final (F): The rest of the sentence is processed monotonically taking account of the already covered positions.', 'While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position.', 'The sequence of states needed to carry out the word reordering example in Fig. 1 is given in Fig. 2.', 'The 13 positions of the source sentence are processed in the order shown.', 'A position is represented by the word at that position.', 'Using these states, we define partial hypothesis extensions, which are of the following type: (S′, C \ {j}, j′) → (S, C, j).', 'Not only the coverage set C and the positions j, j′, but also the verb group states S, S′ are taken into account.', 'For brevity, we omit the target words e, e′ in the formulation of the search hypotheses.', 'There are 13 types of extensions needed to describe the verb group reordering.', 'The details are given in (Tillmann, 2000).', 'For each extension a new position is added to the coverage set.', 'Covering the first uncovered position in the source sentence, we use the language model probability p(e | $, $).', 'Here, $ is the sentence boundary symbol, which is thought to be at position 0 in the target sentence.', 'The search starts in the hypothesis (I, ∅, 0).', '∅ denotes the empty set, where no source sentence position is covered.', 'The following recursive equation is evaluated: Q_{e′}(e, S, C, j) = p(f_j | e) · max_{δ, e′′} { p(j | j′, J) · p(δ) · p_δ(e | e′, e′′) · max_{(S′, j′): (S′, C \ {j}, j′) → (S, C, j), j′ ∈ C \ {j}} Q_{e′′}(e′, S′, C \ {j}, j′) } (2).', 'The search ends in the hypotheses (I, {1, ..., J}, j).', '{1, ..., J} denotes a coverage
set including all positions from the starting position 1 to position J, and j ∈ {J - L, ..., J}.', 'The final score is obtained from: max_{e, e′, j ∈ {J - L, ..., J}} p($ | e, e′) · Q_{e′}(e, I, {1, ..., J}, j), where p($ | e, e′) denotes the trigram language model, which predicts the sentence boundary $ at the end of the target sentence.', 'The complexity of the quasi-monotone search is O(E^3 · J · (R^2 + L·R)).', 'The proof is given in (Tillmann, 2000).', '3.2 Reordering with IBM-Style Restrictions.', 'We compare our new approach with the word reordering used in the IBM translation approach (Berger et al., 1996).', 'A detailed description of the search procedure used is given in this patent.', 'Source sentence words are aligned with hypothesized target sentence words, where the choice of a new source word, which has not been aligned with a target word yet, is restricted (footnote 1: In the approach described in (Berger et al., 1996), a morphological analysis is carried out and word morphemes rather than full-form words are used during the search. Here, we process only full-form words within the translation procedure.).', 'A procedural definition to restrict the number of permutations carried out for the word reordering is given.', 'During the search process, a partial hypothesis is extended by choosing a source sentence position, which has not been aligned with a target sentence position yet.', 'Only one of the first n positions which are not already aligned in a partial hypothesis may be chosen, where n is set to 4.', 'The restriction can be expressed in terms of the number of uncovered source sentence positions to the left of the rightmost position m in the coverage set.', 'This number must be less than or equal to n - 1.', 'Otherwise for the predecessor search hypothesis, we would have chosen a position that would not have been among the first n uncovered positions.', 'Ignoring the identity of the target language words e and e′, the possible partial hypothesis
extensions due to the IBM restrictions are shown in Table 2.', 'In general, m, l, l′ ∉ {l1, l2, l3}, and in lines 3 and 4, l′ must be chosen so as not to violate the above reordering restriction.', 'Note that in line 4 the last visited position for the successor hypothesis must be m. Otherwise, there would be four uncovered positions for the predecessor hypothesis, violating the restriction.', 'A dynamic programming recursion similar to the one in Eq. 2 is evaluated.', 'In this case, we have no finite-state restrictions for the search space.', 'The search starts in hypothesis (∅, 0) and ends in the hypotheses ({1, ..., J}, j), with j ∈ {1, ..., J}.', 'This approach leads to a search procedure with complexity O(E^3 · J^4).', 'The proof is given in (Tillmann, 2000).', '4.1 The Task and the Corpus.', 'We have tested the translation system on the Verbmobil task (Wahlster 1993).', 'The Verbmobil task is an appointment scheduling task.', 'Two subjects are each given a calendar and they are asked to schedule a meeting.', 'The translation direction is from German to English.', 'A summary of the corpus used in the experiments is given in Table 3.', 'The perplexity for the trigram language model used is 26.5.', 'Although the ultimate goal of the Verbmobil project is the translation of spoken language, the input used for the translation experiments reported on in this paper is the (more or less) correct orthographic transcription of the spoken sentences.', 'Thus, the effects of spontaneous speech are present in the corpus, e.g.
the syntactic structure of the sentence is rather less restricted; however, the effect of speech recognition errors is not covered.', 'For the experiments, we use a simple preprocessing step.', 'German city names are replaced by category markers.', 'The translation search is carried out with the category markers and the city names are resubstituted into the target sentence as a postprocessing step.', 'Table 3: Training and test conditions for the Verbmobil task (*number of words without punctuation marks). Training (German / English): Sentences 58 073; Words 519 523 / 549 921; Words* 418 979 / 453 632; Vocabulary Size 7939 / 4648; Singletons 3454 / 1699. Test-147 (German / English): Sentences 147; Words 1 968 / 2 173; Perplexity - / 26.5.', 'Table 4: Multi-reference word error rate (mWER) and subjective sentence error rate (SSER) for three different search procedures. MonS: CPU time 0.9 sec, mWER 42.0%, SSER 30.5%; QmS: 10.6 sec, 34.4%, 23.8%; IbmS: 28.6 sec, 38.2%, 26.2%.', '4.2 Performance Measures.', 'The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors.', 'On average, 6 reference translations per automatic translation are available.', 'The Levenshtein distance between the automatic translation and each of the reference translations is computed, and the minimum Levenshtein distance is taken.', 'This measure has the advantage of being completely automatic.', 'SSER: subjective sentence error rate: For a more detailed analysis, the translations are judged by a human test person.', 'For the error counts, a range from 0.0 to 1.0 is used.', 'An error count of 0.0 is assigned to a perfect translation, and an error count of 1.0 is assigned to a semantically and syntactically wrong translation.', '4.3 Translation Experiments.', 'For the translation experiments, Eq.
2 is recursively evaluated.', 'We apply a beam search concept as in speech recognition.', 'However, there is no global pruning.', 'Search hypotheses are processed separately according to their coverage set C. The best scored hypothesis for each coverage set is computed: Q_Beam(C) = max_{e, e′, S, j} Q_{e′}(e, S, C, j). The hypothesis (e′, e, S, C, j) is pruned if: Q_{e′}(e, S, C, j) < t_0 · Q_Beam(C), where t_0 is a threshold to control the number of surviving hypotheses.', 'Additionally, for a given coverage set, at most 250 different hypotheses are kept during the search process, and the number of different words to be hypothesized by a source word is limited.', 'For each source word f, the list of its possible translations e is sorted according to p(f | e) · p_uni(e), where p_uni(e) is the unigram probability of the English word e. It is sufficient to consider only the best 50 words.', 'We show translation results for three approaches: the monotone search (MonS), where no word reordering is allowed (Tillmann, 1997), the quasi-monotone search (QmS) as presented in this paper, and the IBM-style (IbmS) search as described in Section 3.2.', 'Table 4 shows translation results for the three approaches.', 'The computing time is given in terms of CPU time per sentence (on a 450-MHz Pentium-III PC).', 'Here, the pruning threshold t_0 = 10.0 is used.', 'Translation errors are reported in terms of multi-reference word error rate (mWER) and subjective sentence error rate (SSER).', 'The monotone search performs worst in terms of both error rates mWER and SSER.', 'The computing time is low, since no reordering is carried out.', 'The quasi-monotone search performs best in terms of both error rates mWER and SSER.', 'Additionally, it works about 3 times as fast as the IBM-style search.', 'For our demonstration system, we typically use the pruning threshold t_0 = 5.0 to speed up the search by a factor of 5 while allowing for a small degradation in translation accuracy.', 'The effect of the pruning threshold t_0
is shown in Table 5.', 'The computing time, the number of search errors, and the multi-reference WER (mWER) are shown as a function of t_0.', 'The negative logarithm of t_0 is reported.', 'The translation scores for the hypotheses generated with different threshold values t_0 are compared to the translation scores obtained with a conservatively large threshold t_0 = 10.0. For each test series, we count the number of sentences whose score is worse than the corresponding score of the test series with the conservatively large threshold t_0 = 10.0, and this number is reported as the number of search errors.', 'Depending on the threshold t_0, the search algorithm may miss the globally optimal path, which typically results in additional translation errors.', 'Decreasing the threshold results in higher mWER due to additional search errors.', 'Table 5: Effect of the beam threshold on the number of search errors (147 sentences). QmS: t_0 = 0.0: 0.07 sec CPU, 108 search errors, mWER 42.6%; t_0 = 1.0: 0.13 sec, 85, 37.8%; t_0 = 2.5: 0.35 sec, 44, 36.6%; t_0 = 5.0: 1.92 sec, 4, 34.6%; t_0 = 10.0: 10.6 sec, 0, 34.5%. IbmS: t_0 = 0.0: 0.14 sec, 108, 43.4%; t_0 = 1.0: 0.3 sec, 84, 39.5%; t_0 = 2.5: 0.8 sec, 45, 39.1%; t_0 = 5.0: 4.99 sec, 7, 38.3%; t_0 = 10.0: 28.52 sec, 0, 38.2%.', 'Table 6 shows example translations obtained by the three different approaches.', 'Again, the monotone search performs worst.', 'In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it cannot properly take into account the word reordering due to the German verb group.', ""The German finite verbs 'bin' (second example) and 'könnten' (third example) are too far away from the personal pronouns 'ich' and 'Sie' (6 and 5 source sentence positions, respectively)."", 'In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable.', 'In this paper, we have presented a new, efficient DP-based search procedure for statistical machine translation.', 'The approach assumes that the word
reordering is restricted to a few positions in the source sentence.', 'The approach has been successfully tested on the 8 000-word Verbmobil task.', 'Future extensions of the system might include: 1) An extended translation model, where we use more context to predict a source word.', '2) An improved language model, which takes into account syntactic structure, e.g. to ensure that a proper English verb group is generated.', '3) A tight coupling with the speech recognizer output.', 'This work has been supported as part of the Verbmobil project (contract number 01 IV 601 A) by the German Federal Ministry of Education, Science, Research and Technology and as part of the Eutrans project (ESPRIT project number 30268) by the European Community.', 'Table 6: Example Translations for the Verbmobil task.', 'Input: Ja , wunderbar . Können wir machen . MonS: Yes, wonderful.', 'Can we do . QmS: Yes, wonderful.', 'We can do that . IbmS: Yes, wonderful.', 'We can do that . Input: Das ist zu knapp , weil ich ab dem dritten in Kaiserslautern bin . Genaugenommen nur am dritten . Wie wäre es denn am ähm Samstag , dem zehnten Februar ? MonS: That is too tight , because I from the third in Kaiserslautern . In fact only on the third . How about ähm Saturday , the tenth of February ? QmS: That is too tight , because I am from the third in Kaiserslautern . In fact only on the third . Ähm how about Saturday , February the tenth ? IbmS: That is too tight , from the third because I will be in Kaiserslautern . In fact only on the third . Ähm how about Saturday , February the tenth ? Input: Wenn Sie dann noch den siebzehnten könnten , wäre das toll , ja . MonS: If you then also the seventeenth could , would be the great , yes . QmS: If you could then also the seventeenth , that would be great , yes . IbmS: Then if you could even take seventeenth , that would be great , yes . Input: Ja , das kommt mir sehr gelegen . Machen wir es dann am besten so .
MonS: Yes , that suits me perfectly . Do we should best like that . QmS: Yes , that suits me fine . We do it like that then best . IbmS: Yes , that suits me fine . We should best do it like that .']",extractive -P11-1061_swastika,P11-1061,5,158,They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.,We have shown the efficacy of graph-based label propagation for projecting part-of-speech information across languages.,"['Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections', 'We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language.', 'Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages.', 'We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg-Kirkpatrick et al., 2010).', 'Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.', 'Supervised learning approaches have advanced the state-of-the-art on a variety of tasks in natural language processing, resulting in highly accurate systems.', 'Supervised part-of-speech (POS) taggers, for example, approach the level of inter-annotator agreement (Shen et al., 2007, 97.3% accuracy for English).', 'However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate.', 'Unsupervised learning approaches appear to be a natural solution to this problem, as they require only unannotated text for training models.', 'Unfortunately, the best completely unsupervised English POS tagger (that does not
make use of a tagging dictionary) reaches only 76.1% accuracy (Christodoulopoulos et al., 2010), making its practical usability questionable at best.', 'To bridge this gap, we consider a practically motivated scenario, in which we want to leverage existing resources from a resource-rich language (like English) when building tools for resource-poor foreign languages. We assume that absolutely no labeled training data is available for the foreign language of interest, but that we have access to parallel data with a resource-rich language.', 'This scenario is applicable to a large set of languages and has been considered by a number of authors in the past (Alshawi et al., 2000; Xi and Hwa, 2005; Ganchev et al., 2009).', 'Naseem et al. (2009) and Snyder et al. (2009) study related but different multilingual grammar and tagger induction tasks, where it is assumed that no labeled data at all is available.', 'Our work is closest to that of Yarowsky and Ngai (2001), but differs in two important ways.', 'First, we use a novel graph-based framework for projecting syntactic information across language boundaries.', 'To this end, we construct a bilingual graph over word types to establish a connection between the two languages (§3), and then use graph label propagation to project syntactic information from English to the foreign language (§4).', 'Second, we treat the projected labels as features in an unsupervised model (§5), rather than using them directly for supervised training.', 'To make the projection practical, we rely on the twelve universal part-of-speech tags of Petrov et al. (2011).', 'Syntactic universals are a well studied concept in linguistics (Carnie, 2002; Newmeyer, 2005), and were recently used in similar form by Naseem et al.
(2010) for multilingual grammar induction.', 'Because there might be some controversy about the exact definitions of such universals, this set of coarse-grained POS categories is defined operationally, by collapsing language (or treebank) specific distinctions to a set of categories that exists across all languages.', 'These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics,2 by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels.', 'We evaluate our approach on eight European languages (§6), and show that both our contributions provide consistent and statistically significant improvements.', 'Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.’s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%).', 'The focus of this work is on building POS taggers for foreign languages, assuming that we have an English POS tagger and some parallel text between the two languages.', 'Central to our approach (see Algorithm 1) is a bilingual similarity graph built from a sentence-aligned parallel corpus.', 'As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.', 'Graph construction does not require any labeled data, but makes use of two similarity functions.', 'The edge weights between the foreign language trigrams are computed using a co-occurence based similarity function, designed to indicate how syntactically similar the middle words of the connected trigrams are (§3.2).', 'To establish a soft correspondence between the two languages, we use a second similarity function, 
which leverages standard unsupervised word alignment statistics (§3.3). Since we have no labeled foreign data, our goal is to project syntactic information from the English side to the foreign side.', 'To initialize the graph we tag the English side of the parallel text using a supervised model.', 'By aggregating the POS labels of the English tokens to types, we can generate label distributions for the English vertices.', 'Label propagation can then be used to transfer the labels to the peripheral foreign vertices (i.e. the ones adjacent to the English vertices) first, and then among all of the foreign vertices (§4).', 'The POS distributions over the foreign trigram types are used as features to learn a better unsupervised POS tagger (§5).', 'The following three sections elaborate these different stages in more detail.', 'In graph-based learning approaches one constructs a graph whose vertices are labeled and unlabeled examples, and whose weighted edges encode the degree to which the examples they link have the same label (Zhu et al., 2003).', 'Graph construction for structured prediction problems such as POS tagging is non-trivial: on the one hand, using individual words as the vertices throws away the context necessary for disambiguation; on the other hand, it is unclear how to define (sequence) similarity if the vertices correspond to entire sentences.', 'Altun et al. (2005) proposed a technique that uses graph based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning.', 'More recently, Subramanya et al.
(2010) defined a graph over the cliques in an underlying structured prediction model.', 'They considered a semi-supervised POS tagging scenario and showed that one can use a graph over trigram types, and edge weights based on distributional similarity, to improve a supervised conditional random field tagger.', 'We extend Subramanya et al.’s intuitions to our bilingual setup.', 'Because the information flow in our graph is asymmetric (from English to the foreign language), we use different types of vertices for each language.', 'The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010).', 'On the English side, however, the vertices (denoted by Ve) correspond to word types.', 'Because all English vertices are going to be labeled, we do not need to disambiguate them by embedding them in trigrams.', 'Furthermore, we do not connect the English vertices to each other, but only to foreign language vertices.4 The graph vertices are extracted from the different sides of a parallel corpus (De, Df) and an additional unlabeled monolingual foreign corpus Ff, which will be used later for training.', 'We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages.', 'Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. 
(2010).', 'We briefly review it here for completeness.', 'We define a symmetric similarity function K(ui, uj) over two foreign language vertices ui, uj ∈ Vf based on the co-occurrence statistics of the nine feature concepts given in Table 1.', 'Each feature concept is akin to a random variable and its occurrence in the text corresponds to a particular instantiation of that random variable.', 'For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two.5 The similarity between two trigram types is given by summing over the PMI values over feature instantiations that they have in common.', 'This is similar to stacking the different feature instantiations into long (sparse) vectors and computing the cosine similarity between them.', 'Finally, note that while most feature concepts are lexicalized, others, such as the suffix concept, are not.', 'Given this similarity function, we define a nearest neighbor graph, where the edge weight for the n most similar vertices is set to the value of the similarity function and to 0 for all other vertices.', 'We use N(u) to denote the neighborhood of vertex u, and fixed n = 5 in our experiments.', 'To define a similarity function between the English and the foreign vertices, we rely on high-confidence word alignments.', 'Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences De (5Note that many combinations are impossible, giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don’t have words in common.) 
and their foreign language translations Df.6 Label propagation in the graph will provide coverage and high recall, and we therefore extract only intersected high-confidence (> 0.9) alignments De↔f.', 'Based on these high-confidence alignments we can extract tuples of the form [u ↔ v], where u is a foreign trigram type, whose middle word aligns to an English word type v. Our bilingual similarity function then sets the edge weights in proportion to these tuple counts.', 'So far the graph has been completely unlabeled.', 'To initialize the graph for label propagation we use a supervised English tagger to label the English side of the bitext.7 We then simply count the individual labels of the English tokens and normalize the counts to produce tag distributions over English word types.', 'These tag distributions are used to initialize the label distributions over the English vertices in the graph.', 'Note that since all English vertices were extracted from the parallel text, we will have an initial label distribution for all vertices in Ve.', 'A very small excerpt from an Italian-English graph is shown in Figure 1.', 'As one can see, only the trigrams [suo incarceramento ,], [suo iter ,] and [suo carattere ,] are connected to English words.', 'In this particular case, all English vertices are labeled as nouns by the supervised tagger.', 'In general, the neighborhoods can be more diverse and we allow a soft label distribution over the vertices.', 'It is worth noting that the middle words of the Italian trigrams are nouns too, which exhibits the fact that the similarity metric connects types having the same syntactic category.', 'In the label propagation stage, we propagate the automatic English tags to the aligned Italian trigram types, followed by further propagation solely among the Italian vertices. 
the Italian vertices are connected to an automatically labeled English vertex.', 'Label propagation is used to propagate these tags inwards and results in tag distributions for the middle word of each Italian trigram.', 'Given the bilingual graph described in the previous section, we can use label propagation to project the English POS labels to the foreign language.', 'We use label propagation in two stages to generate soft labels on all the vertices in the graph.', 'In the first stage, we run a single step of label propagation, which transfers the label distributions from the English vertices to the connected foreign language vertices (say, Vfl) at the periphery of the graph.', 'Note that because we extracted only high-confidence alignments, many foreign vertices will not be connected to any English vertices.', 'This stage of label propagation results in a tag distribution ri over labels y, which encodes the proportion of times the middle word of ui ∈ Vfl aligns to English words vy tagged with label y: The second stage consists of running traditional label propagation to propagate labels from these peripheral vertices Vfl to all foreign language vertices in the graph, optimizing the following objective: 5 POS Induction. After running label propagation (LP), we compute tag probabilities for foreign word types x by marginalizing the POS tag distributions of foreign trigrams ui = x− x x+ over the left and right context words: where the qi (i = 1, ... 
, |Vf|) are the label distributions over the foreign language vertices and µ and ν are hyperparameters that we discuss in §6.4.', 'We use a squared loss to penalize neighboring vertices that have different label distributions: ‖qi − qj‖2 = Σy (qi(y) − qj(y))2, and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y.', 'It can be shown that this objective is convex in q.', 'The first term in the objective function is the graph smoothness regularizer which encourages the distributions of similar vertices (large wij) to be similar.', 'The second term is a regularizer and encourages all type marginals to be uniform to the extent that is allowed by the first two terms (cf. maximum entropy principle).', 'If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that the converged marginal for this vertex will be uniform over all tags, allowing the middle word of such an unlabeled vertex to take on any of the possible tags.', 'While it is possible to derive a closed-form solution for this convex objective function, it would require the inversion of a matrix of order |Vf|.', 'Instead, we resort to an iterative update-based method.', 'We formulate the update as follows: where ∀ui ∈ Vf \\ Vfl, γi(y) and κi are defined as: We ran this procedure for 10 iterations.', 'We then extract a set of possible tags tx(y) by eliminating labels whose probability is below a threshold value τ: We describe how we choose τ in §6.4.', 'This vector tx is constructed for every word in the foreign vocabulary and will be used to provide features for the unsupervised foreign language POS tagger.', 'We develop our POS induction model based on the feature-based HMM of Berg-Kirkpatrick et al. 
(2010).', 'For a sentence x and a state sequence z, a first-order Markov model defines a distribution: (9) where Val(X) corresponds to the entire vocabulary.', 'In a traditional Markov model, the emission distribution PΘ(Xi = xi |Zi = zi) is a set of multinomials.', 'The feature-based model replaces the emission distribution with a log-linear model, such that:', 'This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation.', 'In our experiments, we used the same set of features as Berg-Kirkpatrick et al. (2010): an indicator feature based on the word identity x, features checking whether x contains digits or hyphens, whether the first letter of x is upper case, and suffix features up to length 3.', 'All features were conjoined with the state z.', 'We trained this model by optimizing the following objective function: Note that this involves marginalizing out all possible state configurations z for a sentence x, resulting in a non-convex objective.', 'To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989).', 'For English POS tagging, Berg-Kirkpatrick et al. 
(2010) found that this direct gradient method performed better (>7% absolute accuracy) than using a feature-enhanced modification of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977).8 Moreover, this route of optimization outperformed a vanilla HMM trained with EM by 12%.', 'We adopted this state-of-the-art model because it makes it easy to experiment with various ways of incorporating our novel constraint feature into the log-linear emission model.', 'This feature ft incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector tx.', 'The function A : F → C maps from the language-specific fine-grained tagset F to the coarser universal tagset C and is described in detail in §6.2: Note that when tx(y) = 1 the feature value is 0 and has no effect on the model, while its value is −∞ when tx(y) = 0 and constrains the HMM’s state space.', 'This formulation of the constraint feature is equivalent to the use of a tagging dictionary extracted from the graph using a threshold τ on the posterior distribution of tags for a given word type (Eq.', '7).', 'It would have therefore also been possible to use the integer programming (IP) based approach of Ravi and Knight (2009) instead of the feature-HMM for POS induction on the foreign side.', 'However, we do not explore this possibility in the current work.', 'Before presenting our results, we describe the datasets that we used, as well as two baselines.', 'We utilized two kinds of datasets in our experiments: (i) monolingual treebanks9 and (ii) large amounts of parallel text with English on one side.', 'The availability of these resources guided our selection of foreign languages.', 'For monolingual treebank data we relied on the CoNLL-X and CoNLL-2007 shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007).', 'The parallel data came from the Europarl corpus (Koehn, 2005) and the ODS United Nations dataset (UN, 2006).', 
'Taking the intersection of languages in these resources, and selecting languages with large amounts of parallel data, yields the following set of eight Indo-European languages: Danish, Dutch, German, Greek, Italian, Portuguese, Spanish and Swedish.', 'Of course, we are primarily interested in applying our techniques to languages for which no labeled resources are available.', 'However, we needed to restrict ourselves to these languages in order to be able to evaluate the performance of our approach.', 'We paid particular attention to minimizing the number of free parameters, and used the same hyperparameters for all language pairs, rather than attempting language-specific tuning.', 'We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available.', 'We use the universal POS tagset of Petrov et al. (2011) in our experiments.10 This set C consists of the following 12 coarse-grained tags: NOUN (nouns), VERB (verbs), ADJ (adjectives), ADV (adverbs), PRON (pronouns), DET (determiners), ADP (prepositions or postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), PUNC (punctuation marks) and X (a catch-all for other categories such as abbreviations or foreign words).', 'While there might be some controversy about the exact definition of such a tagset, these 12 categories cover the most frequent parts of speech and exist in one form or another in all of the languages that we studied.', 'For each language under consideration, Petrov et al. 
(2011) provide a mapping A from the fine-grained language-specific POS tags in the foreign treebank to the universal POS tags.', 'The supervised POS tagging accuracies (on this tagset) are shown in the last row of Table 2.', 'The taggers were trained on datasets labeled with the universal tags.', 'The number of latent HMM states for each language in our experiments was set to the number of fine tags in the language’s treebank.', 'In other words, the set of hidden states F was chosen to be the fine set of treebank tags.', 'Therefore, the number of fine tags varied across languages for our experiments; however, one could as well have fixed the set of HMM states to be a constant across languages, and created one mapping to the universal POS tagset.', 'To provide a thorough analysis, we evaluated three baselines and two oracles in addition to two variants of our graph-based approach.', 'We were intentionally lenient with our baselines: bilingual information by projecting POS tags directly across alignments in the parallel data.', 'For unaligned words, we set the tag to the most frequent tag in the corresponding treebank.', 'For each language, we took the same number of sentences from the bitext as there are in its treebank, and trained a supervised feature-HMM.', 'This can be seen as a rough approximation of Yarowsky and Ngai (2001).', 'We tried two versions of our graph-based approach: feature after the first stage of label propagation (Eq.', '1).', 'Because many foreign word types are not aligned to an English word (see Table 3), and we do not run label propagation on the foreign side, we expect the projected information to have less coverage.', 'Furthermore, we expect the label distributions on the foreign side to be fairly noisy, because the graph constraints have not been taken into account yet.', 'Our oracles took advantage of the labeled treebanks: While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be 
set.', 'Fortunately, performance was stable across various values, and we were able to use the same hyperparameters for all languages.', 'We used C = 1.0 as the L2 regularization constant in (Eq.', '10) and trained both EM and L-BFGS for 1000 iterations.', 'When extracting the vector tx, used to compute the constraint feature from the graph, we tried three threshold values for τ (see Eq.', '7).', 'Because we don’t have a separate development set, we used the training set to select among them and found 0.2 to work slightly better than 0.1 and 0.3.', 'For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, τ = 0.2 can be used.', 'For graph propagation, the hyperparameter ν was set to 2 × 10−6 and was not tuned.', 'The graph was constructed using 2 million trigrams; we chose these by truncating the parallel datasets up to the number of sentence pairs that contained 2 million trigrams.', 'Table 2 shows our complete set of results.', 'As expected, the vanilla HMM trained with EM performs the worst.', 'The feature-HMM model works better for all languages, generalizing the results achieved for English by Berg-Kirkpatrick et al. 
(2010).', 'Our “Projection” baseline is able to benefit from the bilingual information and greatly improves upon the monolingual baselines, but falls short of the “No LP” model by 2.5% on average.', 'The “No LP” model does not outperform direct projection for German and Greek, but performs better for six out of eight languages.', 'Overall, it gives improvements ranging from 1.1% for German to 14.7% for Italian, for an average improvement of 8.3% over the unsupervised feature-HMM model.', 'For comparison, the completely unsupervised feature-HMM baseline accuracy on the universal POS tags for English is 79.4%, and goes up to 88.7% with a treebank dictionary.', 'Our full model (“With LP”) outperforms the unsupervised baselines and the “No LP” setting for all languages.', 'It falls short of the “Projection” baseline for German, but is statistically indistinguishable in terms of accuracy.', 'As indicated by bolding, for seven out of eight languages the improvements of the “With LP” setting are statistically significant with respect to the other models, including the “No LP” setting.11 Overall, it performs 10.4% better than the hitherto state-of-the-art feature-HMM baseline, and 4.6% better than direct projection, when we macro-average the accuracy over all languages.', 'Our full model outperforms the “No LP” setting because it has better vocabulary coverage and allows the extraction of a larger set of constraint features.', 'We tabulate this increase in Table 3.', 'For all languages, the vocabulary sizes increase by several thousand words.', 'Although the tag distributions of the foreign words (Eq.', '6) are noisy, the results confirm that label propagation within the foreign language part of the graph adds significant quality for every language.', 'Figure 2 shows an excerpt of a sentence from the Italian test set and the tags assigned by four different models, as well as the gold tags.', 'While the first three models get three to four tags wrong, our best model gets 
only one word wrong and is the most accurate among the four models for this example.', 'Examining the word fidanzato for the “No LP” and “With LP” models is particularly instructive.', 'As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext.', 'As a result, its POS tag needs to be induced in the “No LP” case, while the correct tag is available as a constraint feature in the “With LP” case. (11A word-level paired t-test is significant at p < 0.01 for Danish, Greek, Italian, Portuguese, Spanish and Swedish, and p < 0.05 for Dutch.)', 'We have shown the efficacy of graph-based label propagation for projecting part-of-speech information across languages.', 'Because we are interested in applying our techniques to languages for which no labeled resources are available, we paid particular attention to minimizing the number of free parameters and used the same hyperparameters for all language pairs.', 'Our results suggest that it is possible to learn accurate POS taggers for languages which do not have any annotated data, but have translations into a resource-rich language.', 'Our results outperform strong unsupervised baselines as well as approaches that rely on direct projections, and bridge the gap between purely supervised and unsupervised POS tagging models.', 'We would like to thank Ryan McDonald for numerous discussions on this topic.', 'We would also like to thank Amarnag Subramanya for helping us with the implementation of label propagation and Shankar Kumar for access to the parallel data.', 'Finally, we thank Kuzman Ganchev and the three anonymous reviewers for helpful suggestions and comments on earlier drafts of this paper.']",extractive -J96-3004,J96-3004,7,399,"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","This architecture provides a uniform framework 
in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","['A Stochastic Finite-State Word-Segmentation Algorithm for Chinese', 'The initial stage of text analysis for any NLP task usually involves the tokenization of the input into words.', ' For languages like English one can assume, to a first approximation, that word boundaries are given by whitespace or punctuation.', ' In various Asian languages, including Chinese, on the other hand, whitespace is never used to delimit words, so one must resort to lexical information to ""reconstruct"" the word-boundary information.', ' In this paper we present a stochastic finite-state model wherein the basic workhorse is the weighted finite-state transducer.', ' The model segments Chinese text into dictionary entries and words derived by various productive lexical processes, and--since the primary intended application of this model is to text-to-speech synthesis--provides pronunciations for these words.', ' We evaluate the system\'s performance by comparing its segmentation ""judgments"" with the judgments of a pool of human segmenters, and the system is shown to perform quite well.', 'Any NLP application that presumes as input unrestricted text requires an initial phase of text analysis; such applications involve problems as diverse as machine translation, information retrieval, and text-to-speech synthesis (TTS).', 'An initial step of any text analysis task is the tokenization of the input into words.', 'For a language like English, this problem is generally regarded as trivial since words are delimited in English text by whitespace or marks of punctuation.', ""Thus in an English sentence such as I'm going to show up at the ACL one would reasonably conjecture that there are eight words separated by seven spaces."", ""A moment's reflection will reveal that things are not quite that simple."", ""There 
are clearly eight orthographic words in the example given, but if one were doing syntactic analysis one would probably want to consider I'm to consist of two syntactic words, namely I and am."", 'If one is interested in translation, one would probably want to consider show up as a single dictionary word since its semantic interpretation is not trivially derivable from the meanings of show and up.', ""And if one is interested in TTS, one would probably consider the single orthographic word ACL to consist of three phonological words-lei s'i d/-corresponding to the pronunciation of each of the letters in the acronym."", 'Space- or punctuation-delimited orthographic words are thus only a starting point for further analysis and can only be regarded as a useful hint at the desired division of the sentence into words.', '© 1996 Association for Computational Linguistics.', ""Figure 1: (a) a Chinese sentence, 'How do you say octopus in Japanese?'; (b) Plausible Segmentation: ri4wen2 zhang1yu2 zen3me0 shuo1 'Japanese' 'octopus' 'how' 'say'; (c) Implausible Segmentation: ri4 wen2zhang1 yu2 zen3me0 shuo1 'Japan' 'essay' 'fish' 'how' 'say'."", 'A Chinese sentence in (a) illustrating the lack of word boundaries.', 'In (b) is a plausible segmentation for this sentence; in (c) is an implausible segmentation.', 'Whether a language even has orthographic words is largely dependent on the writing system used to represent the language (rather than the language itself); the notion ""orthographic word"" is not universal.', 'Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words.1 Put another way, written Chinese simply lacks orthographic words.', 'In Chinese text, individual characters of the script, to which we shall refer by their traditional name of hanzi,2 are written one after another with no intervening spaces; a Chinese sentence is shown in Figure 1.3 Partly as a result of this, the notion ""word"" has never played a role in Chinese philological tradition, and the idea that Chinese lacks anything analogous to words in European languages has been prevalent among Western sinologists; see DeFrancis (1984).', 'Twentieth-century linguistic work on Chinese (Chao 1968; Li and Thompson 1981; Tang 1988,1989, inter alia) has revealed the incorrectness of this traditional view.', 'All notions of word, with the exception of the 
orthographic word, are as relevant in Chinese as they are in English, and just as is the case in other languages, a word in Chinese may correspond to one or more symbols in the orthography. 1 For a related approach to the problem of word-segmentation in Japanese, see Nagata (1994), inter alia.', ""2 Chinese ?l* han4zi4 'Chinese character'; this is the same word as Japanese kanji."", '3 Throughout this paper we shall give Chinese examples in traditional orthography, followed', 'immediately by a Romanization into the pinyin transliteration scheme; numerals following each pinyin syllable represent tones.', 'Examples will usually be accompanied by a translation, plus a morpheme-by-morpheme gloss given in parentheses whenever the translation does not adequately serve this purpose.', 'In the pinyin transliterations a dash (-) separates syllables that may be considered part of the same phonological word; spaces are used to separate plausible phonological words; and a plus sign (+) is used, where relevant, to indicate morpheme boundaries of interest.', ""A ren2 'person' is a fairly uncontroversial case of a monographemic word, and rplil zhong1guo2 (middle country) 'China' a fairly uncontroversial case of a digraphemic word."", ""The relevance of the distinction between, say, phonological words and, say, dictionary words is shown by an example like rpftl_A :;!:Hfllil zhong1hua2 ren2min2 gong4he2-guo2 (China people republic) 'People's Republic of China.'"", 'Arguably this consists of about three phonological words.', 'On the other hand, in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into English.', 'Thus, if one wants to segment words-for any purpose-from Chinese sentences, one faces a more difficult task than one does in English since one cannot use spacing as a guide.', 'For example, suppose one is building a TTS system for Mandarin Chinese.', 'For that 
application, at a minimum, one would want to know the phonological word boundaries.', 'Now, for this application one might be tempted to simply bypass the segmentation problem and pronounce the text character-by-character.', 'However, there are several reasons why this approach will not in general work: 1.', 'Many hanzi have more than one pronunciation, where the correct', ""pronunciation depends upon word affiliation: tfJ is pronounced de0 when it is a prenominal modification marker, but di4 in the word §tfJ mu4di4 'goal'; fl; is normally gan1 'dry,' but qian2 in a person's given name."", '2.', 'including Third Tone Sandhi (Shih 1986), which changes a 3 (low) tone into a 2 (rising) tone before another 3 tone: \'j"";gil, xiao3 [lao3 shu3] \'little rat,\' becomes xiao3 [lao2shu3], rather than xiao2 [lao2shu3], because the rule first applies within the word lao3shu3 \'rat,\' blocking its phrasal application.', '3.', 'In various dialects of Mandarin certain phonetic rules apply at the word level.', ""For example, in Northern dialects (such as Beijing), a full tone (1, 2, 3, or 4) is changed to a neutral tone (0) in the final syllable of many words: Jll dong1gua1 'winter melon' is often pronounced dong1gua0."", 'The high 1 tone of J1l would not normally neutralize in this fashion if it were functioning as a word on its own.', '4.', 'TTS systems in general need to do more than simply compute the', 'pronunciations of individual words; they also need to compute intonational phrase boundaries in long utterances and assign relative prominence to words in those utterances.', 'It has been shown for English (Wang and Hirschberg 1992; Hirschberg 1993; Sproat 1994, inter alia) that grammatical part of speech provides useful information for these tasks.', 'Given that part-of-speech labels are properties of words rather than morphemes, it follows that one cannot do part-of-speech assignment without having access to word-boundary information.', 'Making the reasonable assumption 
that similar information is relevant for solving these problems in Chinese, it follows that a prerequisite for intonation-boundary assignment and prominence assignment is word segmentation.', ""The points enumerated above are particularly related to TTS, but analogous arguments can easily be given for other applications; see for example Wu and Tseng's (1993) discussion of the role of segmentation in information retrieval."", 'There are thus some very good reasons why segmentation into words is an important task.', 'A minimal requirement for building a Chinese word segmenter is obviously a dictionary; furthermore, as has been argued persuasively by Fung and Wu (1994), one will perform much better at segmenting text by using a dictionary constructed with text of the same genre as the text to be segmented.', 'For novel texts, no lexicon that consists simply of a list of word entries will ever be entirely satisfactory, since the list will inevitably omit many constructions that should be considered words.', 'Among these are words derived by various productive processes, including: 1.', 'Morphologically derived words such as xue2sheng1+men0', ""(student+plural) 'students,' which is derived by the affixation of the plural affix men0 to the noun xue2sheng1."", '2.', ""Personal names such as 00, 3R; zhou1en1-lai2 'Zhou Enlai.'"", 'Of course, we', ""can expect famous names like Zhou Enlai's to be in many dictionaries, but names such as :fi lf;f; shi2ji1-lin2, the name of the second author of this paper, will not be found in any dictionary."", ""'Malaysia.'"", ""Again, famous place names will most likely be found in the dictionary, but less well-known names, such as 1PM± R; bu4lang3-shi4wei2-ke4 'Brunswick' (as in the New Jersey town name 'New Brunswick') will not generally be found."", 'In this paper we present a stochastic finite-state model for segmenting Chinese text into words, both words found in a (static) lexicon as well as words derived via the above-mentioned 
productive processes.', ""The segmenter handles the grouping of hanzi into words and outputs word pronunciations, with default pronunciations for hanzi it cannot group; we focus here primarily on the system's ability to segment text appropriately (rather than on its pronunciation abilities)."", 'The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.', 'It also incorporates the Good-Turing method (Baayen 1989; Church and Gale 1991) in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.', 'We will evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.', 'This latter evaluation compares the performance of the system with that of several human judges since, as we shall show, even people do not agree on a single correct way to segment a text.', 'Finally, this effort is part of a much larger program that we are undertaking to develop stochastic finite-state methods for text analysis with applications to TTS and other areas; in the final section of this paper we will briefly discuss this larger program so as to situate the work discussed here in a broader context.', '2.', 'A Brief Introduction to the Chinese Writing System Most readers will undoubtedly be at least somewhat familiar with the nature of the Chinese writing system, but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the Chinese script that will be relevant to topics discussed in this paper.', 'The first point we need to address is what type of linguistic object a hanzi represents.', 'Much confusion has been sown about Chinese writing by the use of the term ideograph, suggesting that hanzi somehow directly represent ideas.', 'The most accurate characterization of Chinese writing is that it is morphosyllabic (DeFrancis 1984): each hanzi 
represents one morpheme lexically and semantically, and one syllable phonologically.', ""Thus in a two-hanzi word like zhong1guo2 (middle country) 'China' there are two syllables, and at the same time two morphemes."", ""Of course, since the number of attested (phonemic) Mandarin syllables (roughly 1400, including tonal distinctions) is far smaller than the number of morphemes, it follows that a given syllable could in principle be written with any of several different hanzi, depending upon which morpheme is intended: the syllable zhong1 could be 'middle,' 'clock,' 'end,' or 'loyal.'"", 'A morpheme, on the other hand, usually corresponds to a unique hanzi, though there are a few cases where variant forms are found.', 'Finally, quite a few hanzi are homographs, meaning that they may be pronounced in several different ways, and in extreme cases apparently represent different morphemes: The prenominal modification marker de0 is presumably a different morpheme from the second morpheme of mu4di4, even though they are written the same way.4 The second point, which will be relevant in the discussion of personal names in Section 4.4, relates to the internal structure of hanzi.', ""Following the system devised under the Qing emperor Kang Xi, hanzi have traditionally been classified according to a set of approximately 200 semantic radicals; members of a radical class share a particular structural component, and often also share a common meaning (hence the term 'semantic')."", ""For example, hanzi containing the INSECT radical tend to denote insects and other crawling animals; examples include wa1 'frog,' feng1 'wasp,' and she2 'snake.'"", ""Similarly, hanzi sharing the GHOST radical tend to denote spirits and demons, such as gui3 'ghost' itself, mo2 'demon,' and yan3 'nightmare.'"", 'While the semantic aspect of radicals is by no means completely predictive, the semantic homogeneity of many classes is quite striking: for 
example 254 out of the 263 examples (97%) of the INSECT class listed by Wieger (1965, 77376) denote crawling or invertebrate animals; similarly 21 out of the 22 examples (95%) of the GHOST class (page 808) denote ghosts or spirits.', 'As we shall argue, the semantic class affiliation of a hanzi constitutes useful information in predicting its properties.', '3.', 'Previous Work.', 'There is a sizable literature on Chinese word segmentation: recent reviews include Wang, Su, and Mo (1990) and Wu and Tseng (1993).', 'Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information.', 'The present proposal falls into the last group.', 'Purely statistical approaches have not been very popular, and so far as we are aware earlier work by Sproat and Shih (1990) is the only published instance of such an approach.', 'In that work, mutual information was used to decide whether to group adjacent hanzi into two-hanzi words.', 'Mutual information was shown to be useful in the segmentation task given that one does not have a dictionary.', 'A related point is that mutual information is helpful in augmenting existing electronic dictionaries (cf.', '4 To be sure, it is not always true that a hanzi represents a syllable or that it represents a morpheme.', ""For example, in Northern Mandarin dialects there is a morpheme -r that attaches mostly to nouns, and which is phonologically incorporated into the syllable to which it attaches: thus men2+r (door+R) 'door' is realized as mer2."", ""This is orthographically represented with a separate hanzi for -r, so that 'door' would be written with two hanzi, and in this case the hanzi for -r does not represent a syllable."", ""Similarly, there is no compelling evidence that either of the syllables of bin1lang2 'betelnut' represents a morpheme, since neither can occur in any context without the other: more 
likely bin1lang2 is a disyllabic morpheme."", '(See Sproat and Shih 1995.)', 'However, the characterization given in the main body of the text is correct sufficiently often to be useful.', 'Church and Hanks [1989]), and we have used lists of character pairs ranked by mutual information to expand our own dictionary.', 'Nonstochastic lexical-knowledge-based approaches have been much more numerous.', 'Two issues distinguish the various proposals.', 'The first concerns how to deal with ambiguities in segmentation.', 'The second concerns the methods used (if any) to extend the lexicon beyond the static list of entries provided by the machine-readable dictionary upon which it is based.', 'The most popular approach to dealing with segmentation ambiguities is the maximum matching method, possibly augmented with further heuristics.', 'This method, one instance of which we term the ""greedy algorithm"" in our evaluation of our own system in Section 5, involves starting at the beginning (or end) of the sentence, finding the longest word starting (ending) at that point, and then repeating the process starting at the next (previous) hanzi until the end (beginning) of the sentence is reached.', 'Papers that use this method or minor variants thereof include Liang (1986), Li et al. (1991), Gu and Mao (1994), and Nie, Jin, and Hannan (1994).', 'The simplest version of the maximum matching algorithm effectively deals with ambiguity by ignoring it, since the method is guaranteed to produce only one segmentation.', 'Methods that allow multiple segmentations must provide criteria for choosing the best segmentation.', 'Some approaches depend upon some form of constraint satisfaction based on syntactic or semantic features (e.g., Yeh and Lee [1991], which uses a unification-based approach).', 'Others depend upon various lexical heuristics: for example Chen and Liu (1992) attempt to balance the length of words in a three-word window, favoring 
segmentations that give approximately equal length for each word.', 'Methods for expanding the dictionary include, of course, morphological rules, rules for segmenting personal names, as well as numeral sequences, expressions for dates, and so forth (Chen and Liu 1992; Wang, Li, and Chang 1992; Chang and Chen 1993; Nie, Jin, and Hannan 1994).', 'Lexical-knowledge-based approaches that include statistical information generally presume that one starts with all possible segmentations of a sentence, and picks the best segmentation from the set of possible segmentations using a probabilistic or cost-based scoring mechanism.', 'Approaches differ in the algorithms used for scoring and selecting the best path, as well as in the amount of contextual information used in the scoring process.', 'The simplest approach involves scoring the various analyses by costs based on word frequency, and picking the lowest cost path; variants of this approach have been described in Chang, Chen, and Chen (1991) and Chang and Chen (1993).', 'More complex approaches such as the relaxation technique have been applied to this problem by Fan and Tsai (1988).', 'Note that Chang, Chen, and Chen (1991), in addition to word-frequency information, include a constraint-satisfaction model, so their method is really a hybrid approach.', 'Several papers report the use of part-of-speech information to rank segmentations (Lin, Chiang, and Su 1993; Peng and Chang 1993; Chang and Chen 1993); typically, the probability of a segmentation is multiplied by the probability of the tagging(s) for that segmentation to yield an estimate of the total probability for the analysis.', 'Statistical methods seem particularly applicable to the problem of unknown-word identification, especially for constructions like names, where the linguistic constraints are minimal, and where one therefore wants to know not only that a particular sequence of hanzi might be a name, but that it is likely to be a name with some 
probability.', 'Several systems propose statistical methods for handling unknown words (Chang et al. 1992; Lin, Chiang, and Su 1993; Peng and Chang 1993).', 'Some of these approaches (e.g., Lin, Chiang, and Su [1993]) attempt to identify unknown words, but do not actually tag the words as belonging to one or another class of expression.', 'This is not ideal for some applications, however.', 'For instance, for TTS it is necessary to know that a particular sequence of hanzi is of a particular category because that knowledge could affect the pronunciation; consider, for example, the issues surrounding the pronunciation of gan1/qian2 discussed in Section 1.', 'Following Sproat and Shih (1990), performance for Chinese segmentation systems is generally reported in terms of the dual measures of precision and recall.5 It is fairly standard to report precision and recall scores in the mid to high 90% range.', 'However, it is almost universally the case that no clear definition of what constitutes a ""correct"" segmentation is given, so these performance measures are hard to evaluate.', 'Indeed, as we shall show in Section 5, even human judges differ when presented with the task of segmenting a text into words, so a definition of the criteria used to determine that a given segmentation is correct is crucial before one can interpret such measures.', 'In a few cases, the criteria for correctness are made more explicit.', 'For example Chen and Liu (1992) report precision and recall rates of over 99%, but this counts only the words that occur in the test corpus that also occur in their dictionary.', 'Besides the lack of a clear definition of what constitutes a correct segmentation for a given Chinese sentence, there is the more general issue that the test corpora used in these evaluations differ from system to system, so meaningful comparison between systems is rendered even more difficult.', 'The major problem for all segmentation systems remains the coverage 
afforded by the dictionary and the lexical rules used to augment the dictionary to deal with unseen words.', 'The dictionary sizes reported in the literature range from 17,000 to 125,000 entries, and it seems reasonable to assume that the coverage of the base dictionary constitutes a major factor in the performance of the various approaches, possibly more important than the particular set of methods used in the segmentation.', 'Furthermore, even the size of the dictionary per se is less important than the appropriateness of the lexicon to a particular test corpus: as Fung and Wu (1994) have shown, one can obtain substantially better segmentation by tailoring the lexicon to the corpus to be segmented.', 'Chinese word segmentation can be viewed as a stochastic transduction problem.', 'More formally, we start by representing the dictionary D as a Weighted Finite State Transducer (WFST) (Pereira, Riley, and Sproat 1994).', 'Let H be the set of hanzi, p be the set of pinyin syllables with tone marks, and P be the set of grammatical part-of-speech labels.', 'Then each arc of D maps either from an element of H to an element of p, or from ε (i.e., the empty string) to an element of P. More specifically, each word is represented in the dictionary as a sequence of arcs, starting from the initial state of D and labeled with an element of H x p, which is terminated with a weighted arc labeled with an element of ε x P. The weight represents the estimated cost (negative log probability) of the word.', 'Next, we represent the input sentence as an unweighted finite-state acceptor (FSA) I over H. 
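This setup reduces segmentation to a cheapest-path search over word costs. As a rough sketch (not the paper's WFST implementation), the same computation can be written as an ordinary dynamic program over unigram word costs in Python; the toy lexicon below mirrors the abstract A/B/C/D example discussed later in the text, with its tags and costs, and the function names are invented for illustration:

```python
import math

# Toy dictionary mirroring the abstract example in the text: each entry maps
# an "input character" string to (part-of-speech tag, cost), where cost is a
# negative log probability. The real system encodes this as a WFST.
DICT = {
    "AB": ("nc", 4.0),
    "ABC": ("jj", 6.0),
    "CD": ("vb", 5.0),
    "D": ("nc", 5.0),
}

def best_segmentation(sentence):
    """Return (total_cost, [(word, tag), ...]) for the cheapest segmentation.

    Equivalent in effect to taking the best path in Id(I) composed with the
    transitive closure of the dictionary transducer D (a Viterbi search)."""
    n = len(sentence)
    best = [math.inf] * (n + 1)   # best[i] = cheapest cost of sentence[:i]
    back = [None] * (n + 1)       # backpointer: (start, word, tag)
    best[0] = 0.0
    for i in range(n):
        if math.isinf(best[i]):
            continue
        for j in range(i + 1, n + 1):
            word = sentence[i:j]
            if word in DICT:
                tag, cost = DICT[word]
                if best[i] + cost < best[j]:
                    best[j] = best[i] + cost
                    back[j] = (i, word, tag)
    if math.isinf(best[n]):
        return math.inf, []
    path, i = [], n
    while i > 0:
        start, word, tag = back[i]
        path.append((word, tag))
        i = start
    return best[n], list(reversed(path))

cost, words = best_segmentation("ABCD")
print(cost, words)  # 9.0 [('AB', 'nc'), ('CD', 'vb')]
```

On the input ABCD this selects AB/nc plus CD/vb at total cost 9.0 (beating ABC/jj plus D/nc at 11.0), the same segmentation the best-path computation chooses in the worked example.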
Let us assume the existence of a function Id, which takes as input an FSA A, and produces as output a transducer that maps all and only the strings of symbols accepted by A to themselves (Kaplan and Kay 1994).', 'We can then define the best segmentation to be the cheapest or best path in Id(I) o D* (i.e., Id(I) composed with the transitive closure of D).6 Consider the abstract example illustrated in Figure 2.', '5 Recall that precision is defined to be the number of correct hits divided by the total number of items selected, and that recall is defined to be the number of correct hits divided by the number of items that should have been selected.', 'In this example there are four ""input characters,"" A, B, C and D, and these map respectively to four ""pronunciations"" a, b, c and d. Furthermore, there are four ""words"" represented in the dictionary.', 'These are shown, with their associated costs, as follows: AB/nc 4.0, ABC/jj 6.0, CD/vb 5.0, D/nc 5.0.', 'The minimal dictionary encoding this information is represented by the WFST in Figure 2(a).', 'An input ABCD can be represented as an FSA as shown in Figure 2(b).', 'This FSA I can be segmented into words by composing Id(I) with D*, to form the WFST shown in Figure 2(c), then selecting the best path through this WFST to produce the WFST in Figure 2(d).', 'This WFST represents the segmentation of the text into the words AB and CD, word boundaries being marked by arcs mapping between ε and part-of-speech labels.', 'Since the segmentation corresponds to the sequence of words that has the lowest summed unigram cost, the segmenter under discussion here is a zeroth-order model.', 'It is important to bear in mind, though, that this is not an inherent limitation of the model.', 'For example, it is well-known that one can build a finite-state bigram (word) model by simply assigning a state Si to each word Wi in the vocabulary, and having (word) arcs leaving that state weighted such that for each Wj and corresponding 
arc aj leaving Si, the cost on aj is the bigram cost of WiWj.', '(Costs for unseen bigrams in such a scheme would typically be modeled with a special backoff state.)', 'In Section 6 we discuss other issues relating to how higher-order language models could be incorporated into the model.', '4.1 Dictionary Representation.', 'As we have seen, the lexicon of basic words and stems is represented as a WFST; most arcs in this WFST represent mappings between hanzi and pronunciations, and are costless.', 'Each word is terminated by an arc that represents the transduction between ε and the part of speech of that word, weighted with an estimated cost for that word.', 'The cost is computed as follows, where N is the corpus size and f is the frequency: cost = -log(f/N). (1)', 'Besides actual words from the base dictionary, the lexicon contains all hanzi in the Big 5 Chinese code,7 with their pronunciation(s), plus entries for other characters that can be found in Chinese text, such as Roman letters, numerals, and special symbols.', 'Note that hanzi that are not grouped into dictionary words (and are not identified as single-hanzi words), or into one of the other categories of words discussed in this paper, are left unattached and tagged as unknown words.', 'Other strategies could readily be implemented, though, such as a maximal-grouping strategy (as suggested by one reviewer of this paper); or a pairwise-grouping strategy, whereby long sequences of unattached hanzi are grouped into two-hanzi words (which may have some prosodic motivation).', 'We have not to date explored these various options.', '6 As a reviewer has pointed out, it should be made clear that the function for computing the best path is an instance of the Viterbi algorithm.', '7 Big 5 is the most popular Chinese character coding standard in use in Taiwan and Hong Kong.', 'It is based on the traditional character set rather than the simplified character set used in Singapore and Mainland China.', '[Figure 2: An abstract example illustrating the segmentation algorithm.]', 'The transitive closure of the dictionary in (a) is composed with Id(input) (b) to form the WFST (c).', 'The segmentation chosen is the best path through the WFST, shown in (d).', '(In this figure eps is ε.)', 'Word frequencies are estimated by a re-estimation procedure that involves applying the segmentation algorithm presented here to a corpus of 20 million words,8 using', '8 Our training corpus was drawn from a larger corpus of mixed-genre text consisting mostly of newspaper material, but also including kungfu fiction, Buddhist tracts, and scientific material.', 'This larger corpus was kindly provided to us by United Informatics Inc., R.O.C. 
a set of initial estimates of the word frequencies.9 In this re-estimation procedure only the entries in the base dictionary were used: in other words, derived words not in the base dictionary and personal and foreign names were not used.', 'The best analysis of the corpus is taken to be the true analysis, the frequencies are re-estimated, and the algorithm is repeated until it converges.', 'Clearly this is not the only way to estimate word frequencies, however, and one could consider applying other methods: in particular, since the problem is similar to the problem of assigning part-of-speech tags to an untagged corpus given a lexicon and some initial estimate of the a priori probabilities for the tags, one might consider a more sophisticated approach such as that described in Kupiec (1992); one could also use methods that depend on a small hand-tagged seed corpus, as suggested by one reviewer.', 'In any event, to date, we have not compared different methods for deriving the set of initial frequency estimates.', 'Note also that the costs currently used in the system are actually string costs, rather than word costs.', ""This is because our corpus is not annotated, and hence does not distinguish between the various words represented by homographs, such as the hanzi that could be jiang1 (adv) 'be about to' or jiang4 (nc) '(military) general,' as in xiao3jiang4 'little general.'"", 'In such cases we assign all of the estimated probability mass to the form with the most likely pronunciation (determined by inspection), and assign a very small probability (a very high cost, arbitrarily chosen to be 40) to all other variants.', 'In the case of this hanzi, the most common usage is as an adverb with the pronunciation jiang1, so that variant is assigned the estimated cost of 5.98, and a high cost is assigned to nominal usage with the pronunciation jiang4.', 'The less favored reading may be selected in certain contexts, however; in the case of this hanzi, for example, the nominal reading jiang4 
will be selected if there is morphological information, such as a following plural affix men0, that renders the nominal reading likely, as we shall see in Section 4.3.', ""Figure 3 shows a small fragment of the WFST encoding the dictionary, containing both entries for the hanzi just discussed, zhong1hua2 min2guo2 (China Republic) 'Republic of China,' and nan2gua1 'pumpkin.'"", ""4.2 A Sample Segmentation Using Only Dictionary Words Figure 4 shows two possible paths from the lattice of possible analyses of the input sentence ri4wen2 zhang1yu2 zen3me0 shuo1 'How do you say octopus in Japanese?' previously shown in Figure 1."", ""As noted, this sentence consists of four words, namely ri4wen2 'Japanese,' zhang1yu2 'octopus,' zen3me0 'how,' and shuo1 'say.'"", ""As indicated in Figure 1(c), apart from this correct analysis, there is also the analysis taking ri4 as a word (e.g., a common abbreviation for Japan), along with wen2zhang1 'essay,' and yu2 'fish.'"", 'Both of these analyses are shown in Figure 4; fortunately, the correct analysis is also the one with the lowest cost, so it is this analysis that is chosen.', '4.3 Morphological Analysis.', 'The method just described segments dictionary words, but as noted in Section 1, there are several classes of words that should be handled that are not found in a standard dictionary.', 'One class comprises words derived by productive morphological processes, such as plural noun formation using the suffix men0.', '(Other classes handled by the current system are discussed in Section 5.)', 'The morphological analysis itself can be handled using well-known techniques from finite-state morphology (Koskenniemi 1983; Antworth 1990; Tzoukermann and Liberman 1990; Karttunen, Kaplan, and Zaenen 1992; Sproat 1992); we represent the fact that men0 attaches to nouns by allowing ε-transitions from the final states of all noun entries to the initial state of the sub-WFST representing men0.', '9 The initial estimates are derived from the frequencies in the corpus of the strings of hanzi making up each word in the lexicon, whether or not each string is actually an instance of the word in question.', '[Figure 3: Partial Chinese lexicon (NC = noun; NP = proper noun).]', ""Figure 4 Input lattice (top) and two segmentations (bottom) of the sentence 'How do you say octopus in Japanese?'."", 'A non-optimal analysis is shown with dotted lines in the bottom frame.', 'However, for our purposes it is not sufficient to represent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word.', 'For derived words that occur in our corpus we can estimate these costs as we would the costs for an underived dictionary entry.', ""So, xue2sheng1+men0 (student+PL) 'students' occurs and we estimate its cost at 11.43; similarly we estimate the cost of jiang4+men0 (general+PL) 'generals' (as in xiao3jiang4+men0 'little generals'), at 15.02."", ""But we also need an estimate of the probability for a non-occurring though possible plural form like nan2gua1+men0 'pumpkins.'"", '10 Here we use the Good-Turing estimate (Baayen 1989; Church and Gale 1991), whereby the aggregate probability of previously unseen 
instances of a construction is estimated as n1/N, where N is the total number of observed tokens and n1 is the number of types observed only once.', 'Let us notate the set of previously unseen, or novel, members of a category X as unseen(X); thus, novel members of the set of words derived in men0 will be denoted unseen(men0).', 'For men0 the Good-Turing estimate just discussed gives us an estimate of p(unseen(men0) | men0), the probability of observing a previously unseen instance of a construction in men0 given that we know that we have a construction in men0.', 'This Good-Turing estimate of p(unseen(men0) | men0) can then be used in the normal way to define the probability of finding a novel instance of a construction in men0 in a text: p(unseen(men0)) = p(unseen(men0) | men0) p(men0). Here p(men0) is just the probability of any construction in men0 as estimated from the frequency of such constructions in the corpus.', 'Finally, assuming a simple bigram backoff model, we can derive the probability estimate for the particular unseen word nan2gua1+men0 as the product of the probability estimate for nan2gua1, and the probability estimate just derived for unseen plurals in men0: p(nan2gua1+men0) = p(nan2gua1) p(unseen(men0)).', 'The cost estimate, cost(nan2gua1+men0), is computed in the obvious way by summing the negative log probabilities of nan2gua1 and unseen(men0).', 'Figure 5 shows how this model is implemented as part of the dictionary WFST.', 'There is a (costless) transition between the NC node and men0.', 'The transition from men0 to a final state transduces ε to the grammatical tag PL with cost cost(unseen(men0)): cost(nan2gua1+men0) = cost(nan2gua1) + cost(unseen(men0)), as desired.', ""For the seen word jiang4+men0 'generals,' there is an ε:NC transduction from jiang4 to the node preceding men0; this arc has cost cost(jiang4+men0) - cost(unseen(men0)), so that the cost of the whole path is the desired cost(jiang4+men0)."", 'This representation gives jiang4+men0 an appropriate morphological decomposition, preserving 
information that would be lost by simply listing jiang4+men0 as an unanalyzed form.', 'Note that the backoff model assumes that there is a positive correlation between the frequency of a singular noun and its plural.', 'An analysis of nouns that occur in both the singular and the plural in our database reveals that there is indeed a slight but significant positive correlation (R2 = 0.20, p < 0.005); see Figure 6.', 'This suggests that the backoff model is as reasonable a model as we can use in the absence of further information about the expected cost of a plural form.', '10 Chinese speakers may object to this form, since the suffix men0 (PL) is usually restricted to attaching to terms denoting human beings.', ""However, it is possible to personify any noun, so in children's stories or fables, nan2gua1+men0 'pumpkins' is by no means impossible."", '[Figure 5: An example of affixation: the plural affix.]', '4.4 Chinese Personal Names.', 'Full Chinese personal names are in one respect simple: they are always of the form family+given.', 'The family name set is restricted: there are a few hundred single-hanzi family names, and about ten double-hanzi ones.', 'Given names are most commonly two hanzi long, occasionally one hanzi long: there are thus four possible name types, which can be described by a simple set of context-free rewrite rules such as the following: 1. word => name; 2. name => 1hanzi-family 2hanzi-given; 3. name => 1hanzi-family 1hanzi-given; 4. name => 2hanzi-family 2hanzi-given; 5. name => 2hanzi-family 1hanzi-given; 6. 1hanzi-family => hanzi_i; 7. 2hanzi-family => hanzi_i hanzi_j; 8. 1hanzi-given => hanzi_i; 9. 2hanzi-given => hanzi_i hanzi_j.', 'The difficulty is that given names can consist, in principle, of any hanzi or pair of hanzi, so 
the possible given names are limited only by the total number of hanzi, though some hanzi are certainly far more likely than others.', 'For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name.', 'We can model this probability straightforwardly enough with a probabilistic version of the grammar just given, which would assign probabilities to the individual rules.', '[Figure 6: Plot of log frequency of base noun against log frequency of plural noun (R2 = 0.20, p < 0.005).]', 'For example, given a sequence F1G1G2, where F1 is a legal single-hanzi family name, and G1 and G2 are hanzi, we can estimate the probability of the sequence being a name as the product of: the probability that a word chosen randomly from a text will be a name, p(rule 1); the probability that the name is of the form 1hanzi-family 2hanzi-given, p(rule 2); the probability that the family name is the particular hanzi F1, p(rule 6); and the probability that the given name consists of the particular hanzi G1 and G2, p(rule 9).', 'This model is essentially the one proposed in Chang et al. (1992).', ""The first probability is estimated from a name count in a text database, and the rest of the probabilities are estimated from a large list of personal names.11 Note that in Chang et al.'s model the p(rule 9) is estimated as the product of the probability of finding G1 in the first position of a two-hanzi given name and the probability of finding G2 in the second position of a two-hanzi given name, and we use essentially the same estimate here, with some modifications as described later on."", 'This model is easily incorporated into the segmenter by building a WFST restricting the names to the four licit types, with costs on the 
arcs for any particular name summing to an estimate of the cost of that name.', ""This WFST is then summed with the WFST implementing the dictionary and morphological rules, and the transitive closure of the resulting transducer is computed; see Pereira, Riley, and Sproat (1994) for an explanation of the notion of summing WFSTs.12"", ""Conceptual Improvements over Chang et al.'s Model."", ""There are two weaknesses in Chang et al.'s model, which we improve upon."", 'First, the model assumes independence between the first and second hanzi of a double given name.', ""Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model."", 'As a partial solution, for pairs of hanzi that co-occur sufficiently often in our namelists, we use the estimated bigram cost, rather than the independence-based cost.', 'The second weakness is purely conceptual, and probably does not affect the performance of the model.', 'For previously unseen hanzi in given names, Chang et al. 
assign a uniform small cost; but we know that some unseen hanzi are merely accidentally missing, whereas others are missing for a reason, for example, because they have a bad connotation.', 'As we have noted in Section 2, the general semantic class to which a hanzi belongs is often predictable from its semantic radical.', 'Not surprisingly some semantic classes are better for names than others: in our corpora, many names are picked from the GRASS class but very few from the SICKNESS class.', 'Other good classes include JADE and GOLD; other bad classes are DEATH and RAT.', 'We can better predict the probability of an unseen hanzi occurring in a name by computing a within-class Good-Turing estimate for each radical class.', 'Assuming unseen objects within each class are equiprobable, their probabilities are given by the Good-Turing theorem as: p_0^cls ∝ E(n_1^cls) / (N * E(N_0^cls)) (2) where p_0^cls is the probability of one unseen hanzi in class cls, E(n_1^cls) is the expected number of hanzi in cls seen once, N is the total number of hanzi, and E(N_0^cls) is the expected number of unseen hanzi in class cls.', 'The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires.', 'In the denomi', '11 We have two such lists, one containing about 17,000 full names, and another containing frequencies of hanzi in the various name positions, derived from a million names.', ""12 One class of full personal names that this characterization does not cover is married women's names, where the husband's family name is optionally prepended to the woman's full name; thus xu3lin2-yan2hai3 would represent the name that Ms. 
Lin Yanhai would take if she married someone named Xu."", 'This style of naming is never required and seems to be losing currency.', 'It is formally straightforward to extend the grammar to include these names, though it does increase the likelihood of overgeneration and we are unaware of any working systems that incorporate this type of name.', 'We of course also fail to identify, by the methods just described, given names used without their associated family name.', 'This is in general very difficult, given the extremely free manner in which Chinese given names are formed, and given that in these cases we lack even a family name to give the model confidence that it is identifying a name.', 'Table 1 The cost as a novel given name (second position) for hanzi from various radical classes: JADE 14.98; GOLD 15.52; GRASS 15.76; SICKNESS 16.25; DEATH 16.30; RAT 16.42.', 'nator, N_0^cls can be measured well by counting, and we replace the expectation by the observation.', 'In the numerator, however, the counts of n_1^cls are quite irregular, including several zeros (e.g., RAT, none of whose members were seen).', 'However, there is a strong relationship between n_1^cls and the number of hanzi in the class.', 'For E(n_1^cls), then, we substitute a smooth S against the number of class elements.', 'This smooth guarantees that there are no zeroes estimated.', 'The final estimating equation is then: p_0^cls ∝ S^cls / (N * N_0^cls). (3)', 'Since the total of all these class estimates was about 10% off from the Turing estimate n1/N for the probability of all unseen hanzi, we renormalized the estimates so that they would sum to n1/N.', 'This class-based model gives reasonable results: for six radical classes, Table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double given name.', 'Note that the good classes JADE, GOLD and GRASS have lower costs than the bad classes SICKNESS, DEATH and RAT, as desired, so the trend observed for the 
results of this method is in the right direction.', '4.5 Transliterations of Foreign Words.', 'Foreign names are usually transliterated using hanzi whose sequential pronunciation mimics the source language pronunciation of the name.', 'Since foreign names can be of any length, and since their original pronunciation is effectively unlimited, the identiÂ\xad fication of such names is tricky.', ""Fortunately, there are only a few hundred hanzi that are particularly common in transliterations; indeed, the commonest ones, such as E. bal, m er3, and iij al are often clear indicators that a sequence of hanzi containing them is foreign: even a name like !:i*m xia4mi3-er3 'Shamir,' which is a legal ChiÂ\xad nese personal name, retains a foreign flavor because of liM."", 'As a first step towards modeling transliterated names, we have collected all hanzi occurring more than once in the roughly 750 foreign names in our dictionary, and we estimate the probabilÂ\xad ity of occurrence of each hanzi in a transliteration (pTN(hanzi;)) using the maximum likelihood estimate.', 'As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (PTN).', 'Finally, we model the probability of a new transliterated name as the product of PTN and PTN(hanzi;) for each hanzi; in the putative name.13 The foreign name model is implemented as an WFST, which is then summed with the WFST implementing the dictionary, morpho 13 The current model is too simplistic in several respects.', 'For instance, the common ""suffixes,"" -nia (e.g.,.', 'Virginia) and -sia are normally transliterated as fbSi!', 'ni2ya3 and @5:2 xilya3, respectively.', 'The interdependence between fb or 1/!i, and 5:2 is not captured by our model, but this could easily be remedied.', 'logical rules, and personal names; the transitive closure of the resulting machine is then computed.', 'In this section we present a partial evaluation of the current system, in three parts.', 
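The transliterated-name score just described is the product of PTN (the probability of finding a transliterated name at all) and pTN(hanzi_i) for each hanzi in the putative name, with pTN estimated by maximum likelihood from the hanzi occurring in the roughly 750 foreign names of the dictionary. A minimal sketch of that scoring, using pinyin strings as stand-ins for hanzi and invented toy data (the function names and counts are illustrative, not from the original system):

```python
from collections import Counter
from math import log

def train_translit_probs(foreign_names):
    """MLE estimate pTN(hanzi) from the hanzi occurring in a list of
    transliterated names (each name given as a sequence of hanzi)."""
    counts = Counter(h for name in foreign_names for h in name)
    total = sum(counts.values())
    return {h: c / total for h, c in counts.items()}

def translit_cost(name, probs, p_tn):
    """Cost (negative log probability) of a putative transliterated
    name: -log of the product of PTN and pTN(hanzi_i) over the name."""
    p = p_tn
    for h in name:
        if h not in probs:
            return float("inf")  # hanzi never seen in any transliteration
        p *= probs[h]
    return -log(p)

# Toy dictionary of transliterations, written in pinyin for legibility.
names = [["ba1", "er3"], ["mi3", "er3"], ["ya3", "er3"]]
probs = train_translit_probs(names)
cost = translit_cost(["mi3", "er3"], probs, p_tn=0.001)
```

In the system itself these costs live on the arcs of a WFST that is summed with the dictionary transducer, so the example only illustrates the arithmetic of the model, not its finite-state encoding.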
""The first is an evaluation of the system's ability to mimic humans at the task of segmenting text into word-sized units; the second evaluates the proper-name identification; the third measures the performance on morphological analysis."", 'To date we have not done a separate evaluation of foreign-name recognition.', 'Evaluation of the Segmentation as a Whole.', 'Previous reports on Chinese segmentation have invariably cited performance either in terms of a single percent-correct score, or else a single precision-recall pair.', 'The problem with these styles of evaluation is that, as we shall demonstrate, even human judges do not agree perfectly on how to segment a given text.', 'Thus, rather than give a single evaluative score, we prefer to compare the performance of our method with the judgments of several human subjects.', 'To this end, we picked 100 sentences at random containing 4,372 total hanzi from a test corpus.14 (There were 487 marks of punctuation in the test sentences, including the sentence-final periods, meaning that the average inter-punctuation distance was about 9 hanzi.)', 'We asked six native speakers-three from Taiwan (TlT3), and three from the Mainland (M1M3)-to segment the corpus.', 'Since we could not bias the subjects towards a particular segmentation and did not presume linguistic sophistication on their part, the instructions were simple: subjects were to mark all places they might plausibly pause if they were reading the text aloud.', ""An examination of the subjects' bracketings confirmed that these instructions were satisfactory in yielding plausible word-sized units."", '(See also Wu and Fung [1994].)', 'Various segmentation approaches were then compared with human performance: 1.', 'A greedy algorithm (or maximum-matching algorithm), GR: proceed through the sentence, taking the longest match with a dictionary entry at each point.', '2.', 'An anti-greedy algorithm, AG: instead of the longest match, take the.', 'shortest match at each 
point.', '3.', 'The method being described, henceforth ST.', 'Two measures that can be used to compare judgments are: 1.', ""Precision: for each pair of judges, consider one judge as the standard, computing the precision of the other's judgments relative to this standard."", '2.', ""Recall: for each pair of judges, consider one judge as the standard, computing the recall of the other's judgments relative to this standard."", 'Clearly, for judges J1 and J2, taking J1 as the standard and computing the precision and recall for J2 yields the same results as taking J2 as the standard and computing for J1, respectively, the recall and precision.', '14 All evaluation materials, with the exception of those used for evaluating personal names, were drawn from the subset of the United Informatics corpus not used in the training of the models.', 'Table 2: Similarity matrix for segmentation judgments. AG: GR 0.70, ST 0.70, M1 0.43, M2 0.42, M3 0.60, T1 0.60, T2 0.62, T3 0.59. GR: ST 0.99, M1 0.62, M2 0.64, M3 0.79, T1 0.82, T2 0.81, T3 0.72. ST: M1 0.64, M2 0.67, M3 0.80, T1 0.84, T2 0.82, T3 0.74. M1: M2 0.77, M3 0.69, T1 0.71, T2 0.69, T3 0.70. M2: M3 0.72, T1 0.73, T2 0.71, T3 0.70. M3: T1 0.89, T2 0.87, T3 0.80. T1: T2 0.88, T3 0.82. T2: T3 0.78.', 'We therefore used the arithmetic mean of each interjudge precision-recall pair as a single measure of interjudge similarity.', 'Table 2 shows these similarity measures.', 'The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the interhuman agreement.15 One can better visualize the precision-recall similarity matrix by producing from that matrix a distance matrix, computing a classical metric multidimensional scaling (Torgerson 1958; Becker, Chambers, and Wilks 1988) on that distance matrix, and plotting the first two most significant dimensions.', 'The result of this is shown in Figure 7.', 'The horizontal axis in this plot represents the most significant dimension, which explains 62% of the variation.', 'In addition to the automatic methods AG, GR, and ST just discussed, we also added to the plot the values for the current algorithm using only dictionary entries (i.e., no productively derived words or names).', 'This is to allow for fair comparison between the statistical method and GR, which is also purely dictionary-based.', 'As can be seen, GR and this ""pared-down"" statistical method perform quite similarly, though the statistical method is still slightly better.16 AG clearly performs much less like humans than these methods, whereas the full statistical algorithm, including morphological derivatives and names, performs most closely to humans among the automatic methods.', 'It can also be seen clearly in this plot that two of the Taiwan speakers cluster very closely together, and the third Taiwan speaker is also close in the most significant dimension (the x axis).', 'Two of the Mainlanders also cluster close together but, interestingly, not particularly close to the Taiwan speakers; the third 
Mainlander is much more similar to the Taiwan speakers.', 'The breakdown of the different types of words found by ST in the test corpus is given in Table 3.', 'Clearly the percentage of productively formed words is quite small (for this particular corpus), meaning that dictionary entries are covering most of the cases.', '15 GR is .73, or 96%.', '16 As one reviewer points out, one problem with the unigram model chosen here is that there is still a tendency to pick a segmentation containing fewer words.', 'That is, given a choice between segmenting a sequence abc into abc and ab, c, the former will always be picked so long as its cost does not exceed the summed costs of ab and c: while it is possible for abc to be so costly as to preclude the larger grouping, this will certainly not usually be the case.', 'In this way, the method reported on here will necessarily be similar to a greedy method, though of course not identical.', 'As the reviewer also points out, this is a problem that is shared by, e.g., probabilistic context-free parsers, which tend to pick trees with fewer nodes.', 'The question is how to normalize the probabilities in such a way that smaller groupings have a better shot at winning.', 'This is an issue that we have not addressed at the current stage of our research.', 'Figure 7: Classical metric multidimensional scaling of the distance matrix, showing the two most significant dimensions; the plotted points are the antigreedy, greedy, current-method, and dictionary-only algorithms, together with the Taiwan and Mainland judges, against Dimension 1 (62%) and Dimension 2.', 'The percentage scores on the axis labels represent the amount of variation in the data explained by the dimension in question.', 'Table 3: Classes of words found by ST for the test corpus. Word type, N, %: Dictionary entries, 2,543, 97.47; Morphologically derived words, 3, 0.11; Foreign transliterations, 9, 0.34; Personal names, 54, 2.07.', 'Nonetheless, the results of the comparison with human judges demonstrate that there is mileage being gained by incorporating models of these types of words.', 'It may seem surprising to some readers that the interhuman agreement scores reported here are so low.', 'However, this result is consistent with the results of experiments discussed in Wu and Fung (1994).', 'Wu and Fung introduce an evaluation method they call nk-blind.', 'Under this scheme, n human judges are asked independently to segment a text.', 'Their results are then compared with the results of an automatic segmenter.', 'For a given ""word"" in the automatic segmentation, if at least k of the human judges agree that this is a word, then that word is considered to be correct.', 'For eight judges, ranging k between 1 and 8 corresponded to a precision score range of 90% to 30%, meaning that there were relatively few words (30% of those found by the automatic segmenter) on which all judges agreed, whereas most of the words found by the segmenter were such that at least one human judge agreed.', 'Proper-Name Identification.', 'To evaluate proper-name identification, we randomly selected 186 sentences containing 12,000 hanzi from our test corpus and segmented the text automatically, tagging personal names; note that for names, there is always a single unambiguous answer, unlike the more general question of which segmentation is correct.', 'The performance was 80.99% recall and 61.83% precision.', 'Interestingly, Chang et al. report 80.67% recall and 91.87% precision on an 11,000-word corpus: seemingly, our system finds as many names as their system, but with four times as many false hits.', ""However, we have reason to doubt Chang et al.'s performance claims."", 'Without using the same test corpus, direct comparison is obviously difficult; fortunately, Chang et al. 
include a list of about 60 sentence fragments that exemplify various categories of performance for their system.', 'The performance of our system on those sentences apÂ\xad peared rather better than theirs.', 'On a set of 11 sentence fragments-the A set-where they reported 100% recall and precision for name identification, we had 73% recall and 80% precision.', 'However, they list two sets, one consisting of 28 fragments and the other of 22 fragments, in which they had 0% recall and precision.', 'On the first of these-the B set-our system had 64% recall and 86% precision; on the second-the C set-it had 33% recall and 19% precision.', 'Note that it is in precision that our overÂ\xad all performance would appear to be poorer than the reported performance of Chang et al., yet based on their published examples, our system appears to be doing better precisionwise.', 'Thus we have some confidence that our own performance is at least as good as that of Chang et al.', '(1992).', ""In a more recent study than Chang et al., Wang, Li, and Chang (1992) propose a surname-driven, non-stochastic, rule-based system for identifying personal names.17 Wang, Li, and Chang also compare their performance with Chang et al.'s system."", 'Fortunately, we were able to obtain a copy of the full set of sentences from Chang et al. 
on which Wang, Li, and Chang tested their system, along with the output of their system.18 In what follows we will discuss all cases from this set where our performance on names differs from that of Wang, Li, and Chang.', 'Examples are given in Table 4.', 'In these examples, the names identified by the two systems (if any) are underlined; the sentence with the correct segmentation is boxed.19 The differences in performance between the two systems relate directly to three issues, which can be seen as differences in the tuning of the models, rather than repreÂ\xad senting differences in the capabilities of the model per se.', 'The first issue relates to the completeness of the base lexicon.', ""The Wang, Li, and Chang system fails on fragment (b) because their system lacks the word youlyoul 'soberly' and misinterpreted the thus isolated first youl as being the final hanzi of the preceding name; similarly our system failed in fragment (h) since it is missing the abbreviation i:lJI!"", ""tai2du2 'Taiwan Independence.'"", 'This is a rather important source of errors in name identifiÂ\xad cation, and it is not really possible to objectively evaluate a name recognition system without considering the main lexicon with which it is used.', ""17 They also provide a set of title-driven rules to identify names when they occur before titles such as $t. 1: xianlshengl 'Mr.' 
or i:l:itr!J tai2bei3 shi4zhang3 'Taipei Mayor.'"", 'Obviously, the presence of a title after a potential name N increases the probability that N is in fact a name.', 'Our system does not currently make use of titles, but it would be straightforward to do so within the finite-state framework that we propose.', '18 We are grateful to ChaoHuang Chang for providing us with this set.', ""Note that Wang, Li, and Chang's."", 'set was based on an earlier version of the Chang et a!.', 'paper, and is missing 6 examples from the A set.', ""19 We note that it is not always clear in Wang, Li, and Chang's examples which segmented words."", 'constitute names, since we have only their segmentation, not the actual classification of the segmented words.', 'Therefore in cases where the segmentation is identical between the two systems we assume that tagging is also identical.', 'Table 4 Differences in performance between our system and Wang, Li, and Chang (1992).', 'Our System Wang, Li, and Chang a. 1\\!f!IP Eflltii /1\\!f!J:P $1til I b. agm: I a m: c. 5 Bf is Bf 1 d. 
""*:t: w _t ff 1 ""* :t: w_tff 1 g., , Transliteration/Translation chen2zhongl-shenl qu3 \'music by Chen Zhongshen \' huang2rong2 youlyoul de dao4 \'Huang Rong said soberly\' zhangl qun2 Zhang Qun xian4zhang3 you2qingl shang4ren2 hou4 \'after the county president You Qing had assumed the position\' lin2 quan2 \'Lin Quan\' wang2jian4 \'Wang Jian\' oulyang2-ke4 \'Ouyang Ke\' yinl qi2 bu4 ke2neng2 rong2xu3 tai2du2 er2 \'because it cannot permit Taiwan Independence so\' silfa3-yuan4zhang3 lin2yang2-gang3 \'president of the Judicial Yuan, Lin Yanggang\' lin2zhangl-hu2 jiangl zuo4 xian4chang3 jie3shuol \'Lin Zhanghu will give an exÂ\xad planation live\' jin4/iang3 nian2 nei4 sa3 xia4 de jinlqian2 hui4 ting2zhi3 \'in two years the distributed money will stop\' gaoltangl da4chi2 ye1zi0 fen3 \'chicken stock, a tablespoon of coconut flakes\' you2qingl ru4zhu3 xian4fu3 lwu4 \'after You Qing headed the county government\' Table 5 Performance on morphological analysis.', 'Affix Pron Base category N found N missed (recall) N correct (precision) t,-,7 The second issue is that rare family names can be responsible for overgeneration, especially if these names are otherwise common as single-hanzi words.', ""For example, the Wang, Li, and Chang system fails on the sequence 1:f:p:]nian2 nei4 sa3 in (k) since 1F nian2 is a possible, but rare, family name, which also happens to be written the same as the very common word meaning 'year.'"", 'Our system fails in (a) because of$ shenl, a rare family name; the system identifies it as a family name, whereas it should be analyzed as part of the given name.', 'Finally, the statistical method fails to correctly group hanzi in cases where the individual hanzi comprising the name are listed in the dictionary as being relatively high-frequency single-hanzi words.', 'An example is in (i), where the system fails to group t;,f;?""$?t!: lin2yang2gang3 as a name, because all three hanzi can in principle be separate words (t;,f; lin2 \'wood\';?""$ yang2 
\'ocean\'; ?t!; gang3 \'harbor\').', 'In many cases these failures in recall would be fixed by having better estimates of the actual probÂ\xad abilities of single-hanzi words, since our estimates are often inflated.', ""A totally nonÂ\xad stochastic rule-based system such as Wang, Li, and Chang's will generally succeed in such cases, but of course runs the risk of overgeneration wherever the single-hanzi word is really intended."", 'Evaluation of Morphological Analysis.', 'In Table 5 we present results from small test corÂ\xad pora for the productive affixes handled by the current version of the system; as with names, the segmentation of morphologically derived words is generally either right or wrong.', ""The first four affixes are so-called resultative affixes: they denote some propÂ\xad erty of the resultant state of a verb, as in E7 wang4bu4-liao3 (forget-not-attain) 'cannot forget.'"", 'The last affix in the list is the nominal plural f, men0.20 In the table are the (typical) classes of words to which the affix attaches, the number found in the test corpus by the method, the number correct (with a precision measure), and the number missed (with a recall measure).', 'In this paper we have argued that Chinese word segmentation can be modeled efÂ\xad fectively using weighted finite-state transducers.', 'This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.', 'Other kinds of productive word classes, such as company names, abbreviations (termed fijsuolxie3 in Mandarin), and place names can easily be 20 Note that 7 in E 7 is normally pronounced as leO, but as part of a resultative it is liao3..', 'handled given appropriate models.', '(For some recent corpus-based work on Chinese abbreviations, see Huang, Ahrens, and Chen [1993].)', 'We have argued that the proposed method performs well.', 'However, 
some caveats are in order in comparing this method (or any method) with other approaches to segÂ\xad mentation reported in the literature.', 'First of all, most previous articles report perforÂ\xad mance in terms of a single percent-correct score, or else in terms of the paired measures of precision and recall.', 'What both of these approaches presume is that there is a sinÂ\xad gle correct segmentation for a sentence, against which an automatic algorithm can be compared.', 'We have shown that, at least given independent human judgments, this is not the case, and that therefore such simplistic measures should be mistrusted.', 'This is not to say that a set of standards by which a particular segmentation would count as correct and another incorrect could not be devised; indeed, such standards have been proposed and include the published PRCNSC (1994) and ROCLING (1993), as well as the unpublished Linguistic Data Consortium standards (ca.', 'May 1995).', 'However, until such standards are universally adopted in evaluating Chinese segmenters, claims about performance in terms of simple measures like percent correct should be taken with a grain of salt; see, again, Wu and Fung (1994) for further arguments supporting this conclusion.', 'Second, comparisons of different methods are not meaningful unless one can evalÂ\xad uate them on the same corpus.', 'Unfortunately, there is no standard corpus of Chinese texts, tagged with either single or multiple human judgments, with which one can compare performance of various methods.', 'One hopes that such a corpus will be forthÂ\xad coming.', 'Finally, we wish to reiterate an important point.', 'The major problem for our segÂ\xad menter, as for all segmenters, remains the problem of unknown words (see Fung and Wu [1994]).', 'We have provided methods for handling certain classes of unknown words, and models for other classes could be provided, as we have noted.', 'However, there will remain a large number of words that are not 
readily adduced to any producÂ\xad tive pattern and that would simply have to be added to the dictionary.', 'This implies, therefore, that a major factor in the performance of a Chinese segmenter is the quality of the base dictionary, and this is probably a more important factor-from the point of view of performance alone-than the particular computational methods used.', 'The method reported in this paper makes use solely of unigram probabilities, and is therefore a zeroeth-order model: the cost of a particular segmentation is estimated as the sum of the costs of the individual words in the segmentation.', 'However, as we have noted, nothing inherent in the approach precludes incorporating higher-order constraints, provided they can be effectively modeled within a finite-state framework.', 'For example, as Gan (1994) has noted, one can construct examples where the segmenÂ\xad tation is locally ambiguous but can be determined on the basis of sentential or even discourse context.', ""Two sets of examples from Gan are given in (1) and (2) (:::::: Gan's Appendix B, exx."", 'lla/llb and 14a/14b respectively).', 'In (1) the sequencema3lu4 cannot be resolved locally, but depends instead upon broader context; similarly in (2), the sequence :::tcai2neng2 cannot be resolved locally: 1.', ""(a) 1 § . ;m t 7 leO z h e 4 pil m a 3 lu 4 sh an g4 bi ng 4 t h i s CL (assi fier) horse w ay on sic k A SP (ec t) 'This horse got sick on the way' (b) 1§: . 
til y zhe4 tiao2 ma3lu4 hen3 shao3 this CL road very few 'Very few cars pass by this road' :$ chel jinglguo4 car pass by 2."", ""(a) I f f fi * fi :1 }'l ij 1§: {1M m m s h e n 3 m e 0 shi2 ho u4 wo 3 cai2 ne ng 2 ke4 fu 2 zh e4 ge 4 ku n4 w h a t ti m e I just be abl e ov er co m e thi s C L dif fic 'When will I be able to overcome this difficulty?'"", ""(b) 89 :1 t& tal de cai2neng2 hen3 he DE talent very 'He has great talent' f.b ga ol hig h While the current algorithm correctly handles the (b) sentences, it fails to handle the (a) sentences, since it does not have enough information to know not to group the sequences.ma3lu4 and?]cai2neng2 respectively."", ""Gan's solution depends upon a fairly sophisticated language model that attempts to find valid syntactic, semantic, and lexical relations between objects of various linguistic types (hanzi, words, phrases)."", 'An example of a fairly low-level relation is the affix relation, which holds between a stem morpheme and an affix morpheme, such as f1 -menD (PL).', 'A high-level relation is agent, which relates an animate nominal to a predicate.', 'Particular instances of relations are associated with goodness scores.', 'Particular relations are also consistent with particular hypotheses about the segmentation of a given sentence, and the scores for particular relations can be incremented or decremented depending upon whether the segmentations with which they are consistent are ""popular"" or not.', ""While Gan's system incorporates fairly sophisticated models of various linguistic information, it has the drawback that it has only been tested with a very small lexicon (a few hundred words) and on a very small test set (thirty sentences); there is therefore serious concern as to whether the methods that he discusses are scalable."", 'Another question that remains unanswered is to what extent the linguistic information he considers can be handled-or at least approximated-by finite-state language models, and therefore 
could be directly interfaced with the segmentation model that we have presented in this paper.', 'For the examples given in (1) and (2) this certainly seems possible.', 'Consider first the examples in (2).', ""The segmenter will give both analyses :1 cai2 neng2 'just be able,' and ?]cai2neng2 'talent,' but the latter analysis is preferred since splitting these two morphemes is generally more costly than grouping them."", ""In (2a), we want to split the two morphemes since the correct analysis is that we have the adverb :1 cai2 'just,' the modal verb neng2 'be able' and the main verb R: Hke4fu2 'overcome'; the competing analysis is, of course, that we have the noun :1 cai2neng2 'talent,' followed by }'lijke4fu2 'overcome.'"", 'Clearly it is possible to write a rule that states that if an analysis Modal+ Verb is available, then that is to be preferred over Noun+ Verb: such a rule could be stated in terms of (finite-state) local grammars in the sense of Mohri (1993).', ""Turning now to (1), we have the similar problem that splitting.into.ma3 'horse' andlu4 'way' is more costly than retaining this as one word .ma3lu4 'road.'"", ""However, there is again local grammatical information that should favor the split in the case of (1a): both .ma3 'horse' and .ma3 lu4 are nouns, but only .ma3 is consistent with the classifier pil, the classifier for horses.21 By a similar argument, the preference for not splitting , lm could be strengthened in (lb) by the observation that the classifier 1'1* tiao2 is consistent with long or winding objects like , lm ma3lu4 'road' but not with,ma3 'horse.'"", 'Note that the sets of possible classifiers for a given noun can easily be encoded on that noun by grammatical features, which can be referred to by finite-state grammatical rules.', ""Thus, we feel fairly confident that for the examples we have considered from Gan's study a solution can be incorporated, or at least approximated, within a finite-state framework."", 'With regard to purely 
morphological phenomena, certain processes are not hanÂ\xad dled elegantly within the current framework Any process involving reduplication, for instance, does not lend itself to modeling by finite-state techniques, since there is no way that finite-state networks can directly implement the copying operations required.', 'Mandarin exhibits several such processes, including A-not-A question formation, ilÂ\xad lustrated in (3a), and adverbial reduplication, illustrated in (3b): 3.', ""(a) ;IE shi4 'be' => ;IE;IE shi4bu2-shi4 (be-not-be) 'is it?'"", 'JI!', ""gaolxing4 'happy' => F.i'JF.i'J Jl!"", ""gaolbu4-gaolxing4 (hap-not-happy) 'happy?'"", ""(b) F.i'JJI!"", ""gaolxing4 'happy'=> F.i'JF.i'JJI!JI!"", ""gaolgaolxing4xing4 'happily' In the particular form of A-not-A reduplication illustrated in (3a), the first syllable of the verb is copied, and the negative markerbu4 'not' is inserted between the copy and the full verb."", 'In the case of adverbial reduplication illustrated in (3b) an adjective of the form AB is reduplicated as AABB.', 'The only way to handle such phenomena within the framework described here is simply to expand out the reduplicated forms beforehand, and incorporate the expanded forms into the lexical transducer.', 'Despite these limitations, a purely finite-state approach to Chinese word segmentation enjoys a number of strong advantages.', 'The model we use provides a simple framework in which to incorporate a wide variety of lexical information in a uniform way.', 'The use of weighted transducers in particular has the attractive property that the model, as it stands, can be straightforwardly interfaced to other modules of a larger speech or natural language system: presumably one does not want to segment Chinese text for its own sake but instead with a larger purpose in mind.', 'As described in Sproat (1995), the Chinese segmenter presented here fits directly into the context of a broader finite-state model of text analysis for speech synthesis.', 
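The pre-expansion strategy described above, in which reduplicated forms are generated offline and then incorporated into the lexical transducer, can be sketched as follows. This is a toy illustration over pinyin syllable tuples, not the authors' code; tone sandhi (e.g., bu4 becoming bu2 before a fourth-tone syllable, as in shi4-bu2-shi4) is deliberately ignored:

```python
def a_not_a(word):
    """A-not-A question form: copy the first syllable and insert bu4
    between the copy and the full verb, e.g. shi4 -> shi4 bu4 shi4,
    gao1xing4 -> gao1 bu4 gao1xing4. Words are tuples of syllables."""
    return (word[0], "bu4") + word

def aabb(word):
    """Adverbial reduplication of a disyllabic adjective AB -> AABB,
    e.g. gao1xing4 -> gao1 gao1 xing4 xing4."""
    a, b = word
    return (a, a, b, b)

def expand_lexicon(entries):
    """Expand reduplicated forms ahead of time, since finite-state
    networks cannot perform the copying directly; the expanded forms
    would then be compiled into the lexical transducer."""
    expanded = set(entries)
    for w in entries:
        expanded.add(a_not_a(w))
        if len(w) == 2:
            expanded.add(aabb(w))
    return expanded

lexicon = {("shi4",), ("gao1", "xing4")}
expanded = expand_lexicon(lexicon)
```

The cost of this approach is lexicon growth: every verb and adjective contributes extra expanded entries, which is exactly the limitation the text concedes for handling reduplication within a purely finite-state framework.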
'Furthermore, by inverting the transducer so that it maps from phonemic transcriptions to hanzi sequences, one can apply the segmenter to other problems, such as speech recognition (Pereira, Riley, and Sproat 1994).', 'Since the transducers are built from human-readable descriptions using a lexical toolkit (Sproat 1995), the system is easily maintained and extended.', 'While size of the resulting transducers may seem daunting-the segmenter described here, as it is used in the Bell Labs Mandarin TTS system has about 32,000 states and 209,000 arcs-recent work on minimization of weighted machines and transducers (cf.', '21 In Chinese, numerals and demonstratives cannot modify nouns directly, and must be accompanied by.', 'a classifier.', 'The particular classifier used depends upon the noun.', 'Mohri [1995]) shows promise for improving this situation.', 'The model described here thus demonstrates great potential for use in widespread applications.', 'This flexibility, along with the simplicity of implementation and expansion, makes this framework an attractive base for continued research.', ""We thank United Informatics for providing us with our corpus of Chinese text, and BDC for the 'Behavior ChineseEnglish Electronic Dictionary.'"", 'We further thank Dr. 
J.-S.', 'Chang of Tsinghua University, Taiwan, R.O.C., for kindly providing us with the name corpora.', 'We also thank ChaoHuang Chang, reviewers for the 1994 ACL conference, and four anonymous reviewers for Computational Linguistics for useful comments.']",extractive +C00-2123,C00-2123,6,39,"In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.","In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962).","['Word Re-ordering and DP-based Search in Statistical Machine Translation', 'In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).', 'Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÃ\x86cient search algorithm.', 'A search restriction especially useful for the translation direction from German to English is presented.', 'The experimental tests are carried out on the Verbmobil task (GermanEnglish, 8000-word vocabulary), which is a limited-domain spoken-language task.', 'The goal of machine translation is the translation of a text given in some source language into a target language.', 'We are given a source string fJ 1 = f1:::fj :::fJ of length J, which is to be translated into a target string eI 1 = e1:::ei:::eI of length I. Among all possible target strings, we will choose the string with the highest probability: ^eI 1 = arg max eI 1 fPr(eI 1jfJ 1 )g = arg max eI 1 fPr(eI 1) Pr(fJ 1 jeI 1)g : (1) The argmax operation denotes the search problem, i.e. 
the generation of the output sentence in the target language.', 'Pr(eI 1) is the language model of the target language, whereas Pr(fJ 1 jeI1) is the transla tion model.', 'Our approach uses word-to-word dependencies between source and target words.', 'The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000).', 'These alignment models are similar to the concept of hidden Markov models (HMM) in speech recognition.', 'The alignment mapping is j ! i = aj from source position j to target position i = aj . The use of this alignment model raises major problems if a source word has to be aligned to several target words, e.g. when translating German compound nouns.', 'A simple extension will be used to handle this problem.', 'In Section 2, we brie y review our approach to statistical machine translation.', 'In Section 3, we introduce our novel concept to word reordering and a DP-based search, which is especially suitable for the translation direction from German to English.', 'This approach is compared to another reordering scheme presented in (Berger et al., 1996).', 'In Section 4, we present the performance measures used and give translation results on the Verbmobil task.', 'In this section, we brie y review our translation approach.', 'In Eq.', '(1), Pr(eI 1) is the language model, which is a trigram language model in this case.', 'For the translation model Pr(fJ 1 jeI 1), we go on the assumption that each source word is aligned to exactly one target word.', 'The alignment model uses two kinds of parameters: alignment probabilities p(aj jajô\x80\x80\x801; I; J), where the probability of alignment aj for position j depends on the previous alignment position ajô\x80\x80\x801 (Ney et al., 2000) and lexicon probabilities p(fj jeaj ).', 'When aligning the words in parallel texts (for language pairs like SpanishEnglish, French-English, ItalianGerman,...), we typically observe a strong 
localization effect.', 'In many cases, there is an even stronger restriction: over large portions of the source string, the alignment is monotone.', '2.1 Inverted Alignments.', 'To explicitly handle the word reordering between words in source and target language, we use the concept of the so-called inverted alignments as given in (Ney et al., 2000).', 'An inverted alignment is defined as follows: inverted alignment: i → j = b_i. Target positions i are mapped to source positions b_i.', ""What is important and is not expressed by the notation is the so-called coverage constraint: each source position j should be 'hit' exactly once by the path of the inverted alignment b_1^I = b_1...b_i...b_I. Using the inverted alignments in the maximum approximation, we obtain as search criterion: max_I { p(J | I) · max_{e_1^I} [ ∏_{i=1}^{I} p(e_i | e_{i-1}, e_{i-2}) · max_{b_1^I} ∏_{i=1}^{I} p(b_i | b_{i-1}, I, J) · p(f_{b_i} | e_i) ] } = max_I { p(J | I) · max_{e_1^I, b_1^I} ∏_{i=1}^{I} p(e_i | e_{i-1}, e_{i-2}) · p(b_i | b_{i-1}, I, J) · p(f_{b_i} | e_i) }, where the two products over i have been merged into a single product over i. p(e_i | e_{i-1}, e_{i-2}) is the trigram language model probability."", 'The inverted alignment probability p(b_i | b_{i-1}, I, J) and the lexicon probability p(f_{b_i} | e_i) are obtained by relative frequency estimates from the Viterbi alignment path after the final training iteration.', 'The details are given in (Och and Ney, 2000).', 'The sentence length probability p(J | I) is omitted without any loss in performance.', 'For the inverted alignment probability p(b_i | b_{i-1}, I, J), we drop the dependence on the target sentence length I. 2.2 Word Joining.', ""The baseline alignment model does not permit that a source word is aligned to two or more target words, e.g. 
for the translation direction from German to English, the German compound noun 'Zahnarzttermin' causes problems, because it must be translated by the two target words dentist's appointment."", 'We use a solution to this problem similar to the one presented in (Och et al., 1999), where target words are joined during training.', 'The word joining is done on the basis of a likelihood criterion.', 'An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.', ""E.g. when 'Zahnarzttermin' is aligned to dentist's, the extended lexicon model might learn that 'Zahnarzttermin' actually has to be aligned to both dentist's and appointment."", 'In the following, we assume that this word joining has been carried out.', 'Figure 1: Reordering for the German verbgroup.', 'In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962).', 'The traveling salesman problem is an optimization problem which is defined as follows: given are a set of cities S = s_1, ..., s_n and for each pair of cities s_i, s_j the cost d_ij > 0 for traveling from city s_i to city s_j. 
We are looking for the shortest tour visiting all cities exactly once while starting and ending in city s_1.', 'A straightforward way to find the shortest tour is by trying all possible permutations of the n cities.', 'The resulting algorithm has a complexity of O(n!).', 'However, dynamic programming can be used to find the shortest tour in exponential time, namely in O(n^2 · 2^n), using the algorithm by Held and Karp.', 'The approach recursively evaluates a quantity Q(C, j), where C is the set of already visited cities and s_j is the last visited city.', 'Subsets C of increasing cardinality c are processed.', 'The algorithm works due to the fact that not all permutations of cities have to be considered explicitly.', 'For a given partial hypothesis (C, j), the order in which the cities in C have been visited can be ignored (except j), only the score for the best path reaching j has to be stored.', 'This algorithm can be applied to statistical machine translation.', 'Using the concept of inverted alignments, we explicitly take care of the coverage constraint by introducing a coverage set C of source sentence positions that have been already processed.', 'The advantage is that we can recombine search hypotheses by dynamic programming.', 'The cities of the traveling salesman problem correspond to source words f_j in the input string of length J.', 'Table 1: DP algorithm for statistical machine translation. input: source string f_1...f_j...f_J; initialization; for each cardinality c = 1, 2, ..., J do: for each pair (C, j), where j ∈ C and |C| = c, do: for each target word e ∈ E: Q_{e′}(e, C, j) = p(f_j | e) · max_{δ, e″, j′ ∈ C \ {j}} { p(j | j′, J) · p(δ) · p_δ(e | e′, e″) · Q_{e″}(e′, C \ {j}, j′) }. 
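The Held-Karp recursion described above (process subsets C of increasing cardinality and keep only the best score per pair (C, j)) can be sketched in Python. This is an illustrative sketch of the classical TSP algorithm only, not the authors' MT decoder; all names are my own:

```python
from itertools import combinations

def held_karp(dist):
    """Shortest round trip through all cities via Held-Karp DP: O(n^2 * 2^n).

    dist[i][j] is the cost of traveling from city i to city j.
    The tour starts and ends in city 0.
    """
    n = len(dist)
    # Q[(C, j)]: cost of the best path that leaves city 0, visits exactly
    # the cities in frozenset C, and ends in city j. The visiting order
    # inside C (except j) is ignored -- only the best score is stored.
    Q = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for c in range(2, n):  # subsets of increasing cardinality c
        for subset in combinations(range(1, n), c):
            C = frozenset(subset)
            for j in C:
                Q[(C, j)] = min(Q[(C - {j}, k)] + dist[k][j]
                                for k in C - {j})
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(Q[(full, j)] + dist[j][0] for j in full)
```

Storing only the best score per (C, j) is exactly the recombination that avoids enumerating all n! visiting orders.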
For the final translation each source position is considered exactly once.', 'Subsets of partial hypotheses with coverage sets C of increasing cardinality c are processed.', 'For a trigram language model, the partial hypotheses are of the form (e′, e, C, j).', 'e′, e are the last two target words, C is a coverage set for the already covered source positions and j is the last position visited.', 'Each distance in the traveling salesman problem now corresponds to the negative logarithm of the product of the translation, alignment and language model probabilities.', 'The following auxiliary quantity is defined: Q_{e′}(e, C, j) := probability of the best partial hypothesis (e_1^i, b_1^i), where C = {b_k | k = 1, ..., i}, b_i = j, e_i = e and e_{i-1} = e′.', 'The type of alignment we have considered so far requires the same length for source and target sentence, i.e. I = J. Evidently, this is an unrealistic assumption, therefore we extend the concept of inverted alignments as follows: When adding a new position to the coverage set C, we might generate either δ = 0 or δ = 1 new target words.', 'For δ = 1, a new target language word is generated using the trigram language model p(e | e′, e″).', 'For δ = 0, no new target word is generated, while an additional source sentence position is covered.', 'A modified language model probability p_δ(e | e′, e″) is defined as follows: p_δ(e | e′, e″) = 1.0 if δ = 0, and p(e | e′, e″) if δ = 1. We associate a distribution p(δ) with the two cases δ = 0 and δ = 1 and set p(δ = 1) = 0.7.', 'Figure 2: Order in which source positions are visited for the example given in Fig. 1 (state columns Initial/Skip/Verb/Final; visiting order: 1. In, 2. diesem, 3. Fall, 4. mein, 5. Kollege, 6. kann, 7. nicht, 8. besuchen, 9. Sie, 10. am, 11. vierten, 12. Mai, 13.).', 'The above auxiliary quantity satisfies the following recursive DP equation: Q_{e′}(e, C, j) = p(f_j | e) · max_{δ, e″, j′ ∈ C \ {j}} { p(j | j′, J) · p(δ) · p_δ(e | e′, e″) · Q_{e″}(e′, C \ {j}, j′) }. The DP equation is evaluated recursively for each hypothesis (e′, e, C, j).', 'The resulting algorithm is depicted in Table 1.', 'The complexity of the algorithm is O(E^3 · J^2 · 2^J), where E is the size of the target language vocabulary.', '3.1 Word Re-ordering with Verbgroup Restrictions: Quasi-monotone Search.', 'The above search space is still too large to allow the translation of a medium length input sentence.', 'On the other hand, only very restricted reorderings are necessary, e.g. for the translation direction from German to English the monotonicity constraint is violated mainly with respect to the German verbgroup.', 'Table 2: Coverage set hypothesis extensions for the IBM reordering. No.: predecessor coverage set → successor coverage set: 1. ({1, ..., m} \ {l}, l′) → ({1, ..., m}, l); 2. ({1, ..., m} \ {l, l1}, l′) → ({1, ..., m} \ {l1}, l); 3. ({1, ..., m} \ {l, l1, l2}, l′) → ({1, ..., m} \ {l1, l2}, l); 4. ({1, ..., m−1} \ {l1, l2, l3}, l′) → ({1, ..., m} \ {l1, l2, l3}, m).', 'In German, the verbgroup usually consists of a left and a right verbal brace, whereas in English the words of the verbgroup usually form a sequence of consecutive words.', 'Our new approach, which is called quasi-monotone search, processes the source sentence monotonically, while explicitly taking into account the positions of the German verbgroup.', 'A typical situation is shown in Figure 1.', ""When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence."", ""Then, the German infinitive 'besuchen' and the negation particle 'nicht' are translated."", 'The translation of one position in the source sentence may be postponed for up to L = 3 source positions, and the translation of up to two source positions may be anticipated for at most R = 
10 source positions.', 'To formalize the approach, we introduce four verbgroup states S: Initial (I): A contiguous, initial block of source positions is covered.', 'Skipped (K): The translation of up to one word may be postponed. Verb (V): The translation of up to two words may be anticipated.', 'Final (F): The rest of the sentence is processed monotonically taking account of the already covered positions.', 'While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position.', 'The sequence of states needed to carry out the word reordering example in Fig. 1 is given in Fig. 2.', 'The 13 positions of the source sentence are processed in the order shown.', 'A position is represented by the word at that position.', 'Using these states, we define partial hypothesis extensions, which are of the following type: (S′, C \ {j}, j′) → (S, C, j).', 'Not only the coverage set C and the positions j, j′, but also the verbgroup states S, S′ are taken into account.', 'To be short, we omit the target words e, e′ in the formulation of the search hypotheses.', 'There are 13 types of extensions needed to describe the verbgroup reordering.', 'The details are given in (Tillmann, 2000).', 'For each extension a new position is added to the coverage set.', 'Covering the first uncovered position in the source sentence, we use the language model probability p(e | $, $).', 'Here, $ is the sentence boundary symbol, which is thought to be at position 0 in the target sentence.', 'The search starts in the hypothesis (I, {}, 0).', '{} denotes the empty set, where no source sentence position is covered.', 'The following recursive equation is evaluated: Q_{e′}(e, S, C, j) = p(f_j | e) · max_{δ, e″} { p(j | j′, J) · p(δ) · p_δ(e | e′, e″) · max_{(S′, j′): (S′, C \ {j}, j′) → (S, C, j), j′ ∈ C \ {j}} Q_{e″}(e′, S′, C \ {j}, j′) } (2). The search ends in the hypotheses (I, {1, ..., J}, j).', '{1, ..., J} denotes a coverage set including all positions from the starting position 1 to position J and j ∈ {J − L, ..., J}.', 'The final score is obtained from: max_{e, e′, j ∈ {J − L, ..., J}} p($ | e, e′) · Q_{e′}(e, I, {1, ..., J}, j), where p($ | e, e′) denotes the trigram language model, which predicts the sentence boundary $ at the end of the target sentence.', 'The complexity of the quasi-monotone search is O(E^3 · J · (R^2 + L·R)).', 'The proof is given in (Tillmann, 2000).', '3.2 Reordering with IBM-Style Restrictions.', 'We compare our new approach with the word reordering used in the IBM translation approach (Berger et al., 1996).', 'A detailed description of the search procedure used is given in this patent.', 'Source sentence words are aligned with hypothesized target sentence words, where the choice of a new source word, which has not been aligned with a target word yet, is restricted (footnote 1).', 'A procedural definition to restrict the number of permutations carried out for the word reordering is given.', '(Footnote 1: In the approach described in (Berger et al., 1996), a morphological analysis is carried out and word morphemes rather than full-form words are used during the search. Here, we process only full-form words within the translation procedure.)', 'During the search process, a partial hypothesis is extended by choosing a source sentence position, which has not been aligned with a target sentence position yet.', 'Only one of the first n positions which are not already aligned in a partial hypothesis may be chosen, where n is set to 4.', 'The restriction can be expressed in terms of the number of uncovered source sentence positions to the left of the rightmost position m in the coverage set.', 'This number must be less than or equal to n − 1.', 'Otherwise for the predecessor search hypothesis, we would have chosen a position that would not have been among the first n uncovered positions.', 'Ignoring the identity of the target language words e and e′, the possible partial hypothesis 
extensions due to the IBM restrictions are shown in Table 2.', 'In general, m, l, l′ ∉ {l1, l2, l3} and in lines 3 and 4, l′ must be chosen so as not to violate the above reordering restriction.', 'Note that in line 4 the last visited position for the successor hypothesis must be m. Otherwise, there will be four uncovered positions for the predecessor hypothesis, violating the restriction.', 'A dynamic programming recursion similar to the one in Eq. 2 is evaluated.', 'In this case, we have no finite-state restrictions for the search space.', 'The search starts in hypothesis ({}, 0) and ends in the hypotheses ({1, ..., J}, j), with j ∈ {1, ..., J}.', 'This approach leads to a search procedure with complexity O(E^3 · J^4).', 'The proof is given in (Tillmann, 2000).', '4.1 The Task and the Corpus.', 'We have tested the translation system on the Verbmobil task (Wahlster 1993).', 'The Verbmobil task is an appointment scheduling task.', 'Two subjects are each given a calendar and they are asked to schedule a meeting.', 'The translation direction is from German to English.', 'A summary of the corpus used in the experiments is given in Table 3.', 'The perplexity for the trigram language model used is 26.5.', 'Although the ultimate goal of the Verbmobil project is the translation of spoken language, the input used for the translation experiments reported on in this paper is the (more or less) correct orthographic transcription of the spoken sentences.', 'Thus, the effects of spontaneous speech are present in the corpus, e.g. 
the syntactic structure of the sentence is rather less restricted, however the effect of speech recognition errors is not covered.', 'For the experiments, we use a simple preprocessing step.', 'German city names are replaced by category markers.', 'The translation search is carried out with the category markers and the city names are resubstituted into the target sentence as a postprocessing step.', 'Table 3: Training and test conditions for the Verbmobil task (*number of words without punctuation marks).', 'Columns: German / English. Training: Sentences 58 073; Words 519 523 / 549 921; Words* 418 979 / 453 632; Vocabulary Size 7939 / 4648; Singletons 3454 / 1699. Test-147: Sentences 147; Words 1 968 / 2 173; Perplexity - / 26.5.', 'Table 4: Multi-reference word error rate (mWER) and subjective sentence error rate (SSER) for three different search procedures.', 'Columns: search method, CPU time [sec], mWER [%], SSER [%]. MonS: 0.9, 42.0, 30.5; QmS: 10.6, 34.4, 23.8; IbmS: 28.6, 38.2, 26.2.', '4.2 Performance Measures.', 'The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors.', 'On average, 6 reference translations per automatic translation are available.', 'The Levenshtein distance between the automatic translation and each of the reference translations is computed, and the minimum Levenshtein distance is taken.', 'This measure has the advantage of being completely automatic.', 'SSER: subjective sentence error rate: For a more detailed analysis, the translations are judged by a human test person.', 'For the error counts, a range from 0.0 to 1.0 is used.', 'An error count of 0.0 is assigned to a perfect translation, and an error count of 1.0 is assigned to a semantically and syntactically wrong translation.', '4.3 Translation Experiments.', 'For the translation experiments, Eq. 
2 is recursively evaluated.', 'We apply a beam search concept as in speech recognition.', 'However there is no global pruning.', 'Search hypotheses are processed separately according to their coverage set C. The best scored hypothesis for each coverage set is computed: Q_Beam(C) = max_{e, e′, S, j} Q_{e′}(e, S, C, j). The hypothesis (e′, e, S, C, j) is pruned if: Q_{e′}(e, S, C, j) < t0 · Q_Beam(C), where t0 is a threshold to control the number of surviving hypotheses.', 'Additionally, for a given coverage set, at most 250 different hypotheses are kept during the search process, and the number of different words to be hypothesized by a source word is limited.', 'For each source word f, the list of its possible translations e is sorted according to p(f | e) · p_uni(e), where p_uni(e) is the unigram probability of the English word e. It is sufficient to consider only the best 50 words.', 'We show translation results for three approaches: the monotone search (MonS), where no word reordering is allowed (Tillmann, 1997), the quasi-monotone search (QmS) as presented in this paper and the IBM-style (IbmS) search as described in Section 3.2.', 'Table 4 shows translation results for the three approaches.', 'The computing time is given in terms of CPU time per sentence (on a 450-MHz Pentium-III PC).', 'Here, the pruning threshold t0 = 10.0 is used.', 'Translation errors are reported in terms of multi-reference word error rate (mWER) and subjective sentence error rate (SSER).', 'The monotone search performs worst in terms of both error rates mWER and SSER.', 'The computing time is low, since no reordering is carried out.', 'The quasi-monotone search performs best in terms of both error rates mWER and SSER.', 'Additionally, it works about 3 times as fast as the IBM-style search.', 'For our demonstration system, we typically use the pruning threshold t0 = 5.0 to speed up the search by a factor 5 while allowing for a small degradation in translation accuracy.', 'The effect of the pruning threshold t0 
is shown in Table 5.', 'The computing time, the number of search errors, and the multi-reference WER (mWER) are shown as a function of t0.', 'The negative logarithm of t0 is reported.', 'The translation scores for the hypotheses generated with different threshold values t0 are compared to the translation scores obtained with a conservatively large threshold t0 = 10.0. For each test series, we count the number of sentences whose score is worse than the corresponding score of the test series with the conservatively large threshold t0 = 10.0, and this number is reported as the number of search errors.', 'Depending on the threshold t0, the search algorithm may miss the globally optimal path which typically results in additional translation errors.', 'Decreasing the threshold results in higher mWER due to additional search errors.', 'Table 5: Effect of the beam threshold on the number of search errors (147 sentences).', 'Columns: search method, t0, CPU time [sec], number of search errors, mWER [%]. QmS: (0.0, 0.07, 108, 42.6), (1.0, 0.13, 85, 37.8), (2.5, 0.35, 44, 36.6), (5.0, 1.92, 4, 34.6), (10.0, 10.6, 0, 34.5). IbmS: (0.0, 0.14, 108, 43.4), (1.0, 0.3, 84, 39.5), (2.5, 0.8, 45, 39.1), (5.0, 4.99, 7, 38.3), (10.0, 28.52, 0, 38.2).', 'Table 6 shows example translations obtained by the three different approaches.', 'Again, the monotone search performs worst.', 'In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it can not take properly into account the word reordering due to the German verbgroup.', ""The German finite verbs 'bin' (second example) and 'könnten' (third example) are too far away from the personal pronouns 'ich' and 'Sie' (6 respectively 5 source sentence positions)."", 'In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable.', 'In this paper, we have presented a new, efficient DP-based search procedure for statistical machine translation.', 'The approach assumes that the word 
reordering is restricted to a few positions in the source sentence.', 'The approach has been successfully tested on the 8 000-word Verbmobil task.', 'Future extensions of the system might include: 1) An extended translation model, where we use more context to predict a source word.', '2) An improved language model, which takes into account syntactic structure, e.g. to ensure that a proper English verbgroup is generated.', '3) A tight coupling with the speech recognizer output.', 'This work has been supported as part of the Verbmobil project (contract number 01 IV 601 A) by the German Federal Ministry of Education, Science, Research and Technology and as part of the Eutrans project (ESPRIT project number 30268) by the European Community.', 'Table 6: Example Translations for the Verbmobil task.', 'Input: Ja , wunderbar . Können wir machen . MonS: Yes, wonderful.', 'Can we do . QmS: Yes, wonderful.', 'We can do that . IbmS: Yes, wonderful.', 'We can do that . Input: Das ist zu knapp , weil ich ab dem dritten in Kaiserslautern bin . Genaugenommen nur am dritten . Wie wäre es denn am ähm Samstag , dem zehnten Februar ? MonS: That is too tight , because I from the third in Kaiserslautern . In fact only on the third . How about ähm Saturday , the tenth of February ? QmS: That is too tight , because I am from the third in Kaiserslautern . In fact only on the third . Ähm how about Saturday , February the tenth ? IbmS: That is too tight , from the third because I will be in Kaiserslautern . In fact only on the third . Ähm how about Saturday , February the tenth ? Input: Wenn Sie dann noch den siebzehnten könnten , wäre das toll , ja . MonS: If you then also the seventeenth could , would be the great , yes . QmS: If you could then also the seventeenth , that would be great , yes . IbmS: Then if you could even take seventeenth , that would be great , yes . Input: Ja , das kommt mir sehr gelegen . Machen wir es dann am besten so . 
MonS: Yes , that suits me perfectly . Do we should best like that . QmS: Yes , that suits me fine . We do it like that then best . IbmS: Yes , that suits me fine . We should best do it like that .']",extractive +C02-1025,C02-1025,7,198,"Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.","Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named-entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
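The argmax over tag sequences mentioned above is typically computed with a Viterbi-style dynamic program over per-word class probabilities, with inadmissible tag transitions assigned probability 0. The following is a generic sketch under assumed inputs, not the paper's implementation; all names are hypothetical:

```python
def viterbi_tags(prob, admissible):
    """Pick the tag sequence maximizing the product of per-word class
    probabilities, subject to a 0/1 transition admissibility constraint.

    prob: list over words of dicts {tag: P(tag | word, context)}
    admissible: function (prev_tag, tag) -> bool
    """
    # best[t] = (score of the best sequence ending in tag t, that sequence)
    best = {t: (p, [t]) for t, p in prob[0].items()}
    for dist in prob[1:]:
        new = {}
        for t, p in dist.items():
            # Extend only sequences whose last tag may precede t.
            cands = [(s * p, path + [t])
                     for prev, (s, path) in best.items()
                     if admissible(prev, t)]
            if cands:
                new[t] = max(cands)
        best = new
    return max(best.values())[1]
```

Disallowed transitions (e.g., person_begin followed by location_unique) simply produce no candidate extensions, so only admissible sequences survive.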
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w+1) is set to 1.', 'Same for w-1. 
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens from w up to the token just after the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the token just before the sequence up to w is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names. The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%. Table 4: Training Data (No. of Articles / No. of Tokens for MUC6 and MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000. Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. 3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive P87-1015_swastika,P87-1015,2,2,"They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.","In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees. find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars On the basis of this observation, we describe a class of formalisms which we call Linear Context- Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.","['CHARACTERIZING STRUCTURAL DESCRIPTIONS PRODUCED BY VARIOUS GRAMMATICAL FORMALISMS*', 'We consider the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.', 'In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees. 
We find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. On the basis of this observation, we describe a class of formalisms which we call Linear Context-Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.', 'Much of the study of grammatical systems in computational linguistics has been focused on the weak generative capacity of grammatical formalism.', 'Little attention, however, has been paid to the structural descriptions that these formalisms can assign to strings, i.e. their strong generative capacity.', 'This aspect of the formalism is both linguistically and computationally important.', ""For example, Gazdar (1985) discusses the applicability of Indexed Grammars (IG's) to Natural Language in terms of the structural descriptions assigned; and Berwick (1984) discusses the strong generative capacity of Lexical-Functional Grammar (LFG) and Government and Binding grammars (GB)."", ""The work of Thatcher (1973) and Rounds (1969) define formal systems that generate tree sets that are related to CFG's and IG's."", ""We consider properties of the tree sets generated by CFG's, Tree Adjoining Grammars (TAG's), Head Grammars (HG's), Categorial Grammars (CG's), and IG's."", 'We examine both the complexity of the paths of trees in the tree sets, and the kinds of dependencies that the formalisms can impose between paths.', 'These two properties of the tree sets are not only linguistically relevant, but also have computational importance.', 'By considering derivation trees, and thus abstracting away from the details of the composition operation and the structures being manipulated, we are able to state the similarities and differences between the formalisms.', 'This work was partially supported by NSF grants MCS42-19116-CER, MCS82-07294 and DCR-84-10413, ARO grant DAA 29-84-9-0027, and DARPA 
grant N00014-85-K0018.', 'We are very grateful to Tony Kroch, Michael Palis, Sunil Shende, and Mark Steedman for valuable discussions.', 'It is striking that from this point of view many formalisms can be grouped together as having identically structured derivation tree sets.', ""This suggests that by generalizing the notion of context-freeness in CFG's, we can define a class of grammatical formalisms that manipulate more complex structures."", ""In this paper, we outline how such a family of formalisms can be defined, and show that like CFG's, each member possesses a number of desirable linguistic and computational properties: in particular, the constant growth property and polynomial recognizability."", ""From Thatcher's (1973) work, it is obvious that the complexity of the set of paths from root to frontier of trees in a local set (the tree set of a CFG) is regular."", 'We define the path set of a tree γ as the set of strings that label a path from the root to frontier of γ.', 'The path set of a tree set is the union of the path sets of trees in that tree set.', ""It can be easily shown from Thatcher's result that the path set of every local set is a regular set."", ""As a result, CFG's can not provide the structural descriptions in which there are nested dependencies between symbols labelling a path."", ""For example, CFG's cannot produce trees of the form shown in Figure 1 in which there are nested dependencies between S and NP nodes appearing on the spine of the tree."", 'Gazdar (1985) argues this is the appropriate analysis of unbounded dependencies in the hypothetical Scandinavian language Norwedish.', 'He also argues that paired English complementizers may also require structural descriptions whose path sets have nested dependencies.', ""Head Grammars (HG's), introduced by Pollard (1984), is a formalism that manipulates headed strings: i.e., strings, one of whose symbols is distinguished as the head."", 'Not only is concatenation of these 
strings possible, but head wrapping can be used to split a string and wrap it around another string.', ""The productions of HG's are very similar to those of CFG's except that the operation used must be made explicit."", ""Thus, the tree sets generated by HG's are similar to those of CFG's, with each node annotated by the operation (concatenation or wrapping) used to combine the headed strings derived by the daughters of that node."", 'Tree Adjoining Grammars, a tree rewriting formalism, was introduced by Joshi, Levy and Takahashi (1975) and Joshi (1983/85).', 'A TAG consists of a finite set of elementary trees that are either initial trees or auxiliary trees.', 'Trees are composed using an operation called adjoining, which is defined as follows.', 'Let η be some node labeled X in a tree γ (see Figure 3).', 'Let γ′ be a tree with root and foot labeled by X.', ""When γ′ is adjoined at η in the tree γ we obtain a tree γ″."", ""The subtree under η is excised from γ, the tree γ′ is inserted in its place and the excised subtree is inserted below the foot of γ′."", 'It can be shown that the path set of the tree set generated by a TAG G is a context-free language.', ""TAG's can be used to give the structural descriptions discussed by Gazdar (1985) for the unbounded nested dependencies in Norwedish, for cross serial dependencies in Dutch subordinate clauses, and for the nestings of paired English complementizers."", ""From the definition of TAG's, it follows that the choice of adjunction is not dependent on the history of the derivation."", ""Like CFG's, the choice is predetermined by a finite number of rules encapsulated in the grammar."", ""Thus, the derivation trees for TAG's have the same structure as local sets."", ""As with HG's, derivation structures are annotated; in the case of TAG's, by the trees used for adjunction and addresses of nodes of the elementary tree where adjunctions occurred."", 'We can define derivation trees inductively on the length of the derivation of a tree 
γ.', 'If γ is an elementary tree, the derivation tree consists of a single node labeled γ.', 'Suppose γ results from the adjunction of γ1, ..., γk at the k distinct tree addresses n1, ..., nk in some elementary tree γ′, respectively.', 'The tree denoting this derivation of γ is rooted with a node labeled γ′ having k subtrees for the derivations of γ1, ..., γk.', 'The edge from the root to the subtree for the derivation of γi is labeled by the address ni.', 'To show that the derivation tree set of a TAG is a local set, nodes are labeled by pairs consisting of the name of an elementary tree and the address at which it was adjoined, instead of labelling edges with addresses.', 'The following rule corresponds to the above derivation, where γ1, ..., γk are derived from the auxiliary trees β1, ..., βk, respectively, for all addresses n in some elementary tree at which γ′ can be adjoined.', 'If γ′ is an initial tree we do not include an address on the left-hand side.', ""There has been recent interest in the application of Indexed Grammars (IG's) to natural languages."", ""Gazdar (1985) considers a number of linguistic analyses which IG's (but not CFG's) can make, for example, the Norwedish example shown in Figure 1."", ""The work of Rounds (1969) shows that the path sets of trees derived by IG's (like those of TAG's) are context-free languages."", ""Trees derived by IG's exhibit a property that is not exhibited by the tree sets derived by TAG's or CFG's."", 'Informally, two or more paths can be dependent on each other: for example, they could be required to be of equal length as in the trees in Figure 4. 
generates such a tree set.', ""We focus on this difference between the tree sets of CFG's and IG's, and formalize the notion of dependence between paths in a tree set in Section 3."", 'An IG can be viewed as a CFG in which each nonterminal is associated with a stack.', 'Each production can push or pop symbols on the stack as can be seen in the following productions that generate trees of the form shown in Figure 4b.', 'Gazdar (1985) argues that sharing of stacks can be used to give analyses for coordination.', ""Analogous to the sharing of stacks in IG's, Lexical-Functional Grammars (LFG's) use the unification of unbounded hierarchical structures."", ""Unification is used in LFG's to produce structures having two dependent spines of unbounded length as in Figure 5."", 'Bresnan, Kaplan, Peters, and Zaenen (1982) argue that these structures are needed to describe crossed-serial dependencies in Dutch subordinate clauses.', ""Gazdar (1985) considers a restriction of IG's in which no more than one nonterminal on the right-hand-side of a production can inherit the stack from the left-hand-side."", 'Unbounded dependencies between branches are not possible in such a system.', ""TAG's can be shown to be equivalent to this restricted system."", ""Thus, TAG's can not give analyses in which dependencies between arbitrarily large branches exist."", 'Steedman (1986) considers Categorial Grammars in which both the operations of function application and composition may be used, and in which functions can specify whether they take their arguments from their right or left.', ""While the generative power of CG's is greater than that of CFG's, it appears to be highly constrained."", ""Hence, their relationship to formalisms such as HG's and TAG's is of interest."", 'On the one hand, the definition of composition in Steedman (1985), which technically permits composition of functions with an unbounded number of arguments, generates tree sets with dependent paths such as those shown in Figure 
6.', 'This kind of dependency arises from the use of the composition operation to compose two arbitrarily large categories.', 'This allows an unbounded amount of information about two separate paths (e.g. an encoding of their length) to be combined and used to influence the later derivation.', ""A consequence of the ability to generate tree sets with this property is that CG's under this definition can generate the following language which can not be generated by either TAG's or HG's."", '{ ... | n = n1 + n2 } On the other hand, no linguistic use is made of this general form of composition and Steedman (personal communication) and Steedman (1986) argue that a more limited definition of composition is more natural.', 'With this restriction the resulting tree sets will have independent paths.', ""The equivalence of CG's with this restriction to TAG's and HG's is, however, still an open problem."", 'An extension of the TAG system was introduced by Joshi et al. (1975) and later redefined by Joshi (1987) in which the adjunction operation is defined on sets of elementary trees rather than single trees.', 'A multicomponent Tree Adjoining Grammar (MCTAG) consists of a finite set of finite elementary tree sets.', 'We must adjoin all trees in an auxiliary tree set together as a single step in the derivation.', 'The adjunction operation with respect to tree sets (multicomponent adjunction) is defined as follows.', 'Each member of a set of trees can be adjoined into distinct nodes of trees in a single elementary tree set, i.e., derivations always involve the adjunction of a derived auxiliary tree set into an elementary tree set.', ""Like CFG's, TAG's, and HG's, the derivation tree set of a MCTAG will be a local set."", 'The derivation trees of a MCTAG are similar to those of a TAG.', 'Instead of the names of elementary trees of a TAG, the nodes are labeled by a sequence of names of trees in an elementary tree set.', 'Since trees in a tree set are adjoined 
together, the addressing scheme uses a sequence of pairings of the address and name of the elementary tree adjoined at that address.', 'The following context-free production captures the derivation step of the grammar shown in Figure 7, in which the trees in the auxiliary tree set are adjoined into themselves at the root node (address ε).', '(β1, β2, β3) → ((β1, ε), (β2, ε), (β3, ε)) The path complexity of the tree set generated by a MCTAG is not necessarily context-free.', ""Like the string languages of MCTAG's, the complexity of the path set increases as the cardinality of the elementary tree sets increases, though both the string languages and path sets will always be semilinear."", ""MCTAG's are able to generate tree sets having dependent paths."", 'For example, the MCTAG shown in Figure 7 generates trees of the form shown in Figure 4b.', 'The number of paths that can be dependent is bounded by the grammar (in fact the maximum cardinality of a tree set determines this bound).', ""Hence, trees shown in Figure 8 can not be generated by any MCTAG (but can be generated by an IG) because the number of pairs of dependent paths grows with n. 
Since the derivation trees of TAG's, MCTAG's, and HG's are local sets, the choice of the structure used at each point in a derivation in these systems does not depend on the context at that point within the derivation."", ""Thus, as in CFG's, at any point in the derivation, the set of structures that can be applied is determined only by a finite set of rules encapsulated by the grammar."", 'We characterize a class of formalisms that have this property in Section 4.', 'We loosely describe the class of all such systems as Linear Context-Free Rewriting Formalisms.', 'As is described in Section 4, the property of having a derivation tree set that is a local set appears to be useful in showing important properties of the languages generated by the formalisms.', ""The semilinearity of Tree Adjoining Languages (TAL's), MCTAL's, and Head Languages (HL's) can be proved using this property, with suitable restrictions on the composition operations."", 'Roughly speaking, we say that a tree set Γ contains trees with dependent paths if there are two paths pγ = vγwγ and qγ = vγzγ in each γ ∈ Γ such that vγ is some, possibly empty, shared initial subpath; wγ and zγ are not bounded in length; and there is some "dependence" (such as equal length) between the set of all wγ and zγ for each γ ∈ Γ.', 'A tree set may be said to have dependencies between paths if some "appropriate" subset can be shown to have dependent paths as defined above.', 'We attempt to formalize this notion in terms of the tree pumping lemma which can be used to show that a tree set does not have dependent paths.', 'Thatcher (1973) describes a tree pumping lemma for recognizable sets related to the string pumping lemma for regular sets.', 'The tree in Figure 9a can be denoted by t1t2t3 where tree substitution is used instead of concatenation.', 'The tree pumping lemma states that if there is a tree, t = t1t2t3, generated by a CFG G, whose height is more than a predetermined bound k, then all trees of the form t1t2^it3 for 
each i > 0 will also generated by G (as shown in Figure 9b).', ""The string pumping lemma for CFG's (uvwxy-theorem) can be seen as a corollary of this lemma. from this pumping lemma: a single path can be pumped independently."", 'For example, let us consider a tree set containing trees of the form shown in Figure 4a.', 'The tree t2 must be on one of the two branches.', 'Pumping t2 will change only one branch and leave the other branch unaffected.', 'Hence, the resulting trees will no longer have two branches of equal size.', ""We can give a tree pumping lemma for TAG's by adapting the uvwxy-theorem for CFL's since the tree sets of TAG's have independent and context-free paths."", 'This pumping lemma states that if there is tree, t = t2t3t4t5, generated by a TAG G, such that its height is more than a predetermined bound k, then all trees of the form ti it tstt ts for each i > 0 will also generated by G. Similarly, for tree sets with independent paths and more complex path sets, tree pumping lemmas can be given.', 'We adapt the string pumping lemma for the class of languages corresponding to the complexity of the path set.', 'A geometrical progression of language families defined by Weir (1987) involves tree sets with increasingly complex path sets.', 'The independence of paths in the tree sets of the k tI grammatical formalism in this hierarchy can be shown by means of tree pumping lemma of the form t1ti3t .', '.', '.t The path set of tree sets at level k +1 have the complexity of the string language of level k. 
The independence of paths in a tree set appears to be an important property.', 'A formalism generating tree sets with complex path sets can still generate only semilinear languages if its tree sets have independent paths, and semilinear path sets.', 'For example, the formalisms in the hierarchy described above generate semilinear languages although their path sets become increasingly more complex as one moves up the hierarchy.', 'From the point of view of recognition, independent paths in the derivation structures suggest that a top-down parser (for example) can work on each branch independently, which may lead to efficient parsing using an algorithm based on the Divide and Conquer technique.', 'From the discussion so far it is clear that a number of formalisms involve some type of context-free rewriting (they have derivation trees that are local sets).', 'Our goal is to define a class of formal systems, and show that any member of this class will possess certain attractive properties.', ""In the remainder of the paper, we outline how a class of Linear Context-Free Rewriting Systems (LCFRS's) may be defined and sketch how semilinearity and polynomial recognition of these systems follows."", ""In defining LCFRS's, we hope to generalize the definition of CFG's to formalisms manipulating any structure, e.g. 
strings, trees, or graphs."", 'To be a member of LCFRS a formalism must satisfy two restrictions.', 'First, any grammar must involve a finite number of elementary structures, composed using a finite number of composition operations.', 'These operations, as we see below, are restricted to be size preserving (as in the case of concatenation in CFG) which implies that they will be linear and non-erasing.', 'A second restriction on the formalisms is that choices during the derivation are independent of the context in the derivation.', ""As will be obvious later, their derivation tree sets will be local sets as are those of CFG's."", 'Each derivation of a grammar can be represented by a generalized context-free derivation tree.', 'These derivation trees show how the composition operations were used to derive the final structures from elementary structures.', 'Nodes are annotated by the name of the composition operation used at that step in the derivation.', ""As in the case of the derivation trees of CFG's, nodes are labeled by a member of some finite set of symbols (perhaps only implicit in the grammar as in TAG's) used to denote derived structures."", 'Frontier nodes are annotated by zero-arity functions corresponding to elementary structures.', ""Each treelet (an internal node with all its children) represents the use of a rule that is encapsulated by the grammar. The grammar encapsulates (either explicitly or implicitly) a finite number of rules that can be written as follows: A → fp(A1, . . . , An), n ≥ 0. In the case of CFG's, there is one such rule for each production. In the case of TAG's, a derivation step in which the derived trees β1, . . . , βn are adjoined into β at the addresses i1, . . . , in 
would involve the use of the following rule2."", ""The composition operations in the case of CFG's are parameterized by the productions."", ""In TAG's the elementary tree and addresses where adjunction takes place are used to instantiate the operation."", 'To show that the derivation tree set of any grammar in an LCFRS is a local set, we can rewrite the annotated derivation trees such that every node is labelled by a pair to include the composition operations.', ""These systems are similar to those described by Pollard (1984) as Generalized Context-Free Grammars (GCFG's)."", ""Unlike GCFG's, however, the composition operations of LCFRS's are restricted to be linear (do not duplicate unboundedly large structures) and nonerasing (do not erase unbounded structures, a restriction made in most modern transformational grammars)."", ""These two restrictions impose the constraint that the result of composing any two structures should be a structure whose "size" is the sum of its constituents plus some constant. For example, the operation fp, discussed in the case of CFG's (in Section 4.1), adds a constant equal to the sum of the lengths of the strings v1, . . . , vn+1. Since we are considering formalisms with arbitrary structures it is difficult to precisely specify all of the restrictions on the composition operations that we believe would appropriately generalize the concatenation operation for the particular structures used by the formalism."", ""In considering recognition of LCFRS's, we make a further assumption concerning the contribution of each structure to the input string, and how the composition operations combine structures in this respect."", ""We can show that languages generated by LCFRS's are semilinear as long as the composition operation does not remove any terminal symbols from its arguments."", 'Semilinearity and the closely related constant growth property (a consequence of semilinearity) have been discussed in the context of grammars for natural languages by Joshi 
(1983/85) and Berwick and Weinberg (1984).', 'Roughly speaking, a language, L, has the property of semilinearity if the number of occurrences of each symbol in any string is a linear combination of the occurrences of these symbols in some fixed finite set of strings.', 'Thus, the length of any string in L is a linear combination of the length of strings in some fixed finite subset of L, and thus L is said to have the constant growth property.', 'Although this property is not structural, it depends on the structural property that sentences can be built from a finite set of clauses of bounded structure as noted by Joshi (1983/85).', 'The property of semilinearity is concerned only with the occurrence of symbols in strings and not their order.', 'Thus, any language that is letter equivalent to a semilinear language is also semilinear.', 'Two strings are letter equivalent if they contain an equal number of occurrences of each terminal symbol, and two languages are letter equivalent if every string in one language is letter equivalent to a string in the other language and vice-versa.', ""Since every CFL is known to be semilinear (Parikh, 1966), in order to show semilinearity of some language, we need only show the existence of a letter equivalent CFL. Our definition of LCFRS's insists that the composition operations are linear and nonerasing."", 'Hence, the terminal symbols appearing in the structures that are composed are not lost (though a constant number of new symbols may be introduced).', 'If ψ(A) gives the number of occurrences of each terminal in the structure named by A, then, given the constraints imposed on the formalism, for each rule A → 
fp(A1, . . . , An) we have the equality ψ(A) = ψ(A1) + · · · + ψ(An) + cp, where cp is some constant.', 'We can obtain a letter equivalent CFL defined by a CFG in which, for each rule as above, we have the production A → A1 · · · An up where ψ(up) = cp.', 'Thus, the language generated by a grammar of an LCFRS is semilinear.', ""We now turn our attention to the recognition of string languages generated by these formalisms (LCFRL's)."", ""As suggested at the end of Section 3, the restrictions that have been specified in the definition of LCFRS's suggest that they can be efficiently recognized."", 'In this section, for the purposes of showing that polynomial time recognition is possible, we make the additional restriction that the contribution of a derived structure to the input string can be specified by a bounded sequence of substrings of the input.', 'Since each composition operation is linear and nonerasing, a bounded sequence of substrings associated with the resulting structure is obtained by combining the substrings in each of its arguments using only the concatenation operation, including each substring exactly once.', ""CFG's, TAG's, MCTAG's and HG's are all members of this class since they satisfy these restrictions."", ""Giving a recognition algorithm for LCFRL's involves describing the substrings of the input that are spanned by the structures derived by the LCFRS's and how the composition operation combines these substrings."", ""For example, in TAG's a derived auxiliary tree spans two substrings (to the left and right of the foot node), and the adjunction operation inserts another substring (spanned by the subtree under the node where adjunction takes place) between them (see Figure 3)."", 'We can represent any derived tree of a TAG by the two substrings that appear in its frontier, and then define how the adjunction operation concatenates the substrings.', ""Similarly, for all the LCFRS's, discussed in Section 2, we can define the relationship between a structure and the sequence of substrings 
it spans, and the effect of the composition operations on sequences of substrings."", ""A derived structure will be mapped onto a sequence of substrings (not necessarily contiguous in the input), and the composition operations will be mapped onto functions that can be defined as follows3: f((x1, . . . , xn1), (y1, . . . , yn2)) = (z1, . . . , zn3) where each zi is the concatenation of strings from the xj's and yk's."", 'The linear and nonerasing assumptions about the operations discussed in Section 4.1 require that each xj and yk is used exactly once to define the strings z1, . . . , zn3.', 'Some of the operations will be constant functions, corresponding to elementary structures, and will be written as f() = (z1, . . . , zk), where each zi is a constant, a string of terminal symbols.', 'This representation of structures by substrings and the composition operation by its effect on substrings is related to the work of Rounds (1985).', ""Although embedding this version of LCFRS's in the framework of ILFP developed by Rounds (1985) is straightforward, our motivation was to capture properties shared by a family of grammatical systems and generalize them, defining a class of related formalisms."", 'This class of formalisms has the properties that their derivation trees are local sets, and that they manipulate objects using a finite number of composition operations that use a finite number of symbols.', 'With the additional assumptions, inspired by Rounds (1985), we can show that members of this class can be recognized in polynomial time.', 'We use Alternating Turing Machines (Chandra, Kozen, and Stockmeyer, 1981) to show that polynomial time recognition is possible for the languages discussed in Section 4.3.', 'An ATM has two types of states, existential and universal.', 'In an existential state an ATM behaves like a nondeterministic TM, accepting if one of the applicable moves leads to acceptance; in a universal state the ATM accepts if all the applicable moves lead to acceptance.', 'An ATM 
may be thought of as spawning independent processes for each applicable move.', 'A k-tape ATM, M, has a read-only input tape and k read-write work tapes.', 'A step of an ATM consists of reading a symbol from each tape and optionally moving each head to the left or right one tape cell.', 'A configuration of M consists of a state of the finite control, the nonblank contents of the input tape and k work tapes, and the position of each head.', 'The space of a configuration is the sum of the lengths of the nonblank tape contents of the k work tapes.', 'M works in space S(n) if for every string that M accepts no configuration exceeds space S(n).', 'It has been shown in (Chandra et al., 1981) that if M works in space log n then there is a deterministic TM which accepts the same language in polynomial time.', 'In the next section, we show how an ATM can accept the strings generated by a grammar in an LCFRS formalism in logspace, and hence show that each family can be recognized in polynomial time.', 'We define an ATM, M, recognizing a language generated by a grammar, G, having the properties discussed in Section 4.3.', 'It can be seen that M performs a top-down recognition of the input a1 . . . an in logspace.', 'The rewrite rules and the definition of the composition operations may be stored in the finite state control since G uses a finite number of them.', 'Suppose M has to determine whether the k substrings z1, . . . , zk can be derived from some symbol A.', 'Since each zi is a contiguous substring of the input (say ai1 . . . ai2), and no two substrings overlap, we can represent zi by the pair of integers (i1, i2).', 'We assume that M is in an existential state qA, with integers i1 and i2 representing zi in the (2i − 1)th and 2ith work tapes, for 1 ≤ i ≤ k. For each rule p : A → fp(B, C), fp is mapped onto the function fp defined by the following rule: fp((x1, . 
. . , xn1), (y1, . . . , yn2)) = (z1, . . . , zk). M breaks z1, . . . , zk into substrings x1, . . . , xn1 and y1, . . . , yn2 conforming to the definition of fp.', 'M spawns as many processes as there are ways of breaking up z1, . . . , zk, and rules with A on their left-hand-side.', 'Each spawned process must check if x1, . . . , xn1 and y1, . . . , yn2 can be derived from B and C, respectively.', ""To do this, the x's and y's are stored in the next 2n1 + 2n2 tapes, and M goes to a universal state."", 'Two processes are spawned requiring B to derive x1, . . . , xn1 and C to derive y1, . . . , yn2.', 'Thus, for example, one successor process will have M in the existential state qB with the indices encoding x1, . . . , xn1 in the first 2n1 tapes.', 'For rules p : A → fp() such that fp is a constant function, giving an elementary structure, fp is defined such that fp() = (z1, . . . , zk) where each zi is a constant string.', 'M must enter a universal state and check that each of the k constant substrings is in the appropriate place (as determined by the contents of the first 2k work tapes) on the input tape.', 'In addition to the tapes required to store the indices, M requires one work tape for splitting the substrings.', 'Thus, the ATM has no more than 6kmax + 1 work tapes, where kmax is the maximum number of substrings spanned by a derived structure.', 'Since the work tapes store integers (which can be written in binary) that never exceed the size of the input, no configuration has space exceeding O(log n).', 'Thus, M works in logspace and recognition can be done on a deterministic TM in polynomial time.', 'We have studied the structural descriptions (tree sets) that can be assigned by various grammatical systems, and classified these formalisms on the basis of two features: path complexity and path independence.', ""We contrasted formalisms such as CFG's, HG's, TAG's and MCTAG's, with formalisms such as IG's and unificational systems such as LFG's and FUG's."", 'We address the question of whether or not a formalism can 
generate only structural descriptions with independent paths.', 'This property reflects an important aspect of the underlying linguistic theory associated with the formalism.', 'In a grammar which generates independent paths the derivations of sibling constituents cannot share an unbounded amount of information.', 'The importance of this property becomes clear in contrasting theories underlying GPSG (Gazdar, Klein, Pullum, and Sag, 1985), and GB (as described by Berwick, 1984) with those underlying LFG and FUG.', ""It is interesting to note, however, that the ability to produce a bounded number of dependent paths (where two dependent paths can share an unbounded amount of information) does not require machinery as powerful as that used in LFG, FUG and IG's."", ""As illustrated by MCTAG's, it is possible for a formalism to give tree sets with bounded dependent paths while still sharing the constrained rewriting properties of CFG's, HG's, and TAG's."", 'In order to observe the similarity between these constrained systems, it is crucial to abstract away from the details of the structures and operations used by the system.', ""The similarities become apparent when they are studied at the level of derivation structures: derivation tree sets of CFG's, HG's, TAG's, and MCTAG's are all local sets."", 'Independence of paths at this level reflects context freeness of rewriting and suggests why they can be recognized efficiently.', 'As suggested in Section 4.3.2, a derivation with independent paths can be divided into subcomputations with limited sharing of information.', 'We outlined the definition of a family of constrained grammatical formalisms, called Linear Context-Free Rewriting Systems.', ""This family represents an attempt to generalize the properties shared by CFG's, HG's, TAG's, and MCTAG's."", ""Like HG's, TAG's, and MCTAG's, members of LCFRS can manipulate structures more complex than terminal strings and use composition operations that are more complex than 
concatenation."", ""We place certain restrictions on the composition operations of LCFRS's, restrictions that are shared by the composition operations of the constrained grammatical systems that we have considered."", 'The operations must be linear and nonerasing, i.e., they cannot duplicate or erase structure from their arguments.', ""Notice that even though IG's and LFG's involve CFG-like productions, they are (linguistically) fundamentally different from CFG's because the composition operations need not be linear."", ""By sharing stacks (in IG's) or by using nonlinear equations over f-structures (in FUG's and LFG's), structures with unbounded dependencies between paths can be generated."", ""LCFRS's share several properties possessed by the class of mildly context-sensitive formalisms discussed by Joshi (1983/85)."", 'The results described in this paper suggest a characterization of mild context-sensitivity in terms of generalized context-freeness.', ""Having defined LCFRS's, in Section 4.2 we established the semilinearity (and hence constant growth property) of the languages generated."", 'In considering the recognition of these languages, we were forced to be more specific regarding the relationship between the structures derived by these formalisms and the substrings they span.', 'We insisted that each structure dominates a bounded number of (not necessarily adjacent) substrings.', 'The composition operations are mapped onto operations that use concatenation to define the substrings spanned by the resulting structures.', 'We showed that any system defined in this way can be recognized in polynomial time.', 'Members of LCFRS whose operations have this property can be translated into the ILFP notation (Rounds, 1985).', 'However, in order to capture the properties of various grammatical systems under consideration, our notation is more restrictive than ILFP, which was designed as a general logical notation to characterize the complete class of languages that 
are recognizable in polynomial time.', ""It is known that CFG's, HG's, and TAG's can be recognized in polynomial time since polynomial time algorithms exist for each of these formalisms."", ""A corollary of the result of Section 4.3 is that polynomial time recognition of MCTAG's is possible."", 'As discussed in Section 3, independent paths in tree sets, rather than the path complexity, may be crucial in characterizing semilinearity and polynomial time recognition.', 'We would like to relax somewhat the constraint on the path complexity of formalisms in LCFRS.', 'Formalisms such as the restricted indexed grammars (Gazdar, 1985) and members of the hierarchy of grammatical systems given by Weir (1987) have independent paths, but more complex path sets.', 'Since these path sets are semilinear, the property of independent paths in their tree sets is sufficient to ensure semilinearity of the languages generated by them.', ""In addition, the restricted version of CG's (discussed in Section 6) generates tree sets with independent paths and we hope that it can be included in a more general definition of LCFRS's containing formalisms whose tree sets have path sets that are themselves LCFRL's (as in the case of the restricted indexed grammars, and the hierarchy defined by Weir)."", ""LCFRS's have only been loosely defined in this paper; we have yet to provide a complete set of formal properties associated with members of this class."", ""In this paper, our goal has been to use the notion of LCFRS's to classify grammatical systems on the basis of their strong generative capacity."", 'In considering this aspect of a formalism, we hope to better understand the relationship between the structural descriptions generated by the grammars of a formalism, and the properties of semilinearity and polynomial recognizability.']",abstractive -C02-1025,C02-1025,6,10,The use of global features has shown excellent results in the performance on MUC-6 and MUC-7 test data.,"By making use of 
global context, it has achieved excellent results on both MUC6 and MUC7 official test data.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(T | S), where S is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in S. 
Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | S, D), where T is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S. Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded 
systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves 
performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) Πj αj^fj(h,o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function fj(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: fj(h, o) = 1 if o = true and the previous word = the, and 0 otherwise. The parameters αj are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare 
results of MENE, IdentiFinder, and MENERGI.', '1 http://maxent.sourceforge.net', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is then defined as the product of these transition probabilities and the class probabilities, where the class probabilities are determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w_i, while Borthwick uses tokens from w_i-2 to w_i+2 (from two tokens before to two tokens after w_i), we used only the tokens w_i-1, w_i, and w_i+1. 
Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_i+1 and w_i-1: Similarly, if w_i+1 (or w_i-1) is initCaps, a feature (initCaps, zone) of w_i+1 (or of w_i-1 
) is set to 1, etc. Token Information: This group consists of 10 features based on the string of w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w+1) is set to 1.', 'Same for w-1. 
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w-1 is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the word preceding the sequence onwards is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) [Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names] The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., an NER may mistake Even News Broadcasting Corp. for an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 [Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%] [Table 4: Training Data (no. of articles / no. of tokens, MUC6 and MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – and 350 / 321,000] [Table 5: Comparison of results for MUC6]', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. [Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu] [Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens] [Table 6: Comparison of results for MUC7]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive -W06-3114_sweta,W06-3114,1,170,"In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.",We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.,"['Manual and Automatic Evaluation of Machine Translation between European Languages', '[Figures 7–10: Per-system adequacy, fluency, and BLEU ranks for translation into and from English, on in-domain and out-of-domain test data; Figures 11–12: Correlation between manual and automatic scores for French-English and Spanish-English, followed by German-English and English-French correlation panels]', '[Figure 14: Correlation between manual and automatic scores for English-French; Figure 15: Correlation between manual and automatic scores for English-Spanish; followed by English-German correlation panels]', 'was done by the participants.', 'This revealed interesting clues about the properties of automatic and manual scoring.', '• We evaluated translation from English, in addition to into English.', 'English was again paired with German, French, and Spanish.', 'We dropped, however, one of the languages, Finnish, partly to keep the number of tracks manageable, 
partly because we assumed that it would be hard to find enough Finnish speakers for the manual evaluation.', 'The evaluation framework for the shared task is similar to the one used in last year’s shared task.', 'Training and testing is based on the Europarl corpus.', 'Figure 1 provides some statistics about this corpus.', 'To lower the barrier of entry to the competition, we provided a complete baseline MT system, along with data resources.', 'To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year’s shared task.', 'We are currently working on a complete open source implementation of a training and decoding system, which should become available over the summer.', 'The in-domain test set is also taken from the Europarl corpus.', 'There is twice as much language modelling data, since training data for the machine translation system is filtered against sentences of length larger than 40 words.', 'Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary.', 'The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.', 'Participants were also provided with two sets of 2,000 sentences of parallel text to be used for system development and tuning.', 'In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website2, which are published in all four languages of the shared task.', 'We aligned the texts at a sentence level across all four languages, resulting in 1064 sentences per language.', 'For statistics on this test set, refer to Figure 1.', 'The out-of-domain test set differs from the Europarl data in various ways.', 'The text type is editorials instead of speech transcripts.', 'The domain is general politics, economics and science.', 'However, it is also mostly political content (even if not focused on the internal workings of the European Union) and 
opinion.', 'We received submissions from 14 groups from 11 institutions, as listed in Figure 2.', 'Most of these groups follow a phrase-based statistical approach to machine translation.', 'Microsoft’s approach uses dependency trees; others use hierarchical phrase models.', 'Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus.', 'About half of the participants of last year’s shared task participated again.', 'The other half was replaced by other participants, so we ended up with roughly the same number.', 'Compared to last year’s shared task, the participants represent more long-term research efforts.', 'This may be the sign of a maturing research environment.', 'While building a machine translation system is a serious undertaking, in future we hope to attract more newcomers to the field by keeping the barrier of entry as low as possible.', 'For more on the participating systems, please refer to the respective system descriptions in the proceedings of the workshop.', 'For the automatic evaluation, we used BLEU, since it is the most established metric in the field.', 'The BLEU metric, like all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use.', 'It rewards matches of n-gram sequences, but measures overall grammatical coherence only indirectly at best.', 'The BLEU score has been shown to correlate well with human judgement when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005).', 'However, a recent study (Callison-Burch et al., 2006) pointed out that this correlation may not always be strong.', 'They demonstrated this with the comparison of statistical systems against (a) manually post-edited MT output, and (b) a rule-based commercial system.', 'The development of automatic scoring methods is an open field of research.', 'It was our hope that this competition, which 
included the manual and automatic evaluation of statistical systems and one rule-based commercial system, will give further insight into the relation between automatic and manual evaluation.', 'At the very least, we are creating a data resource (the manual annotations) that may be the basis of future research in evaluation metrics.', 'We computed BLEU scores for each submission with a single reference translation.', 'For each sentence, we counted how many n-grams in the system output also occurred in the reference translation.', 'By taking the ratio of matching n-grams to the total number of n-grams in the system output, we obtain the precision pn for each n-gram order n. These values for n-gram precision are combined into a BLEU score: BLEU = BP · (p1 · p2 · p3 · p4)^(1/4). The formula for the BLEU metric also includes a brevity penalty BP for overly short output, which is based on the total number of words in the system output c and in the reference r. BLEU is sensitive to tokenization.', 'Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data.', 'Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply.', 'Hence, we use the bootstrap resampling method described by Koehn (2004).', 'Following this method, we repeatedly — say, 1000 times — sample sets of sentences from the output of each system, measure their BLEU score, and use these 1000 BLEU scores as the basis for estimating a confidence interval.', 'When dropping the top and bottom 2.5%, the remaining BLEU scores define the range of the confidence interval.', 'Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another.', 'If two systems’ scores are close, this may simply be a random effect in the test data.', 'To check for this, we do pairwise bootstrap resampling: Again, we repeatedly sample sets 
of sentences, this time from both systems, and compare their BLEU scores on these sets.', 'If one system is better in 95% of the sample sets, we conclude that its higher BLEU score is statistically significantly better.', 'The bootstrap method has been criticized by Riezler and Maxwell (2005) and Collins et al. (2005) as being too optimistic in deciding for a statistically significant difference between systems.', 'We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.', 'We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block whether one system has a higher BLEU score than the other, and then use the sign test.', 'The sign test checks how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance.', 'Say we find one system doing better on 20 of the blocks, and worse on 80 of the blocks: is it significantly worse?', 'We check how likely only up to k = 20 better scores out of n = 100 would have been generated by two equal systems, using the binomial distribution: if p(0..k; n, p) < 0.05 or p(0..k; n, p) > 0.95, then we have a statistically significant difference between the systems.', 'While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only an imperfect substitute for human assessment of translation quality, or as the acronym BLEU puts it, a bilingual evaluation understudy.', 'Many human evaluation metrics have been proposed.', 'Also, the argument has been made that machine translation performance should be evaluated via task-based evaluation metrics, i.e. 
how much it assists performing a useful task, such as supporting human translators or aiding the analysis of texts.', 'The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently.', 'In this shared task, we were also confronted with this problem, and since we had no funding for paying for human judgements, we asked participants in the evaluation to share the burden.', 'Participants and other volunteers contributed about 180 hours of labor in the manual evaluation.', 'We asked participants to each judge 200–300 sentences in terms of fluency and adequacy, the most commonly used manual evaluation metrics.', 'We settled on contrastive evaluations of 5 system outputs for a single test sentence.', 'See Figure 3 for a screenshot of the evaluation tool.', 'Presenting the output of several systems allows the human judge to make more informed judgements, contrasting the quality of the different systems.', 'The judgements tend to be done more in the form of a ranking of the different systems.', 'We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.', 'While we had up to 11 submissions for a translation direction, we did decide against presenting all 11 system outputs to the human judge.', 'Our initial experimentation with the evaluation tool showed that this is often too overwhelming.', 'Making the ten judgements (2 types for 5 systems) takes on average 2 minutes.', 'Typically, judges initially spent about 3 minutes per sentence, but then accelerated with experience.', 'Judges were excluded from assessing the quality of MT systems that were submitted by their institution.', 'Sentences and systems were randomly selected and randomly shuffled for presentation.', 'We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair.', 'This is less than the 694 judgements in the 2004 DARPA/NIST evaluation, or the 
532 judgements in the 2005 DARPA/NIST evaluation.', 'This decreases the statistical significance of our results compared to those studies.', 'The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain.', 'The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:', 'Judges varied in the average score they handed out.', 'The average fluency judgement per judge ranged from 2.33 to 3.67, and the average adequacy judgement ranged from 2.56 to 4.13.', 'Since different judges judged different systems (recall that judges were excluded from judging system output from their own institution), we normalized the scores.', 'The normalized judgement per judge is the raw judgement plus (3 minus the average raw judgement for this judge).', 'In words, the judgements are normalized so that the average normalized judgement per judge is 3.', 'Another way to view the judgements is that they are less quality judgements of machine translation systems per se than rankings of machine translation systems.', 'In fact, it is very difficult to maintain consistent standards on what (say) an adequacy judgement of 3 means, even for a specific language pair.', 'Given the way judgements are collected, human judges tend to use the scores to rank systems against each other.', 'If one system is perfect, another has slight flaws and the third more flaws, a judge is inclined to hand out judgements of 5, 4, and 3.', 'On the other hand, when all systems produce muddled output, but one is better, and one is worse, but not completely wrong, a judge is inclined to hand out judgements of 4, 3, and 2.', 'The judgement of 4 in the first case will go to a vastly better system output than in the second case.', 'We therefore also normalized judgements on a per-sentence basis.', 'The normalized judgement per sentence is the raw judgement plus (0 minus the average raw judgement for this judge on this sentence).', 'Systems 
that generally do better than others will receive a positive average normalized judgement per sentence.', 'Systems that generally do worse than others will receive a negative one.', 'One may argue with these efforts on normalization, and ultimately their value should be assessed by their impact on inter-annotator agreement.', 'Given the limited number of judgements we received, we did not try to evaluate this.', 'Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing.', 'Given a set of n sentences, we can compute the sample mean x̄ and sample variance s² of the individual sentence judgements xi: The extent of the confidence interval [x̄−d, x̄+d] can be computed by d = 1.96 · s/√n (6) Pairwise Comparison: As for the automatic evaluation metric, we want to be able to rank different systems against each other, for which we need assessments of statistical significance on the differences between a pair of systems.', 'Unfortunately, we have much less data to work with than with the automatic scores.', 'Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences).', 'The way we 
collected manual judgements, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems).', 'Still, for a good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2.', 'The results of the manual and automatic evaluation of the participating system translations are detailed in the figures at the end of this paper.', 'The scores and confidence intervals are detailed first in Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16.', 'In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point.', 'In all figures, we present the per-sentence normalized judgements.', 'The normalization on a per-judge basis gave very similar rankings, only slightly less consistent with the ranking from the pairwise comparisons.', 'The confidence intervals are computed by bootstrap resampling for BLEU, and by standard significance testing for the manual scores, as described earlier in the paper.', 'Pairwise comparison is done using the sign test.', 'Often, two systems cannot be distinguished with a confidence of over 95%, so they are ranked the same.', 'This actually happens quite frequently (more below), so the rankings are broad estimates.', 'For instance: if 10 systems participate, and one system does better than 3 others, worse than 2, and is not significantly different from the remaining 4, its rank is in the interval 3–7.', 'At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU.', 'There may occasionally be a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them.', 'In Figure 4, we display the number of system comparisons for which we concluded statistical significance.', 'For the automatic scoring method BLEU, we can 
distinguish three quarters of the systems.', 'While the bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks.', 'For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy.', 'More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is.', 'We can check what the consequences of less manual annotation would have been: with half the number of manual judgements, we can distinguish about 40% of the systems, 10% less.', 'The test set included 2000 sentences from the Europarl corpus, but also 1064 sentences of out-of-domain test data.', 'Since the inclusion of out-of-domain test data was a very late decision, the participants were not informed of this.', 'So, this was a surprise element due to practical reasons, not malice.', 'All systems (except for Systran, which was not tuned to Europarl) did considerably worse on out-of-domain test data.', 'This is demonstrated by average scores over all systems, in terms of BLEU, fluency and adequacy, as displayed in Figure 5.', 'The manual scores are averages over the raw unnormalized scores.', 'It is well known that language pairs such as English-German pose more challenges to machine translation systems than language pairs such as French-English.', 'Different sentence structure and rich target language morphology are two reasons for this.', 'Again, we can compute average scores for all systems for the different language pairs (Figure 6).', 'The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.', 'The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01).', 'This is because different judges focused on different language pairs.', 'Hence, the different averages of manual scores for the different language pairs reflect the behaviour of the 
judges, not the quality of the systems on different language pairs.', 'Given the closeness of most systems and the wide overlapping confidence intervals, it is hard to make strong statements about the correlation between human judgements and automatic scoring methods such as BLEU.', 'We confirm the finding by Callison-Burch et al. (2006) that the rule-based system of Systran is not adequately appreciated by BLEU.', 'In-domain Systran scores on this metric are lower than all statistical systems, even the ones that have much worse human scores.', 'Surprisingly, this effect is much less obvious for out-of-domain test data.', 'For instance, for out-of-domain English-French, Systran has the best BLEU and manual scores.', 'Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good.', 'This cannot be the only explanation, since the discrepancy still holds, for instance, for out-of-domain French-English, where Systran receives among the best adequacy and fluency scores, but a worse BLEU score than all but one statistical system.', 'This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods.', 'So, who won the competition?', 'The best answer to this is: many research labs have very competitive systems whose performance is hard to tell apart.', 'This is not completely surprising, since all systems use very similar technology.', 'For some language pairs (such as German-English) system performance is more divergent than for others (such as English-French), at least as measured by BLEU.', 'The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French.', 'The predominant focus on building systems that translate into English has so far ignored the difficult issues of 
generating rich morphology which may not be determined solely by local context.', 'This is the first time that we organized a large-scale manual evaluation.', 'While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns.', 'For instance, in the recent IWSLT evaluation, first fluency annotations were solicited (while withholding the source sentence), and then adequacy annotations.', 'Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation.', 'Almost all annotators expressed their preference to move to a ranking-based evaluation in the future.', 'A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered?', '(b) does the translation have the same meaning, including connotations?', 'Annotators suggested that long sentences are almost impossible to judge.', 'Since all long sentence translations are somewhat muddled, even a contrastive evaluation between systems was difficult.', 'A few annotators suggested breaking up long sentences into clauses and evaluating these separately.', 'Not every annotator was fluent in both the source and the target language.', 'While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation is given.', 'However, since we extracted the test corpus automatically from web sources, the reference translation was not always accurate, due to sentence alignment errors, or because translators did not adhere to a strict sentence-by-sentence translation (say, using pronouns when referring to entities mentioned in the previous sentence).', 'Lack of correct reference translations was pointed out as a shortcoming of our evaluation.', 'One annotator suggested that this was the case for as much as 
10% of our test sentences.', 'Annotators argued for the importance of having correct and even multiple references.', 'It was also proposed to allow annotators to skip sentences that they are unable to judge.', 'We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.', 'While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems.', 'Due to many similarly performing systems, we are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.', 'The bias of automatic methods in favor of statistical systems seems to be less pronounced on out-of-domain test data.', 'The manual evaluation of scoring translations on a graded scale from 1–5 seems to be very hard to perform.', 'Replacing this with a ranked evaluation seems to be more suitable.', 'Human judges also pointed out difficulties with the evaluation of long sentences.', 'This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No.', 'HR0011-06-C-0022.']",abstractive -D10-1044_swastika,D10-1044,8,151,They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.,"In future work we plan to try this approach with more competitive SMT systems, and to extend instance weighting to other standard SMT components such as the LM, lexical phrase weights, and lexicalized distortion.","['Discriminative Instance Weighting for Domain Adaptation in Statistical Machine Translation', 'We describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.', 'This extends previous work on discriminative weighting by using a 
finer granularity, focusing on the properties of instances rather than corpus components, and using a simpler training procedure.', 'We incorporate instance weighting into a mixture-model framework, and find that it yields consistent improvements over a wide range of baselines.', 'Domain adaptation is a common concern when optimizing empirical NLP applications.', 'Even when there is training data available in the domain of interest, there is often additional data from other domains that could in principle be used to improve performance.', 'Realizing gains in practice can be challenging, however, particularly when the target domain is distant from the background data.', 'For developers of Statistical Machine Translation (SMT) systems, an additional complication is the heterogeneous nature of SMT components (word-alignment model, language model, translation model, etc.', '), which precludes a single universal approach to adaptation.', 'In this paper we study the problem of using a parallel corpus from a background domain (OUT) to improve performance on a target domain (IN) for which a smaller amount of parallel training material—though adequate for reasonable performance—is also available.', 'This is a standard adaptation problem for SMT.', 'It is difficult when IN and OUT are dissimilar, as they are in the cases we study.', 'For simplicity, we assume that OUT is homogeneous.', 'The techniques we develop can be extended in a relatively straightforward manner to the more general case when OUT consists of multiple sub-domains.', 'There is a fairly large body of work on SMT adaptation.', 'We introduce several new ideas.', 'First, we aim to explicitly characterize examples from OUT as belonging to general language or not.', 'Previous approaches have tried to find examples that are similar to the target domain.', 'This is less effective in our setting, where IN and OUT are disparate.', 'The idea of distinguishing between general and domain-specific examples is due to 
Daumé and Marcu (2006), who used a maximum-entropy model with latent variables to capture the degree of specificity.', 'Daumé (2007) applies a related idea in a simpler way, by splitting features into general and domain-specific versions.', 'This highly effective approach is not directly applicable to the multinomial models used for core SMT components, which have no natural method for combining split features, so we rely on an instance-weighting approach (Jiang and Zhai, 2007) to downweight domain-specific examples in OUT.', 'Within this framework, we use features intended to capture degree of generality, including the output from an SVM classifier that uses the intersection between IN and OUT as positive examples.', 'Our second contribution is to apply instance weighting at the level of phrase pairs.', 'Sentence pairs are the natural instances for SMT, but sentences often contain a mix of domain-specific and general language.', 'For instance, the sentence Similar improvements in haemoglobin levels were reported in the scientific literature for other epoetins would likely be considered domain-specific despite the presence of general phrases like were reported in.', 'Phrase-level granularity distinguishes our work from previous work by Matsoukas et al. (2009), who weight sentences according to sub-corpus and genre membership.', 'Finally, we make some improvements to baseline approaches.', 'We train linear mixture models for conditional phrase pair probabilities over IN and OUT so as to maximize the likelihood of an empirical joint phrase-pair distribution extracted from a development set.', 'This is a simple and effective alternative to setting weights discriminatively to maximize a metric such as BLEU.', 'A similar maximum-likelihood approach was used by Foster and Kuhn (2007), but for language models only.', 'For comparison to information-retrieval inspired baselines, e.g. (Lü et al., 2007), we select sentences from OUT using language model perplexities from IN.', 
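The linear-mixture training described above (choosing a mixing weight over IN and OUT models to maximize the likelihood of an empirical dev-set distribution) can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the toy dictionaries, the function name `em_mixture_weight`, and the use of EM on a single scalar weight are all assumptions made for the example.

```python
# Hedged sketch: learn alpha in p(s|t) = alpha*p_in(s|t) + (1-alpha)*p_out(s|t)
# by maximizing the log-likelihood of an empirical dev distribution p_tilde,
# using EM on the single mixing weight. All names are illustrative.

def em_mixture_weight(p_tilde, p_in, p_out, iters=100):
    alpha = 0.5
    for _ in range(iters):
        num = 0.0
        den = 0.0
        for pair, weight in p_tilde.items():
            a = alpha * p_in.get(pair, 0.0)
            b = (1.0 - alpha) * p_out.get(pair, 0.0)
            if a + b == 0.0:
                continue
            num += weight * a / (a + b)  # expected responsibility of the IN model
            den += weight
        alpha = num / den
    return alpha

# Toy usage: the dev distribution favors pairs the IN model scores highly,
# so the learned weight moves above 0.5.
p_in = {("a", "x"): 0.9, ("b", "x"): 0.1}
p_out = {("a", "x"): 0.2, ("b", "x"): 0.8}
p_tilde = {("a", "x"): 0.8, ("b", "x"): 0.2}
alpha = em_mixture_weight(p_tilde, p_in, p_out)
```

Each EM step replaces alpha by the expected fraction of dev-set mass attributed to the IN component, which is guaranteed not to decrease the dev-set log-likelihood.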
'This is a straightforward technique that is arguably better suited to the adaptation task than the standard method of treating representative IN sentences as queries, then pooling the match results.', 'The paper is structured as follows.', 'Section 2 describes our baseline techniques for SMT adaptation, and section 3 describes the instance-weighting approach.', 'Experiments are presented in section 4.', 'Section 5 covers relevant previous work on SMT adaptation, and section 6 concludes.', 'Standard SMT systems have a hierarchical parameter structure: top-level log-linear weights are used to combine a small set of complex features, interpreted as log probabilities, many of which have their own internal parameters and objectives.', 'The top-level weights are trained to maximize a metric such as BLEU on a small development set of approximately 1000 sentence pairs.', 'Thus, provided at least this amount of IN data is available—as it is in our setting—adapting these weights is straightforward.', 'We focus here instead on adapting the two most important features: the language model (LM), which estimates the probability p(w|h) of a target word w following an n-gram h; and the translation models (TM) p(s|t) and p(t|s), which give the probability of source phrase s translating to target phrase t, and vice versa.', 'We do not adapt the alignment procedure for generating the phrase table from which the TM distributions are derived.', 'The natural baseline approach is to concatenate data from IN and OUT.', 'Its success depends on the two domains being relatively close, and on the OUT corpus not being so large as to overwhelm the contribution of IN.', 'When OUT is large and distinct, its contribution can be controlled by training separate IN and OUT models, and weighting their combination.', 'An easy way to achieve this is to put the domain-specific LMs and TMs into the top-level log-linear model and learn optimal weights with MERT (Och, 2003).', 'This has the potential drawback 
of increasing the number of features, which can make MERT less stable (Foster and Kuhn, 2009).', 'Apart from MERT difficulties, a conceptual problem with log-linear combination is that it multiplies feature probabilities, essentially forcing different features to agree on high-scoring candidates.', 'This is appropriate in cases where it is sanctioned by Bayes’ law, such as multiplying LM and TM probabilities, but for adaptation a more suitable framework is often a mixture model in which each event may be generated from some domain.', 'This leads to a linear combination of domain-specific probabilities, with weights in [0, 1], normalized to sum to 1.', 'Linear weights are difficult to incorporate into the standard MERT procedure because they are “hidden” within a top-level probability that represents the linear combination.1 Following previous work (Foster and Kuhn, 2007), we circumvent this problem by choosing weights to optimize corpus log-likelihood, which is roughly speaking the training criterion used by the LM and TM themselves.', 'For the LM, adaptive weights are set as follows: where α is a weight vector containing an element αi for each domain (just IN and OUT in our case), pi are the corresponding domain-specific models, and ˜p(w, h) is an empirical distribution from a target-language training corpus—we used the IN dev set for this.', 'It is not immediately obvious how to formulate an equivalent to equation (1) for an adapted TM, because there is no well-defined objective for learning TMs from parallel corpora.', 'This has led previous workers to adopt ad hoc linear weighting schemes (Finch and Sumita, 2008; Foster and Kuhn, 2007; Lü et al., 2007).', 'However, we note that the final conditional estimates p(s|t) from a given phrase table maximize the likelihood of joint empirical phrase pair counts over a word-aligned corpus.', 'This suggests a direct parallel to (1): where ˜p(s, t) is a joint empirical distribution extracted from the IN dev set using the 
standard procedure.2 An alternative form of linear combination is a maximum a posteriori (MAP) combination (Bacchiani et al., 2004).', 'For the TM, this is: where cI(s, t) is the count in the IN phrase table of pair (s, t), po(s|t) is its probability under the OUT TM, and cI(t) = Σs′ cI(s′, t).', 'This is motivated by taking β po(s|t) to be the parameters of a Dirichlet prior on phrase probabilities, then maximizing posterior estimates p(s|t) given the IN corpus.', 'Intuitively, it places more weight on OUT when less evidence from IN is available.', 'To set β, we used the same criterion as for α, over a dev corpus: The MAP combination was used for TM probabilities only, in part due to a technical difficulty in formulating coherent counts when using standard LM smoothing techniques (Kneser and Ney, 1995).3 Motivated by information retrieval, a number of approaches choose “relevant” sentence pairs from OUT by matching individual source sentences from IN (Hildebrand et al., 2005; Lü et al., 2007), or individual target hypotheses (Zhao et al., 2004).', 'The matching sentence pairs are then added to the IN corpus, and the system is re-trained.', 'Although matching is done at the sentence level, this information is subsequently discarded when all matches are pooled.', 'To approximate these baselines, we implemented a very simple sentence selection algorithm in which parallel sentence pairs from OUT are ranked by the perplexity of their target half according to the IN language model.', 'The number of top-ranked pairs to retain is chosen to optimize dev-set BLEU score.', 'The sentence-selection approach is crude in that it imposes a binary distinction between useful and non-useful parts of OUT.', 'Matsoukas et al. (2009) generalize it by learning weights on sentence pairs that are used when estimating relative-frequency phrase-pair probabilities.', 'The weight on each sentence is a value in [0, 1] computed by a perceptron with Boolean features that indicate collection 
and genre membership.', 'We extend the Matsoukas et al. approach in several ways.', 'First, we learn weights on individual phrase pairs rather than sentences.', 'Intuitively, as suggested by the example in the introduction, this is the right granularity to capture domain effects.', 'Second, rather than relying on a division of the corpus into manually-assigned portions, we use features intended to capture the usefulness of each phrase pair.', 'Finally, we incorporate the instance-weighting model into a general linear combination, and learn weights and mixing parameters simultaneously: where cλ(s, t) is a modified count for pair (s, t) in OUT, u(s|t) is a prior distribution, and γ is a prior weight.', 'The original OUT counts co(s, t) are weighted by a logistic function wλ(s, t) = 1/(1 + exp(−Σi λi fi(s, t))), where each fi(s, t) is a feature intended to characterize the usefulness of (s, t), weighted by λi.', 'The mixing parameters and feature weights (collectively θ) are optimized simultaneously using dev-set maximum likelihood as before: θ̂ = argmaxθ Σs,t p̃(s, t) log p(s|t; θ). (7)', 'To motivate weighting joint OUT counts as in (6), we begin with the “ideal” objective for setting multinomial phrase probabilities θ = {p(s|t), ∀s, t}, which is the likelihood with respect to the true IN distribution pI(s, t).', 'Jiang and Zhai (2007) suggest the following derivation, making use of the true OUT distribution po(s, t): θ̂ = argmaxθ Σs,t pI(s, t) log pθ(s|t) (8) = argmaxθ Σs,t [pI(s, t)/po(s, t)] po(s, t) log pθ(s|t) ≈ argmaxθ Σs,t [pI(s, t)/po(s, t)] co(s, t) log pθ(s|t).', 'This is a somewhat less direct objective than used by Matsoukas et al., who make an iterative approximation to expected TER.', 'However, it is robust, efficient, and easy to implement.4 To perform the maximization in (7), we used the popular L-BFGS algorithm (Liu and Nocedal, 1989), which requires gradient information.', 'Dropping the conditioning on θ for brevity, and letting c̄λ(s, t) = cλ(s, t) + γ u(s|t), and c̄λ(t) = Σs′ c̄λ(s′, t). 4Note that the probabilities in (7) need only be evaluated over the support of ˜p(s, t), which is quite small when this distribution is derived from a dev set.', 'Maximizing (7) is thus much faster than a typical MERT run. where co(s, t) are the counts from OUT, as in (6).', 'This has solutions: where pI(s|t) is derived from the IN corpus using relative-frequency estimates, and po(s|t) is an instance-weighted model derived from the OUT corpus.', 'This combination generalizes (2) and (3): we use either αt = α to obtain a fixed-weight linear combination, or αt = cI(t)/(cI(t) + β) to obtain a MAP combination.', 'We model po(s|t) using a MAP criterion over weighted phrase-pair counts: and from the similarity to (5), assuming γ = 0, we see that wλ(s, t) can be interpreted as approximating p̂I(s, t)/po(s, t).', 'The logistic function, whose outputs are in [0, 1], forces p̂I(s, t) ≤ po(s, t).', 'This is not unreasonable given the application to phrase pairs from OUT, but it suggests that an interesting alternative might be to use a plain log-linear weighting function exp(Σi λi fi(s, t)), with outputs in [0, ∞].', 'We have not yet tried this.', 'An alternate approximation to (8) would be to let wλ(s, t) directly approximate p̂I(s, t).', 'With the additional assumption that (s, t) can be restricted to the support of co(s, t), this is equivalent to a “flat” alternative to (6) in which each non-zero co(s, t) is set to one.', 'This variant is tested in the experiments below.', 'A final alternate approach 
would be to combine weighted joint frequencies rather than conditional estimates, i.e.: cI(s, t) + wλ(s, t) co(s, t), suitably normalized.5 Such an approach could be simulated by a MAP-style combination in which separate β(t) values were maintained for each t. This would make the model more powerful, but at the cost of having to learn to downweight OUT separately for each t, which we suspect would require more training data for reliable performance.', 'We have not explored this strategy.', 'We used 22 features for the logistic weighting model, divided into two groups: one intended to reflect the degree to which a phrase pair belongs to general language, and one intended to capture similarity to the IN domain.', 'The 14 general-language features embody straightforward cues: frequency, “centrality” as reflected in model scores, and lack of burstiness.', 'They are: 5We are grateful to an anonymous reviewer for pointing this out.', '6One of our experimental settings lacks document boundaries, and we used this approximation in both settings for consistency.', 'The 8 similarity-to-IN features are based on word frequencies and scores from various models trained on the IN corpus: To avoid numerical problems, each feature was normalized by subtracting its mean and dividing by its standard deviation.', 'In addition to using the simple features directly, we also trained an SVM classifier with these features to distinguish between IN and OUT phrase pairs.', 'Phrase tables were extracted from the IN and OUT training corpora (not the dev set as was used for instance weighting models), and phrase pairs in the intersection of the IN and OUT phrase tables were used as positive examples, with two alternate definitions of negative examples: The classifier trained using the 2nd definition had higher accuracy on a development set.', 'We used it to score all phrase pairs in the OUT table, in order to provide a feature for the instance-weighting model.', 'We carried out translation 
experiments in two different settings.', 'The first setting uses the European Medicines Agency (EMEA) corpus (Tiedemann, 2009) as IN, and the Europarl (EP) corpus (www.statmt.org/europarl) as OUT, for English/French translation in both directions.', 'The dev and test sets were randomly chosen from the EMEA corpus.', 'Figure 1 shows sample sentences from these domains, which are widely divergent.', 'The second setting uses the news-related subcorpora for the NIST09 MT Chinese to English evaluation8 as IN, and the remaining NIST parallel Chinese/English corpora (UN, Hong Kong Laws, and Hong Kong Hansard) as OUT.', 'The dev corpus was taken from the NIST05 evaluation set, augmented with some randomly-selected material reserved from the training set.', 'The NIST06 and NIST08 evaluation sets were used for testing.', '(Thus the domain of the dev and test corpora matches IN.)', 'Compared to the EMEA/EP setting, the two domains in the NIST setting are less homogeneous and more similar to each other; there is also considerably more IN text available.', 'The corpora for both settings are summarized in table 1.', 'The reference medicine for Silapo is EPREX/ERYPO, which contains epoetin alfa.', 'Le médicament de référence de Silapo est EPREX/ERYPO, qui contient de l’époétine alfa.', '— I would also like to point out to commissioner Liikanen that it is not easy to take a matter to a national court.', 'Je voudrais préciser, à l’adresse du commissaire Liikanen, qu’il n’est pas aisé de recourir aux tribunaux nationaux.', 'We used a standard one-pass phrase-based system (Koehn et al., 2003), with the following features: relative-frequency TM probabilities in both directions; a 4-gram LM with Kneser-Ney smoothing; word-displacement distortion model; and word count.', 'Feature weights were set using Och’s MERT algorithm (Och, 2003).', 'The corpus was word-aligned using both HMM and IBM2 models, and the phrase table was the union of phrases extracted from these separate 
alignments, with a length limit of 7.', 'It was filtered to retain the top 30 translations for each source phrase using the TM part of the current log-linear model.', 'Table 2 shows results for both settings and all methods described in sections 2 and 3.', 'The 1st block contains the simple baselines from section 2.1.', 'The natural baseline (baseline) outperforms the pure IN system only for EMEA/EP fr→en.', 'Log-linear combination (loglin) improves on this in all cases, and also beats the pure IN system.', 'The 2nd block contains the IR system, which was tuned by selecting text in multiples of the size of the EMEA training corpus, according to dev set performance.', 'This significantly underperforms log-linear combination.', 'The 3rd block contains the mixture baselines.', 'The linear LM (lin lm), TM (lin tm) and MAP TM (map tm) used with non-adapted counterparts perform in all cases slightly worse than the log-linear combination, which adapts both LM and TM components.', 'However, when the linear LM is combined with a linear TM (lm+lin tm) or MAP TM (lm+map tm), the results are much better than a log-linear combination for the EMEA setting, and on a par for NIST.', 'This is consistent with the nature of these two settings: log-linear combination, which effectively takes the intersection of IN and OUT, does relatively better on NIST, where the domains are broader and closer together.', 'Somewhat surprisingly, there do not appear to be large systematic differences between linear and MAP combinations.', 'The 4th block contains instance-weighting models trained on all features, used within a MAP TM combination, and with a linear LM mixture.', 'The iw all map variant uses a non-zero γ weight on a uniform prior in pλ(s|t), and outperforms a version with γ = 0 (iw all) and the “flattened” variant described in section 3.2.', 'Clearly, retaining the original frequencies is important for good performance, and globally smoothing the final weighted frequencies is crucial.', 
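The instance-weighting scheme discussed above (a logistic weight over phrase-pair features scaling the OUT counts, MAP-combined with IN counts under a prior weight) can be sketched as follows. This is a minimal sketch under assumed toy data: the function names, the dictionary-based count tables, and the specific feature values are all illustrative assumptions, not the paper's code.

```python
import math

# Hedged sketch: scale each OUT phrase-pair count by a logistic weight over
# its features, then combine with IN counts via MAP-style smoothing with
# prior weight beta. All names and data structures are illustrative.

def logistic_weight(lmbda, feats):
    # w_lambda(s,t) = sigmoid(sum_i lambda_i * f_i(s,t)), output in [0, 1]
    return 1.0 / (1.0 + math.exp(-sum(l * f for l, f in zip(lmbda, feats))))

def map_phrase_prob(s, t, c_in, c_out, feats, lmbda, beta):
    # p(s|t) = (c_I(s,t) + beta * p_o(s|t)) / (c_I(t) + beta),
    # where p_o is the instance-weighted OUT model.
    w = {pair: logistic_weight(lmbda, f) for pair, f in feats.items()}
    c_out_w = {pair: w[pair] * c for pair, c in c_out.items()}
    z_out = sum(c for (s2, t2), c in c_out_w.items() if t2 == t)
    p_o = c_out_w.get((s, t), 0.0) / z_out if z_out > 0 else 0.0
    c_in_t = sum(c for (s2, t2), c in c_in.items() if t2 == t)
    return (c_in.get((s, t), 0.0) + beta * p_o) / (c_in_t + beta)

# Toy usage: with no IN evidence for target phrase "x", the estimate falls
# back entirely on the instance-weighted OUT model.
feats = {("a", "x"): [1.0], ("b", "x"): [-1.0]}
c_out = {("a", "x"): 10.0, ("b", "x"): 10.0}
p = map_phrase_prob("a", "x", {}, c_out, feats, [2.0], 5.0)
```

Because the logistic weight lies in [0, 1], pairs with unfavorable features only ever have their OUT counts shrunk, matching the "downweight domain-specific OUT examples" intent described in the text.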
'This best instance-weighting model beats the equivalent model without instance weights by between 0.6 BLEU and 1.8 BLEU, and beats the log-linear baseline by a large margin.', 'The final block in Table 2 shows models trained on feature subsets and on the SVM feature described in section 3.4.', 'The general-language features have a slight advantage over the similarity features, and both are better than the SVM feature.', 'We have already mentioned the closely related work by Matsoukas et al. (2009) on discriminative corpus weighting, and Jiang and Zhai (2007) on (nondiscriminative) instance weighting.', 'It is difficult to directly compare the Matsoukas et al. results with ours, since our out-of-domain corpus is homogeneous; given heterogeneous training data, however, it would be trivial to include Matsoukas-style identity features in our instance-weighting model.', 'Although these authors report better gains than ours, they are with respect to a non-adapted baseline.', 'Finally, we note that Jiang’s instance-weighting framework is broader than we have presented above, encompassing among other possibilities the use of unlabelled IN data, which is applicable to SMT settings where source-only IN corpora are available.', 'It is also worth pointing out a connection with Daumé’s (2007) work that splits each feature into domain-specific and general copies.', 'At first glance, this seems only peripherally related to our work, since the specific/general distinction is made for features rather than instances.', 'However, for multinomial models like our LMs and TMs, there is a one-to-one correspondence between instances and features, e.g. the correspondence between a phrase pair (s, t) and its conditional multinomial probability p(s|t).', 'As mentioned above, it is not obvious how to apply Daumé’s approach to multinomials, which do not have a mechanism for combining split features.', 'Recent work by Finkel and Manning (2009) which re-casts Daumé’s approach in a hierarchical MAP
framework may be applicable to this problem.', 'Moving beyond directly related work, major themes in SMT adaptation include the IR (Hildebrand et al., 2005; Lü et al., 2007; Zhao et al., 2004) and mixture (Finch and Sumita, 2008; Foster and Kuhn, 2007; Koehn and Schroeder, 2007; Lü et al., 2007) approaches for LMs and TMs described above, as well as methods for exploiting monolingual in-domain text, typically by translating it automatically and then performing self-training (Bertoldi and Federico, 2009; Ueffing et al., 2007; Schwenk and Senellart, 2009).', 'There has also been some work on adapting the word alignment model prior to phrase extraction (Civera and Juan, 2007; Wu et al., 2005), and on dynamically choosing a dev set (Xu et al., 2007).', 'Other work includes transferring latent topic distributions from source to target language for LM adaptation (Tam et al., 2007), and adapting features at the sentence level to different categories of sentence (Finch and Sumita, 2008).', 'In this paper we have proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.', 'Each out-of-domain phrase pair is characterized by a set of simple features intended to reflect how useful it will be.', 'The features are weighted within a logistic model to give an overall weight that is applied to the phrase pair’s frequency prior to making MAP-smoothed relative-frequency estimates (different weights are learned for each conditioning direction).', 'These estimates are in turn combined linearly with relative-frequency estimates from an in-domain phrase table.', 'Mixing, smoothing, and instance-feature weights are learned at the same time using an efficient maximum-likelihood procedure that relies on only a small in-domain development corpus.', 'We obtained positive results using a very simple phrase-based system in two different adaptation settings: using English/French Europarl to improve performance on a small,
specialized medical domain; and using non-news portions of the NIST09 training material to improve performance on the news-related corpora.', 'In both cases, the instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline, and gains of between 0.6 and 1.8 over an equivalent mixture model (with an identical training procedure but without instance weighting).', 'In future work we plan to try this approach with more competitive SMT systems, and to extend instance weighting to other standard SMT components such as the LM, lexical phrase weights, and lexicalized distortion.', 'We will also directly compare with a baseline similar to the Matsoukas et al. approach in order to measure the benefit from weighting phrase pairs (or n-grams) rather than full sentences.', 'Finally, we intend to explore more sophisticated instance-weighting features for capturing the degree of generality of phrase pairs.']",extractive -W99-0623_vardha,W99-0623,6,143,Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.,Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.,"['Exploiting Diversity in Natural Language Processing: Combining Parsers', 'Three state-of-the-art statistical parsers are combined to produce more accurate parses, as well as new bounds on achievable Treebank parsing accuracy.', 'Two general approaches are presented and two combination techniques are described for each approach.', 'Both parametric and non-parametric models are explored.', 'The resulting parsers surpass the best previously published performance results for the Penn Treebank.', 'The natural language processing community is in the strong position of having many available approaches to solving some of its most fundamental problems.', 'The
machine learning community has been in a similar situation and has studied the combination of multiple classifiers (Wolpert, 1992; Heath et al., 1996).', 'Their theoretical finding is simply stated: classification error rate decreases toward the noise rate exponentially in the number of independent, accurate classifiers.', 'The theory has also been validated empirically.', 'Recently, combination techniques have been investigated for part of speech tagging with positive results (van Halteren et al., 1998; Brill and Wu, 1998).', 'In both cases the investigators were able to achieve significant improvements over the previous best tagging results.', 'Similar advances have been made in machine translation (Frederking and Nirenburg, 1994), speech recognition (Fiscus, 1997) and named entity recognition (Borthwick et al., 1998).', 'The corpus-based statistical parsing community has many fast and accurate automated parsing systems, including systems produced by Collins (1997), Charniak (1997) and Ratnaparkhi (1997).', 'These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al., 1993).', 'We used these three parsers to explore parser combination techniques.', 'We are interested in combining the substructures of the input parses to produce a better parse.', 'We call this approach parse hybridization.', 'The substructures that are unanimously hypothesized by the parsers should be preserved after combination, and the combination technique should not foolishly create substructures for which there is no supporting evidence.', 'These two principles guide experimentation in this framework, and together with the evaluation measures help us decide which specific type of substructure to combine.', 'The precision and recall measures (described in more detail in Section 3) used in evaluating Treebank parsing treat each constituent as a separate entity, a minimal unit of correctness.', 'Since our goal is to perform well 
under these measures we will similarly treat constituents as the minimal substructures for combination.', ""One hybridization strategy is to let the parsers vote on constituents' membership in the hypothesized set."", 'If enough parsers suggest that a particular constituent belongs in the parse, we include it.', 'We call this technique constituent voting.', 'We include a constituent in our hypothesized parse if it appears in the output of a majority of the parsers.', 'In our particular case the majority requires the agreement of only two parsers because we have only three.', 'This technique has the advantage of requiring no training, but it has the disadvantage of treating all parsers equally even though they may have differing accuracies or may specialize in modeling different phenomena.', 'Another technique for parse hybridization is to use a naïve Bayes classifier to determine which constituents to include in the parse.', 'The development of a naïve Bayes classifier involves learning how much each parser should be trusted for the decisions it makes.', 'Our original hope in combining these parsers is that their errors are independently distributed.', 'This is equivalent to the assumption used in probability estimation for naïve Bayes classifiers, namely that the attribute values are conditionally independent when the target value is given.', 'For this reason, naïve Bayes classifiers are well-matched to this problem.', 'In Equations 1 through 3 we develop the model for constructing our parse using naïve Bayes classification.', 'C is the union of the sets of constituents suggested by the parsers. 
r(c) is a binary function returning t (for true) precisely when the constituent c ∈ C should be included in the hypothesis.', 'Mi(c) is a binary function returning t when parser i (from among the k parsers) suggests constituent c should be in the parse.', 'The hypothesized parse is then the set of constituents that are likely (P > 0.5) to be in the parse according to this model.', 'The estimation of the probabilities in the model is carried out as shown in Equation 4.', 'Here N(·) counts the number of hypothesized constituents in the development set that match the binary predicate specified as an argument.', 'Under certain conditions the constituent voting and naïve Bayes constituent combination techniques are guaranteed to produce sets of constituents with no crossing brackets.', 'There are simply not enough votes remaining to allow any of the crossing structures to enter the hypothesized constituent set.', 'Lemma: If the number of votes required by constituent voting is greater than half of the parsers under consideration the resulting structure has no crossing constituents.', 'Proof: Assume a pair of crossing constituents appears in the output of the constituent voting technique using k parsers.', 'Call the crossing constituents A and B.', 'A receives a votes, and B receives b votes.', 'Each of the constituents must have received at least ⌈(k+1)/2⌉ votes from the k parsers, so a ≥ ⌈(k+1)/2⌉ and b ≥ ⌈(k+1)/2⌉.', 'Let s = a + b.', 'None of the parsers produce parses with crossing brackets, so none of them votes for both of the assumed constituents.', 'Hence, s ≤ k.
But by addition of the votes on the two parses, s ≥ 2⌈(k+1)/2⌉ > k, a contradiction.', '∎ Similarly, when the naïve Bayes classifier is configured such that the constituents require estimated probabilities strictly larger than 0.5 to be accepted, there is not enough probability mass remaining on crossing brackets for them to be included in the hypothesis.', 'In general, the lemma of the previous section does not ensure that all the productions in the combined parse are found in the grammars of the member parsers.', 'There is a guarantee of no crossing brackets but there is no guarantee that a constituent in the tree has the same children as it had in any of the three original parses.', 'One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.', 'This drastic tree manipulation is not appropriate for situations in which we want to assign particular structures to sentences.', 'For example, we may have semantic information (e.g. database query operations) associated with the productions in a grammar.', 'If the parse contains productions from outside our grammar the machine has no direct method for handling them (e.g.
the resulting database query may be syntactically malformed).', 'We have developed a general approach for combining parsers when preserving the entire structure of a parse tree is important.', 'The combining algorithm is presented with the candidate parses and asked to choose which one is best.', 'The combining technique must act as a multi-position switch indicating which parser should be trusted for the particular sentence.', 'We call this approach parser switching.', 'Once again we present both a non-parametric and a parametric technique for this task.', 'First we present the non-parametric version of parser switching, similarity switching: The intuition for this technique is that we can measure a similarity between parses by counting the constituents they have in common.', 'We pick the parse that is most similar to the other parses by choosing the one with the highest sum of pairwise similarities.', 'This is the parse that is closest to the centroid of the observed parses under the similarity metric.', 'The probabilistic version of this procedure is straightforward: We once again assume independence among our various member parsers.', 'Furthermore, we know one of the original parses will be the hypothesized parse, so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in Section 2.1.', 'We model each parse as the decisions made to create it, and model those decisions as independent events.', 'Each decision determines the inclusion or exclusion of a candidate constituent.', 'The set of candidate constituents comes from the union of all the constituents suggested by the member parsers.', 'This is summarized in Equation 5.', 'The computation of P(πi(c) | M1(c), ..., Mk(c)) has been sketched before in Equations 1 through 4.', ""In this case we are interested in finding the maximum probability parse, πi, and Mi is the set of relevant (binary) parsing decisions made by
parser i. πi is a parse selected from among the outputs of the individual parsers."", 'It is chosen such that the decisions it made in including or excluding constituents are most probable under the models for all of the parsers.', 'The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank, leaving only sections 22 and 23 completely untouched during the development of any of the parsers.', 'We used section 23 as the development set for our combining techniques, and section 22 only for final testing.', 'The development set contained 44088 constituents in 2416 sentences and the test set contained 30691 constituents in 1699 sentences.', ""A sentence was withheld from section 22 because its extreme length was troublesome for a couple of the parsers."", 'The standard measures for evaluating Penn Treebank parsing performance are precision and recall of the predicted constituents.', 'Each parse is converted into a set of constituents represented as tuples: (label, start, end).', 'The set is then compared with the set generated from the Penn Treebank parse to determine the precision and recall.', 'Precision is the portion of hypothesized constituents that are correct and recall is the portion of the Treebank constituents that are hypothesized.', 'For our experiments we also report the mean of precision and recall, which we denote by (P + R)/2 and F-measure.', 'F-measure is the harmonic mean of precision and recall, 2PR/(P + R).', 'It is closer to the smaller value of precision and recall when there is a large skew in their values.', 'We performed three experiments to evaluate our techniques.', 'The first shows how constituent features and context do not help in deciding which parser to trust.', 'We then show that the combining techniques presented above give better parsing accuracy than any of the individual parsers.', 'Finally we show the combining techniques degrade very little when a poor parser is added to
the set.', 'It is possible one could produce better models by introducing features describing constituents and their contexts because one parser could be much better than the majority of the others in particular situations.', 'For example, one parser could be more accurate at predicting noun phrases than the other parsers.', 'None of the models we have presented utilize features associated with a particular constituent (i.e. the label, span, parent label, etc.) to influence parser preference.', 'This is not an oversight.', 'Features and context were initially introduced into the models, but they refused to offer any gains in performance.', 'While we cannot prove there are no such useful features on which one should condition trust, we can give some insight into why the features we explored offered no gain.', 'Because we are working with only three parsers, the only situation in which context will help us is when it can indicate we should choose to believe a single parser that disagrees with the majority hypothesis instead of the majority hypothesis itself.', 'This is the only important case, because otherwise the simple majority combining technique would pick the correct constituent.', 'One side of the decision making process is when we choose to believe a constituent should be in the parse, even though only one parser suggests it.', 'We call such a constituent an isolated constituent.', 'If we were working with more than three parsers we could investigate minority constituents, those constituents that are suggested by at least one parser, but which the majority of the parsers do not suggest.', 'Adding the isolated constituents to our hypothesis parse could increase our expected recall, but in the cases we investigated it would invariably hurt our precision more than we would gain on recall.', 'Consider for a set of constituents the isolated constituent precision parser metric, the portion of isolated constituents that are correctly hypothesized.', ""When this 
metric is less than 0.5, we expect to incur more errors than we will remove by adding those constituents to the parse."", 'We show the results of three of the experiments we conducted to measure isolated constituent precision under various partitioning schemes.', 'In Table 1 we see with very few exceptions that the isolated constituent precision is less than 0.5 when we use the constituent label as a feature.', 'The counts represent portions of the approximately 44000 constituents hypothesized by the parsers in the development set.', 'In the cases where isolated constituent precision is larger than 0.5 the affected portion of the hypotheses is negligible.', 'Similarly, Figures 1 and 2 show how the isolated constituent precision varies by sentence length and the size of the span of the hypothesized constituent.', 'In each figure the upper graph shows the isolated constituent precision and the bottom graph shows the corresponding number of hypothesized constituents.', 'Again we notice that the isolated constituent precision is larger than 0.5 only in those partitions that contain very few samples.', 'From this we see that a finer-grained model for parser combination, at least for the features we have examined, will not give us any additional power.', 'The results in Table 2 were achieved on the development set.', 'The first two rows of the table are baselines.', 'The first row represents the average accuracy of the three parsers we combine.', ""The second row is the accuracy of the best of the three parsers."", 'The next two rows are results of oracle experiments.', 'The parser switching oracle is the upper bound on the accuracy that can be achieved on this set in the parser switching framework.', 'It is the performance we could achieve if an omniscient observer told us which parser to pick for each of the sentences.', 'The maximum precision row is the upper bound on accuracy if we could pick exactly the correct constituents from among the constituents suggested by
the three parsers.', 'Another way to interpret this is that less than 5% of the correct constituents are missing from the hypotheses generated by the union of the three parsers.', 'The maximum precision oracle is an upper bound on the possible gain we can achieve by parse hybridization.', 'We do not show the numbers for the Bayes models in Table 2 because the parameters involved were established using this set.', 'The precision and recall of similarity switching and constituent voting are both significantly better than the best individual parser, and constituent voting is significantly better than parser switching in precision. Constituent voting gives the highest accuracy for parsing the Penn Treebank reported to date.', 'Table 3 contains the results for evaluating our systems on the test set (section 22).', 'All of these systems were run on data that was not seen during their development.', 'The difference in precision between similarity and Bayes switching techniques is significant, but the difference in recall is not.', 'This is the first set that gives us a fair evaluation of the Bayes models, and the Bayes switching model performs significantly better than its non-parametric counterpart.', 'The constituent voting and naïve Bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parsers.', 'Table 4 shows how much the Bayes switching technique uses each of the parsers on the test set.', 'Parser 3, the most accurate parser, was chosen 71% of the time, and Parser 1, the least accurate parser, was chosen 16% of the time.', 'Ties are rare in Bayes switching because the models are fine-grained — many estimated probabilities are involved in each decision.', 'In the interest of testing the robustness of these combining techniques, we added a fourth, simple nonlexicalized PCFG parser.', 'The PCFG was trained from the same sections of the Penn Treebank as the other three parsers.', 'It was
then tested on section 22 of the Treebank in conjunction with the other parsers.', 'The results of this experiment can be seen in Table 5.', 'The entries in this table can be compared with those of Table 3 to see how the performance of the combining techniques degrades in the presence of an inferior parser.', 'As seen by the drop in average individual parser performance baseline, the introduced parser does not perform very well.', 'The average individual parser accuracy was reduced by more than 5% when we added this new parser, but the precision of the constituent voting technique was the only result that decreased significantly.', 'The Bayes models were able to achieve significantly higher precision than their non-parametric counterparts.', 'We see from these results that the behavior of the parametric techniques is robust in the presence of a poor parser.', 'Surprisingly, the non-parametric switching technique also exhibited robust behaviour in this situation.', 'We have presented two general approaches to studying parser combination: parser switching and parse hybridization.', 'For each experiment we gave a non-parametric and a parametric technique for combining parsers.', 'All four of the techniques studied result in parsing systems that perform better than any previously reported.', 'Both of the switching techniques, as well as the parametric hybridization technique, were also shown to be robust when a poor parser was introduced into the experiments.', 'Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.', 'Combining multiple highly-accurate independent parsers yields promising results.', 'We plan to explore more powerful techniques for exploiting the diversity of parsing methods.', 'We would like to thank Eugene Charniak, Michael Collins, and Adwait Ratnaparkhi for enabling all of this research by providing us with their parsers and helpful comments.',
'This work was funded by NSF grant IRI-9502312.', 'Both authors are members of the Center for Language and Speech Processing at Johns Hopkins University.']",extractive -P11-1061_swastika,P11-1061,2,2,"Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.","Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages.","['Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections', 'We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language.', 'Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages.', 'We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg-Kirkpatrick et al., 2010).', 'Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.', 'Supervised learning approaches have advanced the state-of-the-art on a variety of tasks in natural language processing, resulting in highly accurate systems.', 'Supervised part-of-speech (POS) taggers, for example, approach the level of inter-annotator agreement (Shen et al., 2007, 97.3% accuracy for English).', 'However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate.', 'Unsupervised learning approaches appear to be a natural solution to this problem, as they require only unannotated text for training models.', 'Unfortunately, the best completely unsupervised
English POS tagger (that does not make use of a tagging dictionary) reaches only 76.1% accuracy (Christodoulopoulos et al., 2010), making its practical usability questionable at best.', 'To bridge this gap, we consider a practically motivated scenario, in which we want to leverage existing resources from a resource-rich language (like English) when building tools for resource-poor foreign languages. We assume that absolutely no labeled training data is available for the foreign language of interest, but that we have access to parallel data with a resource-rich language.', 'This scenario is applicable to a large set of languages and has been considered by a number of authors in the past (Alshawi et al., 2000; Xi and Hwa, 2005; Ganchev et al., 2009).', 'Naseem et al. (2009) and Snyder et al.', '(2009) study related but different multilingual grammar and tagger induction tasks, where it is assumed that no labeled data at all is available.', 'Our work is closest to that of Yarowsky and Ngai (2001), but differs in two important ways.', 'First, we use a novel graph-based framework for projecting syntactic information across language boundaries.', 'To this end, we construct a bilingual graph over word types to establish a connection between the two languages (§3), and then use graph label propagation to project syntactic information from English to the foreign language (§4).', 'Second, we treat the projected labels as features in an unsupervised model (§5), rather than using them directly for supervised training.', 'To make the projection practical, we rely on the twelve universal part-of-speech tags of Petrov et al. (2011).', 'Syntactic universals are a well-studied concept in linguistics (Carnie, 2002; Newmeyer, 2005), and were recently used in similar form by Naseem et al.
(2010) for multilingual grammar induction.', 'Because there might be some controversy about the exact definitions of such universals, this set of coarse-grained POS categories is defined operationally, by collapsing language (or treebank) specific distinctions to a set of categories that exists across all languages.', 'These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics, by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels.', 'We evaluate our approach on eight European languages (§6), and show that both our contributions provide consistent and statistically significant improvements.', 'Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.’s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%).', 'The focus of this work is on building POS taggers for foreign languages, assuming that we have an English POS tagger and some parallel text between the two languages.', 'Central to our approach (see Algorithm 1) is a bilingual similarity graph built from a sentence-aligned parallel corpus.', 'As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.', 'Graph construction does not require any labeled data, but makes use of two similarity functions.', 'The edge weights between the foreign language trigrams are computed using a co-occurrence-based similarity function, designed to indicate how syntactically similar the middle words of the connected trigrams are (§3.2).', 'To establish a soft correspondence between the two languages, we use a second similarity function,
which leverages standard unsupervised word alignment statistics (§3.3). Since we have no labeled foreign data, our goal is to project syntactic information from the English side to the foreign side.', 'To initialize the graph we tag the English side of the parallel text using a supervised model.', 'By aggregating the POS labels of the English tokens to types, we can generate label distributions for the English vertices.', 'Label propagation can then be used to transfer the labels to the peripheral foreign vertices (i.e. the ones adjacent to the English vertices) first, and then among all of the foreign vertices (§4).', 'The POS distributions over the foreign trigram types are used as features to learn a better unsupervised POS tagger (§5).', 'The following three sections elaborate these different stages in more detail.', 'In graph-based learning approaches one constructs a graph whose vertices are labeled and unlabeled examples, and whose weighted edges encode the degree to which the examples they link have the same label (Zhu et al., 2003).', 'Graph construction for structured prediction problems such as POS tagging is non-trivial: on the one hand, using individual words as the vertices throws away the context necessary for disambiguation; on the other hand, it is unclear how to define (sequence) similarity if the vertices correspond to entire sentences.', 'Altun et al. (2005) proposed a technique that uses graph-based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning.', 'More recently, Subramanya et al.
(2010) defined a graph over the cliques in an underlying structured prediction model.', 'They considered a semi-supervised POS tagging scenario and showed that one can use a graph over trigram types, and edge weights based on distributional similarity, to improve a supervised conditional random field tagger.', 'We extend Subramanya et al.’s intuitions to our bilingual setup.', 'Because the information flow in our graph is asymmetric (from English to the foreign language), we use different types of vertices for each language.', 'The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010).', 'On the English side, however, the vertices (denoted by Ve) correspond to word types.', 'Because all English vertices are going to be labeled, we do not need to disambiguate them by embedding them in trigrams.', 'Furthermore, we do not connect the English vertices to each other, but only to foreign language vertices. The graph vertices are extracted from the different sides of a parallel corpus (De, Df) and an additional unlabeled monolingual foreign corpus Ff, which will be used later for training.', 'We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages.', 'Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al.
(2010).', 'We briefly review it here for completeness.', 'We define a symmetric similarity function K(ui, uj) over two foreign language vertices ui, uj ∈ Vf based on the co-occurrence statistics of the nine feature concepts given in Table 1.', 'Each feature concept is akin to a random variable and its occurrence in the text corresponds to a particular instantiation of that random variable.', 'For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two.5 The similarity between two trigram types is given by summing the PMI values over the feature instantiations that they have in common.', 'This is similar to stacking the different feature instantiations into long (sparse) vectors and computing the cosine similarity between them.', 'Finally, note that while most feature concepts are lexicalized, others, such as the suffix concept, are not.', 'Given this similarity function, we define a nearest neighbor graph, where the edge weight for the n most similar vertices is set to the value of the similarity function and to 0 for all other vertices.', 'We use N(u) to denote the neighborhood of vertex u, and fixed n = 5 in our experiments.', 'To define a similarity function between the English and the foreign vertices, we rely on high-confidence word alignments.', 'Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences De 5Note that many combinations are impossible, giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don’t have words in common. 
and their foreign language translations Df.6 Label propagation in the graph will provide coverage and high recall, and we therefore extract only intersected high-confidence (> 0.9) alignments De↔f.', 'Based on these high-confidence alignments we can extract tuples of the form [u ↔ v], where u is a foreign trigram type whose middle word aligns to an English word type v. Our bilingual similarity function then sets the edge weights in proportion to these tuple counts.', 'So far the graph has been completely unlabeled.', 'To initialize the graph for label propagation we use a supervised English tagger to label the English side of the bitext.7 We then simply count the individual labels of the English tokens and normalize the counts to produce tag distributions over English word types.', 'These tag distributions are used to initialize the label distributions over the English vertices in the graph.', 'Note that since all English vertices were extracted from the parallel text, we will have an initial label distribution for all vertices in Ve.', 'A very small excerpt from an Italian-English graph is shown in Figure 1.', 'As one can see, only the trigrams [suo incarceramento ,], [suo iter ,] and [suo carattere ,] are connected to English words.', 'In this particular case, all English vertices are labeled as nouns by the supervised tagger.', 'In general, the neighborhoods can be more diverse and we allow a soft label distribution over the vertices.', 'It is worth noting that the middle words of the Italian trigrams are nouns too, which demonstrates that the similarity metric connects types having the same syntactic category.', 'In the label propagation stage, we propagate the automatic English tags to the aligned Italian trigram types, followed by further propagation solely among the Italian vertices. 
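The graph initialization just described (bilingual edge weights proportional to alignment-tuple counts, and English label distributions obtained by normalizing token-level tag counts to type level) can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code; the function and variable names (`build_bilingual_graph`, `aligned_pairs`) are hypothetical:

```python
from collections import Counter, defaultdict

def build_bilingual_graph(aligned_pairs, english_tagged_tokens):
    # aligned_pairs: (foreign_trigram, english_word_type) tuples extracted
    # from intersected high-confidence (> 0.9) word alignments, where the
    # trigram's middle word aligns to the English word type.
    # english_tagged_tokens: (english_word, pos_tag) pairs produced by the
    # supervised English tagger on the bitext.

    # Bilingual edge weights are set in proportion to the tuple counts.
    pair_counts = Counter(aligned_pairs)
    trigram_totals = Counter(u for u, _ in aligned_pairs)
    edge_weights = {(u, v): c / trigram_totals[u]
                    for (u, v), c in pair_counts.items()}

    # Aggregate token-level tags to types and normalize, giving the
    # initial label distributions for the English vertices.
    tag_counts = defaultdict(Counter)
    for word, tag in english_tagged_tokens:
        tag_counts[word][tag] += 1
    label_dists = {word: {t: c / sum(cnts.values()) for t, c in cnts.items()}
                   for word, cnts in tag_counts.items()}
    return edge_weights, label_dists
```

In the paper the edge weights come from the similarity functions of §3; the proportional-count normalization above is one simple way to realize "in proportion to these tuple counts".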
the Italian vertices are connected to an automatically labeled English vertex.', 'Label propagation is used to propagate these tags inwards and results in tag distributions for the middle word of each Italian trigram.', 'Given the bilingual graph described in the previous section, we can use label propagation to project the English POS labels to the foreign language.', 'We use label propagation in two stages to generate soft labels on all the vertices in the graph.', 'In the first stage, we run a single step of label propagation, which transfers the label distributions from the English vertices to the connected foreign language vertices (say, Vfℓ) at the periphery of the graph.', 'Note that because we extracted only high-confidence alignments, many foreign vertices will not be connected to any English vertices.', 'This stage of label propagation results in a tag distribution ri over labels y, which encodes the proportion of times the middle word of ui ∈ Vfℓ aligns to English words vy tagged with label y: The second stage consists of running traditional label propagation to propagate labels from these peripheral vertices Vfℓ to all foreign language vertices in the graph, optimizing the following objective: 5 POS Induction After running label propagation (LP), we compute tag probabilities for foreign word types x by marginalizing the POS tag distributions of foreign trigrams ui = x− x x+ over the left and right context words: where the qi (i = 1, ... 
, |Vf|) are the label distributions over the foreign language vertices and µ and ν are hyperparameters that we discuss in §6.4.', 'We use a squared loss to penalize neighboring vertices that have different label distributions: ‖qi − qj‖² = Σy (qi(y) − qj(y))², and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y.', 'It can be shown that this objective is convex in q.', 'The first term in the objective function is the graph smoothness regularizer which encourages the distributions of similar vertices (large wij) to be similar.', 'The second term is a regularizer and encourages all type marginals to be uniform to the extent that is allowed by the first two terms (cf. maximum entropy principle).', 'If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that the converged marginal for this vertex will be uniform over all tags, allowing the middle word of such an unlabeled vertex to take on any of the possible tags.', 'While it is possible to derive a closed form solution for this convex objective function, it would require the inversion of a matrix of order |Vf|.', 'Instead, we resort to an iterative update-based method.', 'We formulate the update as follows: where ∀ui ∈ Vf \\ Vfℓ, γi(y) and κi are defined as: We ran this procedure for 10 iterations.', 'We then extract a set of possible tags tx(y) by eliminating labels whose probability is below a threshold value τ: We describe how we choose τ in §6.4.', 'This vector tx is constructed for every word in the foreign vocabulary and will be used to provide features for the unsupervised foreign language POS tagger.', 'We develop our POS induction model based on the feature-based HMM of Berg-Kirkpatrick et al. 
(2010).', 'For a sentence x and a state sequence z, a first order Markov model defines a distribution:', 'In a traditional Markov model, the emission distribution PΘ(Xi = xi | Zi = zi) is a set of multinomials.', 'The feature-based model replaces the emission distribution with a log-linear model, such that: (9) where Val(X) corresponds to the entire vocabulary.', 'This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation.', 'In our experiments, we used the same set of features as Berg-Kirkpatrick et al. (2010): an indicator feature based on the word identity x, features checking whether x contains digits or hyphens, whether the first letter of x is upper case, and suffix features up to length 3.', 'All features were conjoined with the state z.', 'We trained this model by optimizing the following objective function: Note that this involves marginalizing out all possible state configurations z for a sentence x, resulting in a non-convex objective.', 'To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989).', 'For English POS tagging, Berg-Kirkpatrick et al. 
(2010) found that this direct gradient method performed better (>7% absolute accuracy) than using a feature-enhanced modification of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977).8 Moreover, this route of optimization outperformed a vanilla HMM trained with EM by 12%.', 'We adopted this state-of-the-art model because it makes it easy to experiment with various ways of incorporating our novel constraint feature into the log-linear emission model.', 'This feature ft incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector tx.', 'The function A : F → C maps from the language-specific fine-grained tagset F to the coarser universal tagset C and is described in detail in §6.2: Note that when tx(y) = 1 the feature value is 0 and has no effect on the model, while its value is −∞ when tx(y) = 0 and constrains the HMM’s state space.', 'This formulation of the constraint feature is equivalent to the use of a tagging dictionary extracted from the graph using a threshold τ on the posterior distribution of tags for a given word type (Eq.', '7).', 'It would have therefore also been possible to use the integer programming (IP) based approach of Ravi and Knight (2009) instead of the feature-HMM for POS induction on the foreign side.', 'However, we do not explore this possibility in the current work.', 'Before presenting our results, we describe the datasets that we used, as well as two baselines.', 'We utilized two kinds of datasets in our experiments: (i) monolingual treebanks9 and (ii) large amounts of parallel text with English on one side.', 'The availability of these resources guided our selection of foreign languages.', 'For monolingual treebank data we relied on the CoNLL-X and CoNLL-2007 shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007).', 'The parallel data came from the Europarl corpus (Koehn, 2005) and the ODS United Nations dataset (UN, 2006).', 
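The pruning behaviour of the constraint feature described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the paper's implementation: `tag_dictionary` and `constraint_feature` are hypothetical names, and the mapping A : F → C from fine-grained to universal tags is abstracted away by working directly with universal tags:

```python
import math

def tag_dictionary(tag_posteriors, tau):
    # Threshold the graph's posterior tag distribution for each word type:
    # keep label y iff its probability is at least tau. This yields the
    # allowed-tag vector t_x for every word in the foreign vocabulary.
    return {word: {y for y, p in dist.items() if p >= tau}
            for word, dist in tag_posteriors.items()}

def constraint_feature(word, universal_tag, allowed):
    # 0 when t_x(y) = 1, so the feature has no effect on the model;
    # -inf when t_x(y) = 0, which removes the corresponding hidden
    # state from the HMM's state space.
    if universal_tag in allowed.get(word, set()):
        return 0.0
    return -math.inf
```

Adding a −∞ feature value inside a log-linear emission drives that emission's probability to zero, which is exactly why the construction is equivalent to a tagging dictionary.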
'Taking the intersection of languages in these resources, and selecting languages with large amounts of parallel data, yields the following set of eight Indo-European languages: Danish, Dutch, German, Greek, Italian, Portuguese, Spanish and Swedish.', 'Of course, we are primarily interested in applying our techniques to languages for which no labeled resources are available.', 'However, we needed to restrict ourselves to these languages in order to be able to evaluate the performance of our approach.', 'We paid particular attention to minimize the number of free parameters, and used the same hyperparameters for all language pairs, rather than attempting language-specific tuning.', 'We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available.', 'We use the universal POS tagset of Petrov et al. (2011) in our experiments.10 This set C consists of the following 12 coarse-grained tags: NOUN (nouns), VERB (verbs), ADJ (adjectives), ADV (adverbs), PRON (pronouns), DET (determiners), ADP (prepositions or postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), PUNC (punctuation marks) and X (a catch-all for other categories such as abbreviations or foreign words).', 'While there might be some controversy about the exact definition of such a tagset, these 12 categories cover the most frequent part-of-speech and exist in one form or another in all of the languages that we studied.', 'For each language under consideration, Petrov et al. 
(2011) provide a mapping A from the fine-grained language-specific POS tags in the foreign treebank to the universal POS tags.', 'The supervised POS tagging accuracies (on this tagset) are shown in the last row of Table 2.', 'The taggers were trained on datasets labeled with the universal tags.', 'The number of latent HMM states for each language in our experiments was set to the number of fine tags in the language’s treebank.', 'In other words, the set of hidden states F was chosen to be the fine set of treebank tags.', 'Therefore, the number of fine tags varied across languages for our experiments; however, one could as well have fixed the set of HMM states to be a constant across languages, and created one mapping to the universal POS tagset.', 'To provide a thorough analysis, we evaluated three baselines and two oracles in addition to two variants of our graph-based approach.', 'We were intentionally lenient with our baselines: the “Projection” baseline uses bilingual information by projecting POS tags directly across alignments in the parallel data.', 'For unaligned words, we set the tag to the most frequent tag in the corresponding treebank.', 'For each language, we took the same number of sentences from the bitext as there are in its treebank, and trained a supervised feature-HMM.', 'This can be seen as a rough approximation of Yarowsky and Ngai (2001).', 'We tried two versions of our graph-based approach: the “No LP” version computes the constraint feature after the first stage of label propagation (Eq.', '1).', 'Because many foreign word types are not aligned to an English word (see Table 3), and we do not run label propagation on the foreign side, we expect the projected information to have less coverage.', 'Furthermore we expect the label distributions on the foreign side to be fairly noisy, because the graph constraints have not been taken into account yet.', 'Our oracles took advantage of the labeled treebanks: While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be 
set.', 'Fortunately, performance was stable across various values, and we were able to use the same hyperparameters for all languages.', 'We used C = 1.0 as the L2 regularization constant in (Eq.', '10) and trained both EM and L-BFGS for 1000 iterations.', 'When extracting the vector tx used to compute the constraint feature from the graph, we tried three threshold values for τ (see Eq.', '7).', 'Because we don’t have a separate development set, we used the training set to select among them and found 0.2 to work slightly better than 0.1 and 0.3.', 'For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, τ = 0.2 can be used.', 'For graph propagation, the hyperparameter ν was set to 2 × 10−6 and was not tuned.', 'The graph was constructed using 2 million trigrams; we chose these by truncating the parallel datasets up to the number of sentence pairs that contained 2 million trigrams.', 'Table 2 shows our complete set of results.', 'As expected, the vanilla HMM trained with EM performs the worst.', 'The feature-HMM model works better for all languages, generalizing the results achieved for English by Berg-Kirkpatrick et al. 
(2010).', 'Our “Projection” baseline is able to benefit from the bilingual information and greatly improves upon the monolingual baselines, but falls short of the “No LP” model by 2.5% on an average.', 'The “No LP” model does not outperform direct projection for German and Greek, but performs better for six out of eight languages.', 'Overall, it gives improvements ranging from 1.1% for German to 14.7% for Italian, for an average improvement of 8.3% over the unsupervised feature-HMM model.', 'For comparison, the completely unsupervised feature-HMM baseline accuracy on the universal POS tags for English is 79.4%, and goes up to 88.7% with a treebank dictionary.', 'Our full model (“With LP”) outperforms the unsupervised baselines and the “No LP” setting for all languages.', 'It falls short of the “Projection” baseline for German, but is statistically indistinguishable in terms of accuracy.', 'As indicated by bolding, for seven out of eight languages the improvements of the “With LP” setting are statistically significant with respect to the other models, including the “No LP” setting.11 Overall, it performs 10.4% better than the hitherto state-of-the-art feature-HMM baseline, and 4.6% better than direct projection, when we macro-average the accuracy over all languages.', 'Our full model outperforms the “No LP” setting because it has better vocabulary coverage and allows the extraction of a larger set of constraint features.', 'We tabulate this increase in Table 3.', 'For all languages, the vocabulary sizes increase by several thousand words.', 'Although the tag distributions of the foreign words (Eq.', '6) are noisy, the results confirm that label propagation within the foreign language part of the graph adds significant quality for every language.', 'Figure 2 shows an excerpt of a sentence from the Italian test set and the tags assigned by four different models, as well as the gold tags.', 'While the first three models get three to four tags wrong, our best model gets 
only one word wrong and is the most accurate among the four models for this example.', 'Examining the word fidanzato for the “No LP” and “With LP” models is particularly instructive.', 'As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext.', 'As a result, its POS tag needs to be induced in the “No LP” case, while the correct tag is available as a constraint feature in the “With LP” case.11 A word-level paired t-test is significant at p < 0.01 for Danish, Greek, Italian, Portuguese, Spanish and Swedish, and p < 0.05 for Dutch.', 'We have shown the efficacy of graph-based label propagation for projecting part-of-speech information across languages.', 'Because we are interested in applying our techniques to languages for which no labeled resources are available, we paid particular attention to minimize the number of free parameters and used the same hyperparameters for all language pairs.', 'Our results suggest that it is possible to learn accurate POS taggers for languages which do not have any annotated data, but have translations into a resource-rich language.', 'Our results outperform strong unsupervised baselines as well as approaches that rely on direct projections, and bridge the gap between purely supervised and unsupervised POS tagging models.', 'We would like to thank Ryan McDonald for numerous discussions on this topic.', 'We would also like to thank Amarnag Subramanya for helping us with the implementation of label propagation and Shankar Kumar for access to the parallel data.', 'Finally, we thank Kuzman Ganchev and the three anonymous reviewers for helpful suggestions and comments on earlier drafts of this paper.']",extractive -W11-2123_vardha,W11-2123,3,7,"This paper presents methods to query N-gram language models, minimizing time and space costs.","This paper presents methods to query N-gram language models, minimizing time and space costs.","['KenLM: Faster and Smaller Language Model Queries', 'We present KenLM, 
a library that implements two data structures for efficient language model queries, reducing both time and memory costs.', 'The PROBING data structure uses linear probing hash tables and is designed for speed.', 'Compared with the widely-used SRILM, our PROBING model is 2.4 times as fast while using 57% of the memory.', 'The TRIE data structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed at lower memory consumption.', 'TRIE simultaneously uses less memory than the smallest lossless baseline and less CPU than the fastest baseline.', 'Our code is thread-safe, and integrated into the Moses, cdec, and Joshua translation systems.', 'This paper describes the several performance techniques used and presents benchmarks against alternative implementations.', 'Language models are widely applied in natural language processing, and applications such as machine translation make very frequent queries.', 'This paper presents methods to query N-gram language models, minimizing time and space costs.', 'Queries take the form p(wn|wn−1 1 ) where wn1 is an n-gram.', 'Backoff-smoothed models estimate this probability based on the observed entry with longest matching history wnf , returning p(wn|wn−1 f ) ∏f−1 i=1 b(wn−1 i ), where the probability p(wn|wn−1 f ) and backoff penalties b(wn−1 i ) are given by an already-estimated model.', 'The problem is to store these two values for a large and sparse set of n-grams in a way that makes queries efficient.', 'Many packages perform language model queries.', 'Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.', 'IRSTLM 5.60.02 (Federico et al., 2008) is a sorted trie implementation designed for lower memory consumption.', 'MITLM 0.4 (Hsu and Glass, 2008) is mostly designed for accurate model estimation, but can also compute perplexity.', 'RandLM 0.2 (Talbot and Osborne, 2007) stores large-scale models in less memory using randomized data structures.', 'BerkeleyLM revision 152 (Pauls and Klein, 
2011) implements tries based on hash tables and sorted arrays in Java with lossy quantization.', 'Sheffield Guthrie and Hepple (2010) explore several randomized compression techniques, but did not release code.', 'TPT Germann et al. (2009) describe tries with better locality properties, but did not release code.', 'These packages are further described in Section 3.', 'We substantially outperform all of them on query speed and offer lower memory consumption than lossless alternatives.', 'Performance improvements transfer to the Moses (Koehn et al., 2007), cdec (Dyer et al., 2010), and Joshua (Li et al., 2009) translation systems where our code has been integrated.', 'Our open-source (LGPL) implementation is also available for download as a standalone package with minimal (POSIX and g++) dependencies.', 'We implement two data structures: PROBING, designed for speed, and TRIE, optimized for memory.', 'The set of n-grams appearing in a model is sparse, and we want to efficiently find their associated probabilities and backoff penalties.', 'An important subproblem of language model storage is therefore sparse mapping: storing values for sparse keys using little memory then retrieving values given keys using little time.', 'We use two common techniques, hash tables and sorted arrays, describing each before the model that uses the technique.', 'Hash tables are a common sparse mapping technique used by SRILM’s default and BerkeleyLM’s hashed variant.', 'Keys to the table are hashed, using for example Austin Appleby’s MurmurHash2, to integers evenly distributed over a large range.', 'This range is collapsed to a number of buckets, typically by taking the hash modulo the number of buckets.', 'Entries landing in the same bucket are said to collide.', 'Several methods exist to handle collisions; we use linear probing because it has less memory overhead when entries are small.', 'Linear probing places at most one entry in each bucket.', 'When a collision occurs, linear probing 
places the entry to be inserted in the next (higher index) empty bucket, wrapping around as necessary.', 'Therefore, a populated probing hash table consists of an array of buckets that contain either one entry or are empty.', 'Non-empty buckets contain an entry belonging to them or to a preceding bucket where a conflict occurred.', 'Searching a probing hash table consists of hashing the key, indexing the corresponding bucket, and scanning buckets until a matching key is found or an empty bucket is encountered, in which case the key does not exist in the table.', 'Linear probing hash tables must have more buckets than entries, or else an empty bucket will never be found.', 'The ratio of buckets to entries is controlled by the space multiplier m > 1.', 'As the name implies, space is O(m) and linear in the number of entries.', 'The fraction of buckets that are empty is (m − 1)/m, so average lookup time is O(m/(m − 1)) and, crucially, constant in the number of entries.', 'When keys are longer than 64 bits, we conserve space by replacing the keys with their 64-bit hashes.', 'With a good hash function, collisions of the full 64-bit hash are exceedingly rare: one in 266 billion queries for our baseline model will falsely find a key not present.', 'Collisions between two keys in the table can be identified at model building time.', 'Further, the special hash 0 suffices to flag empty buckets.', 'The PROBING data structure is a rather straightforward application of these hash tables to store N-gram language models.', 'Unigram lookup is dense so we use an array of probability and backoff values.', 'For 2 ≤ n ≤ N, we use a hash table mapping from the n-gram to the probability and backoff3.', 'Vocabulary lookup is a hash table mapping from word to vocabulary index.', 'In all cases, the key is collapsed to its 64-bit hash.', 'Given counts cn1 where e.g. 
c1 is the vocabulary size, total memory consumption, in bits, is Our PROBING data structure places all n-grams of the same order into a single giant hash table.', 'This differs from other implementations (Stolcke, 2002; Pauls and Klein, 2011) that use hash tables as nodes in a trie, as explained in the next section.', 'Our implementation permits jumping to any n-gram of any length with a single lookup; this appears to be unique among language model implementations.', 'Sorted arrays store key-value pairs in an array sorted by key, incurring no space overhead.', 'SRILM’s compact variant, IRSTLM, MITLM, and BerkeleyLM’s sorted variant are all based on this technique.', 'Given a sorted array A, these other packages use binary search to find keys in O(log |A|) time.', 'We reduce this to O(log log |A|) time by evenly distributing keys over their range then using interpolation search4 (Perl et al., 1978).', 'Interpolation search formalizes the notion that one opens a dictionary near the end to find the word “zebra.” Initially, the algorithm knows the array begins at b ← 0 and ends at e ← |A| − 1.', 'Given a key k, it estimates the position pivot ← b + (k − A[b])(e − b)/(A[e] − A[b]). If the estimate is exact (A[pivot] = k), then the algorithm terminates successfully.', 'If e < b then the key is not found.', 'Otherwise, the scope of the search problem shrinks recursively: if A[pivot] < k then this becomes the new lower bound: b ← pivot; if A[pivot] > k then e ← pivot.', 'Interpolation search is therefore a form of binary search with better estimates informed by the uniform key distribution.', 'If the key distribution’s range is also known (i.e. 
vocabulary identifiers range from 0 to the number of words), then interpolation search can use this information instead of reading A[0] and A[|A |− 1] to estimate pivots; this optimization alone led to a 24% speed improvement.', 'The improvement is due to the cost of bit-level reads and avoiding reads that may fall in different virtual memory pages.', 'Vocabulary lookup is a sorted array of 64-bit word hashes.', 'The index in this array is the vocabulary identifier.', 'This has the effect of randomly permuting vocabulary identifiers, meeting the requirements of interpolation search when vocabulary identifiers are used as keys.', 'While sorted arrays could be used to implement the same data structure as PROBING, effectively making m = 1, we abandoned this implementation because it is slower and larger than a trie implementation.', 'The trie data structure is commonly used for language modeling.', 'Our TRIE implements the popular reverse trie, in which the last word of an n-gram is looked up first, as do SRILM, IRSTLM’s inverted variant, and BerkeleyLM except for the scrolling variant.', 'Figure 1 shows an example.', 'Nodes in the trie are based on arrays sorted by vocabulary identifier.', 'We maintain a separate array for each length n containing all n-gram entries sorted in suffix order.', 'Therefore, for n-gram wn1 , all leftward extensions wn0 are an adjacent block in the n + 1-gram array.', 'The record for wn1 stores the offset at which its extensions begin.', 'Reading the following record’s offset indicates where the block ends.', 'This technique was introduced by Clarkson and Rosenfeld (1997) and is also implemented by IRSTLM and BerkeleyLM’s compressed option.', 'SRILM inefficiently stores 64-bit pointers.', 'Unigram records store probability, backoff, and an index in the bigram table.', 'Entries for 2 < n < N store a vocabulary identifier, probability, backoff, and an index into the n + 1-gram table.', 'The highestorder N-gram array omits backoff and the 
index, since these are not applicable.', 'Values in the trie are minimally sized at the bit level, improving memory consumption over trie implementations in SRILM, IRSTLM, and BerkeleyLM.', 'Given n-gram counts {cn}n=1..N, we use ⌈log2 c1⌉ bits per vocabulary identifier and ⌈log2 cn⌉ per index into the table of n-grams.', 'When SRILM estimates a model, it sometimes removes n-grams but not n + 1-grams that extend it to the left.', 'In a model we built with default settings, 1.2% of n + 1-grams were missing their n-gram suffix.', 'This causes a problem for reverse trie implementations, including SRILM itself, because it leaves n+1-grams without an n-gram node pointing to them.', 'We resolve this problem by inserting an entry with probability set to an otherwise-invalid value (−∞).', 'Queries detect the invalid probability, using the node only if it leads to a longer match.', 'By contrast, BerkeleyLM’s hash and compressed variants will return incorrect results based on an (n − 1)-gram.', 'Floating point values may be stored in the trie exactly, using 31 bits for non-positive log probability and 32 bits for backoff5.', 'To conserve memory at the expense of accuracy, values may be quantized using q bits per probability and r bits per backoff6.', 'We allow any number of bits from 2 to 25, unlike IRSTLM (8 bits) and BerkeleyLM (17−20 bits).', 'To quantize, we use the binning method (Federico and Bertoldi, 2006) that sorts values, divides into equally sized bins, and averages within each bin.', 'The cost of storing these averages, in bits, is Because there are comparatively few unigrams, we elected to store them byte-aligned and unquantized, making every query faster.', 'Unigrams also have 64-bit overhead for vocabulary lookup.', 'Using cn to denote the number of n-grams, total memory consumption of TRIE, in bits, is plus quantization tables, if used.', 'The size of TRIE is particularly sensitive to ⌈log2 c1⌉, so vocabulary filtering is quite effective at reducing model size.', 
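The binning quantization described above (sort the values, split them into equally sized bins, average within each bin) can be sketched as follows. `binning_quantize` is a hypothetical name, and the handling of bins that do not divide evenly is simplified relative to whatever the library actually does:

```python
def binning_quantize(values, q):
    # With q bits per value there are 2**q representable bins.
    n_bins = 2 ** q
    # Indices of the values in sorted order, so ties and duplicates
    # in the original list are handled uniformly.
    order = sorted(range(len(values)), key=lambda i: values[i])
    bin_size = max(1, len(values) // n_bins)  # simplified remainder handling
    quantized = [0.0] * len(values)
    for start in range(0, len(order), bin_size):
        chunk = order[start:start + bin_size]
        avg = sum(values[i] for i in chunk) / len(chunk)
        for i in chunk:
            quantized[i] = avg  # every value in the bin maps to the bin mean
    return quantized
```

In the actual data structure only the 2^q bin averages are stored in a table and each trie entry keeps a q-bit index into it; the sketch returns the dequantized values directly for clarity.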
'SRILM (Stolcke, 2002) is widely used within academia.', ""It is generally considered to be fast (Pauls and Klein, 2011), with a default implementation based on hash tables within each trie node."", 'Each trie node is individually allocated and full 64-bit pointers are used to find them, wasting memory.', 'The compact variant uses sorted arrays instead of hash tables within each node, saving some memory, but still stores full 64-bit pointers.', 'With some minor API changes, namely returning the length of the n-gram matched, it could also be faster—though this would be at the expense of an optimization we explain in Section 4.1.', 'The PROBING model was designed to improve upon SRILM by using linear probing hash tables (though not arranged in a trie), allocating memory all at once (eliminating the need for full pointers), and being easy to compile.', 'IRSTLM (Federico et al., 2008) is an open-source toolkit for building and querying language models.', 'The developers aimed to reduce memory consumption at the expense of time.', 'Their default variant implements a forward trie, in which words are looked up in their natural left-to-right order.', 'However, their inverted variant implements a reverse trie using less CPU and the same amount of memory7.', 'Each trie node contains a sorted array of entries and they use binary search.', 'Compared with SRILM, IRSTLM adds several features: lower memory consumption, a binary file format with memory mapping, caching to increase speed, and quantization.', 'Our TRIE implementation is designed to improve upon IRSTLM using a reverse trie with improved search, bit-level packing, and stateful queries.', 'IRSTLM’s quantized variant is the inspiration for our quantized variant.', 'Unfortunately, we were unable to correctly run the IRSTLM quantized variant.', 'The developers suggested some changes, such as building the model from scratch with IRSTLM, but these did not resolve the 
problem.', 'Our code has been publicly available and integrated into Moses since October 2010.', 'Later, BerkeleyLM (Pauls and Klein, 2011) described ideas similar to ours.', 'Most similar is scrolling queries, wherein left-to-right queries that add one word at a time are optimized.', 'Both implementations employ a state object, opaque to the application, that carries information from one query to the next; we discuss both further in Section 4.2.', 'State is implemented in their scrolling variant, which is a trie annotated with forward and backward pointers.', 'The hash variant is a reverse trie with hash tables, a more memory-efficient version of SRILM’s default.', 'While the paper mentioned a sorted variant, code was never released.', 'The compressed variant uses block compression and is rather slow as a result.', 'A direct-mapped cache makes BerkeleyLM faster on repeated queries, but their fastest (scrolling) cached version is still slower than uncached PROBING, even on cache-friendly queries.', 'For all variants, we found that BerkeleyLM always rounds the floating-point mantissa to 12 bits then stores indices to unique rounded floats.', 'The 1-bit sign is almost always negative and the 8-bit exponent is not fully used on the range of values, so in practice this corresponds to quantization ranging from 17 to 20 total bits.', 'Lossy compressed models RandLM (Talbot and Osborne, 2007) and Sheffield (Guthrie and Hepple, 2010) offer better memory consumption at the expense of CPU and accuracy.', 'These enable much larger models in memory, compensating for lost accuracy.', 'Typical data structures are generalized Bloom filters that guarantee a customizable probability of returning the correct answer.', 'Minimal perfect hashing is used to find the index at which a quantized probability and possibly backoff are stored.', 'These models generally outperform our memory consumption but are much slower, even when cached.', 'In addition to the optimizations specific to each 
data structure described in Section 2, we implement several general optimizations for language modeling.', 'Applications such as machine translation use language model probability as a feature to assist in choosing between hypotheses.', 'Dynamic programming efficiently scores many hypotheses by exploiting the fact that an N-gram language model conditions on at most N − 1 preceding words.', 'We call these N − 1 words state.', 'When two partial hypotheses have equal state (including that of other features), they can be recombined and thereafter efficiently handled as a single packed hypothesis.', 'If there are too many distinct states, the decoder prunes low-scoring partial hypotheses, possibly leading to a search error.', 'Therefore, we want state to encode the minimum amount of information necessary to properly compute language model scores, so that the decoder will be faster and make fewer search errors.', 'We offer a state function s(w_1^n) = w_m^n where the substring w_m^n is guaranteed to extend (to the right) in the same way that w_1^n does for purposes of language modeling.', 'The state function is integrated into the query process so that, in lieu of the query p(w_n | w_1^{n−1}), the application issues query p(w_n | s(w_1^{n−1})) which also returns s(w_1^n).', 'The returned state s(w_1^n) may then be used in a follow-on query p(w_{n+1} | s(w_1^n)) that extends the previous query by one word.', 'These make left-to-right query patterns convenient, as the application need only provide a state and the word to append, then use the returned state to append another word, etc.', 'We have modified Moses (Koehn et al., 2007) to keep our state with hypotheses; to conserve memory, phrases do not keep state.', 'Syntactic decoders, such as cdec (Dyer et al., 2010), build state from null context then store it in the hypergraph node for later extension.', 'Language models that contain w_1^k must also contain prefixes w_1^i for 1 ≤ i ≤ k. 
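The stateful query pattern described above can be illustrated with a toy backoff model. This is a hedged Python sketch of the interface only; ToyBackoffLM, its fields, and the tiny tables are invented for illustration, and the real packages store these tables in tries or hash tables and return a minimized state rather than the full N − 1 words.

```python
# Toy sketch of stateful left-to-right queries: the application passes an
# opaque state plus the next word; the query returns the log probability
# and the next state. Illustration of the interface, not KenLM's code.

class ToyBackoffLM:
    def __init__(self, max_order, logprob, backoff):
        self.n = max_order
        self.logprob = logprob    # dict: n-gram tuple -> log10 probability
        self.backoff = backoff    # dict: context tuple -> log10 backoff

    def score(self, state, word):
        """Return (log10 p(word | state), new state)."""
        context = state
        total = 0.0
        while context and (context + (word,)) not in self.logprob:
            # back off: charge the penalty for the context word we drop
            total += self.backoff.get(context, 0.0)
            context = context[1:]
        total += self.logprob.get(context + (word,), float("-inf"))
        # new state: most recent n-1 words (a longest-suffix-that-extends
        # function, as in the text above, would often keep fewer)
        new_state = (state + (word,))[-(self.n - 1):]
        return total, new_state

lm = ToyBackoffLM(
    max_order=2,
    logprob={("the",): -1.0, ("cat",): -2.0, ("the", "cat"): -0.5},
    backoff={("the",): -0.3},
)
state = ()  # null context
p1, state = lm.score(state, "the")   # unigram lookup
p2, state = lm.score(state, "cat")   # bigram ("the", "cat") is found
```

The application never inspects the state; it only threads it from one query to the next, which is exactly what makes left-to-right scoring convenient.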
Therefore, when the model is queried for p(w_n | w_1^{n−1}) but the longest matching suffix is w_f^n, it may return state s(w_1^n) = w_f^n since no longer context will be found.', 'IRSTLM and BerkeleyLM use this state function (and a limit of N − 1 words), but it is more strict than necessary, so decoders using these packages will miss some recombination opportunities.', 'State will ultimately be used as context in a subsequent query.', 'If the context w_f^n will never extend to the right (i.e. w_f^n v is not present in the model for all words v) then no subsequent query will match the full context.', 'If the log backoff of w_f^n is also zero (it may not be in filtered models), then w_f should be omitted from the state.', 'This logic applies recursively: if w_{f+1}^n similarly does not extend and has zero log backoff, it too should be omitted, terminating with a possibly empty context.', 'We indicate whether a context with zero log backoff will extend using the sign bit: +0.0 for contexts that extend and −0.0 for contexts that do not extend.', 'RandLM and SRILM also remove context that will not extend, but SRILM performs a second lookup in its trie whereas our approach has minimal additional cost.', 'Section 4.1 explained that state s is stored by applications with partial hypotheses to determine when they can be recombined.', 'In this section, we extend state to optimize left-to-right queries.', 'All language model queries issued by machine translation decoders follow a left-to-right pattern, starting with either the begin-of-sentence token or null context for mid-sentence fragments.', 'Storing state therefore becomes a time-space tradeoff; for example, we store state with partial hypotheses in Moses but not with each phrase.', 'To optimize left-to-right queries, we extend state to store backoff information, where m is the minimal context from Section 4.1 and b is the backoff penalty.', 'Because b is a function, no additional hypothesis splitting happens.', 'As noted in Section 1, our 
code finds the longest matching entry w_f^n for query p(w_n | s(w_1^{n−1})). The probability p(w_n | w_f^{n−1}) is stored with w_f^n and the backoffs are immediately accessible in the provided state s(w_1^{n−1}). When our code walks the data structure to find w_f^n, it visits w_n^n, w_{n−1}^n, ..., w_f^n.', 'Each visited entry w_i^n stores backoff b(w_i^n).', 'These are written to the state s(w_1^n) and returned so that they can be used for the following query.', 'Saving state allows our code to walk the data structure exactly once per query.', 'Other packages walk their respective data structures once to find w_f^n and again to find {b(w_i^{n−1})}_{i=1}^{f−1} if necessary.', 'In both cases, SRILM walks its trie an additional time to minimize context as mentioned in Section 4.1.', 'BerkeleyLM uses states to optimistically search for longer n-gram matches first and must perform twice as many random accesses to retrieve backoff information.', 'Further, it needs extra pointers in the trie, increasing model size by 40%.', 'This makes memory usage comparable to our PROBING model.', 'The PROBING model can perform optimistic searches by jumping to any n-gram without needing state and without any additional memory.', 'However, this optimistic search would not visit the entries necessary to store backoff information in the outgoing state.', 'Though we do not directly compare state implementations, performance metrics in Table 1 indicate our overall method is faster.', 'Only IRSTLM does not support threading.', 'In our case multi-threading is trivial because our data structures are read-only and uncached.', 'Memory mapping also allows the same model to be shared across processes on the same machine.', 'Along with IRSTLM and TPT, our binary format is memory mapped, meaning the file and in-memory representation are the same.', 'This is especially effective at reducing load time, since raw bytes are read directly to memory—or, as happens with repeatedly used models, are already in the disk cache.', 'Lazy mapping reduces 
memory requirements by loading pages from disk only as necessary.', 'However, lazy mapping is generally slow because queries against uncached pages must wait for the disk.', 'This is especially bad with PROBING because it is based on hashing and performs random lookups, but it is not intended to be used in low-memory scenarios.', 'TRIE uses less memory and has better locality.', 'However, TRIE partitions storage by n-gram length, so walking the trie reads N disjoint pages.', 'TPT has theoretically better locality because it stores n-grams near their suffixes, thereby placing reads for a single query in the same or adjacent pages.', 'We do not experiment with models larger than physical memory in this paper because TPT is unreleased, factors such as disk speed are hard to replicate, and in such situations we recommend switching to a more compact representation, such as RandLM.', 'In all of our experiments, the binary file (whether mapped or, in the case of most other packages, interpreted) is loaded into the disk cache in advance so that lazy mapping will never fault to disk.', 'This is similar to using the Linux MAP_POPULATE flag that is our default loading mechanism.', 'This section measures performance on shared tasks in order of increasing complexity: sparse lookups, evaluating perplexity of a large file, and translation with Moses.', 'Our test machine has two Intel Xeon E5410 processors totaling eight cores, 32 GB RAM, and four Seagate Barracuda disks in software RAID 0 running Linux 2.6.18.', 'Sparse lookup is a key subproblem of language model queries.', 'We compare three hash tables: our probing implementation, GCC’s hash set, and Boost’s unordered.', 'For sorted lookup, we compare interpolation search, standard C++ binary search, and standard C++ set based on red-black trees.', 'The data structure was populated with 64-bit integers sampled uniformly without replacement.', 'For queries, we uniformly sampled 10 million hits and 10 million misses.', 'The same 
numbers were used for each data structure.', 'Time includes all queries but excludes random number generation and data structure population.', 'Figure 2 shows timing results.', 'For the PROBING implementation, hash table sizes are in the millions, so the most relevant values are on the right side of the graph, where linear probing wins.', 'It also uses less memory, with 8 bytes of overhead per entry (we store 16-byte entries with m = 1.5); linked list implementations hash set and unordered require at least 8 bytes per entry for pointers.', 'Further, the probing hash table does only one random lookup per query, explaining why it is faster on large data.', 'Interpolation search has a more expensive pivot but performs fewer pivots and reads, so it is slow on small data and faster on large data.', 'This suggests a strategy: run interpolation search until the range narrows to 4096 or fewer entries, then switch to binary search.', 'However, reads in the TRIE data structure are more expensive due to bit-level packing, so we found that it is faster to use interpolation search the entire time.', 'Memory usage is the same as with binary search and lower than with set.', 'For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.', 'The model was built with open vocabulary, modified Kneser-Ney smoothing, and default pruning settings that remove singletons of order 3 and higher.', 'Unlike Germann et al. 
(2009), we chose a model size so that all benchmarks fit comfortably in main memory.', 'Benchmarks use the package’s binary format; our code is also the fastest at building a binary file.', 'As noted in Section 4.4, disk cache state is controlled by reading the entire binary file before each test begins.', 'For RandLM, we used the settings in the documentation: 8 bits per value and false positive probability 1/256.', 'We evaluate the time and memory consumption of each data structure by computing perplexity on 4 billion tokens from the English Gigaword corpus (Parker et al., 2009).', 'Tokens were converted to vocabulary identifiers in advance and state was carried from each query to the next.', 'Table 1 shows results of the benchmark.', 'Compared to decoding, this task is cache-unfriendly in that repeated queries happen only as they naturally occur in text.', 'Therefore, performance is more closely tied to the underlying data structure than to the cache.', 'In fact, we found that enabling IRSTLM’s cache made it slightly slower, so results in Table 1 use IRSTLM without caching.', 'Moses sets the cache size parameter to 50 so we did as well; the resulting cache size is 2.82 GB.', 'The results in Table 1 show PROBING is 81% faster than TRIE, which is in turn 31% faster than the fastest baseline.', 'Memory usage in PROBING is high, though SRILM is even larger, so where memory is of concern we recommend using TRIE, if it fits in memory.', 'For even larger models, we recommend RandLM; the memory consumption of the cache is not expected to grow with model size, and it has been reported to scale well.', 'Another option is the closed-source data structures from Sheffield (Guthrie and Hepple, 2010).', 'Though we are not able to calculate their memory usage on our model, results reported in their paper suggest lower memory consumption than TRIE on large-scale models, at the expense of CPU time.', 'This task measures how well each package performs in machine translation.', 'We 
run the baseline Moses system for the French-English track of the 2011 Workshop on Machine Translation, translating the 3003-sentence test set.', 'Based on revision 4041, we modified Moses to print process statistics before terminating.', 'Process statistics are already collected by the kernel (and printing them has no meaningful impact on performance).', 'SRILM’s compact variant has an incredibly expensive destructor, dwarfing the time it takes to perform translation, and so we also modified Moses to avoid the destructor by calling exit instead of returning normally.', 'Since our destructor is an efficient call to munmap, bypassing the destructor favors only other packages.', 'The binary language model from Section 5.2 and text phrase table were forced into disk cache before each run.', 'Time starts when Moses is launched and therefore includes model loading time.', 'These conditions make the value appropriate for estimating repeated run times, such as in parameter tuning.', '(Table 1 footnotes) a: Uses lossy compression. b: The 8-bit quantized variant returned incorrect probabilities as explained in Section 3; it did 402 queries/ms using 1.80 GB. c: Memory use increased during scoring due to batch processing (MIT) or caching (Rand); the first value reports use immediately after loading while the second reports the increase during scoring. d: BerkeleyLM is written in Java which requires memory be specified in advance; timing is based on plentiful memory; we then ran binary search to determine the least amount of memory with which it would run; the first value reports resident size after loading and the second is the gap between post-loading resident memory and peak virtual memory; the developer explained that the loading process requires extra memory that it then frees. e: Based on the ratio to SRI’s speed reported in Guthrie and Hepple (2010) under different conditions; memory usage is likely much lower than ours. f: The original paper (Germann et al., 2009) provided only 2s of query timing and compared with SRI when it exceeded available RAM; the authors provided us with a ratio between TPT and SRI under different conditions.', '(Table 2 footnotes) a: Lossy compression with the same weights. b: Lossy compression with retuned weights.', 'Table 2 shows single-threaded results, mostly for comparison to IRSTLM, and Table 3 shows multi-threaded results.', 'Part of the gap between resident and virtual memory is due to the time at which data was collected.', 'Statistics are printed before Moses exits and after parts of the decoder have been destroyed.', 'Moses keeps language models and many other resources in static variables, so these are still resident in memory.', 'Further, we report current resident memory and peak virtual memory because these are the most applicable statistics provided by the kernel.', 'Overall, language modeling significantly impacts decoder performance.', 'In line with perplexity results from Table 1, the PROBING model is the fastest followed by TRIE, and subsequently other packages.', 'We incur some additional memory cost due to storing state in each hypothesis, though this is minimal compared with the size of the model itself.', 'The TRIE model continues to use the least memory of the non-lossy options.', '(Table 3 caption fragment) ...ing (-P) with MAP_POPULATE, the default. IRST is not threadsafe. Time for Moses itself to load, including loading the language model and phrase table, is included. Along with locking and background kernel operations such as prefaulting, this explains why wall time is not one-eighth that of the single-threaded case. a: Lossy compression with the same weights. b: Lossy compression with retuned weights.', 'For RandLM and IRSTLM, the effect of caching can be seen on speed and memory usage.', 'This is most severe with RandLM in the multi-threaded case, where each thread keeps a separate cache, exceeding the original model size.', 'As noted for the perplexity task, we do not expect cache to grow substantially with model size, so RandLM remains a low-memory option.', 'Caching for IRSTLM is smaller at 0.09 GB resident memory, though it supports only a single thread.', 'The BerkeleyLM direct-mapped cache is in principle faster than caches implemented by RandLM and by IRSTLM, so we may write a C++ equivalent implementation as future work.', 'RandLM’s stupid backoff variant stores counts instead of probabilities and backoffs.', 'It also does not prune, so comparing to our pruned model would be unfair.', 'Using RandLM and the documented settings (8-bit values and 1/256 false-positive probability), we built a stupid backoff model on the same data as in Section 5.2.', 'We used this data to build an unpruned ARPA file with IRSTLM’s improved-kneser-ney option and the default three pieces.', 'Table 4 shows the results.', 'We elected to run Moses single-threaded to minimize the impact of RandLM’s cache on memory use.', 'RandLM is the clear winner in RAM utilization, but is also slower and lower quality.', 'However, the point of RandLM is to scale to even larger data, compensating for this loss in quality.', 'There are many techniques for improving language model speed and reducing memory consumption.', 'For speed, we plan to implement the direct-mapped cache from BerkeleyLM.', 'Much could be done to further reduce memory consumption.', 'Raj and Whittaker (2003) show that integers in a trie implementation can be compressed substantially.', 'Quantization can be improved by jointly encoding probability and backoff.', 'For even larger models, storing counts (Talbot and Osborne, 2007; Pauls and Klein, 2011; Guthrie and Hepple, 2010) is a possibility.', 'Beyond 
optimizing the memory size of TRIE, there are alternative data structures such as those in Guthrie and Hepple (2010).', 'Finally, other packages implement language model estimation while we are currently dependent on them to generate an ARPA file.', 'While we have minimized forward-looking state in Section 4.1, machine translation systems could also benefit by minimizing backward-looking state.', 'For example, syntactic decoders (Koehn et al., 2007; Dyer et al., 2010; Li et al., 2009) perform dynamic programming parametrized by both backward- and forward-looking state.', 'If they knew that the first four words in a hypergraph node would never extend to the left and form a 5-gram, then three or even fewer words could be kept in the backward state.', 'This information is readily available in TRIE where adjacent records with equal pointers indicate no further extension of context is possible.', 'Exposing this information to the decoder will lead to better hypothesis recombination.', 'Generalizing state minimization, the model could also provide explicit bounds on probability for both backward and forward extension.', 'This would result in better rest cost estimation and better pruning. In general, tighter, but well factored, integration between the decoder and language model should produce a significant speed improvement.', 'We have described two data structures for language modeling that achieve substantial reductions in time and memory cost.', 'The PROBING model is 2.4 times as fast as the fastest alternative, SRILM, and uses less memory too.', 'The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.', 'These performance gains transfer to improved system runtime performance; though we focused on Moses, our code is the best lossless option with cdec and Joshua.', 'We attain these results using several optimizations: hashing, custom lookup tables, bit-level packing, and state for left-to-right query patterns.', 'The 
code is open-source, has minimal dependencies, and offers both C++ and Java interfaces for integration.', 'Alon Lavie advised on this work.', 'Hieu Hoang named the code “KenLM” and assisted with Moses along with Barry Haddow.', 'Adam Pauls provided a pre-release comparison to BerkeleyLM and an initial Java interface.', 'Nicola Bertoldi and Marcello Federico assisted with IRSTLM.', 'Chris Dyer integrated the code into cdec.', 'Juri Ganitkevitch answered questions about Joshua.', 'This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No.', '0750271 and by the DARPA GALE program.']",extractive -P11-1061_swastika,P11-1061,4,4,"Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.","Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.","['Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections', 'We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language.', 'Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages.', 'We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg- Kirkpatrick et al., 2010).', 'Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation 
Maximization algorithm.', 'Supervised learning approaches have advanced the state-of-the-art on a variety of tasks in natural language processing, resulting in highly accurate systems.', 'Supervised part-of-speech (POS) taggers, for example, approach the level of inter-annotator agreement (Shen et al., 2007, 97.3% accuracy for English).', 'However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate.', 'Unsupervised learning approaches appear to be a natural solution to this problem, as they require only unannotated text for training models.', 'Unfortunately, the best completely unsupervised English POS tagger (that does not make use of a tagging dictionary) reaches only 76.1% accuracy (Christodoulopoulos et al., 2010), making its practical usability questionable at best.', 'To bridge this gap, we consider a practically motivated scenario, in which we want to leverage existing resources from a resource-rich language (like English) when building tools for resource-poor foreign languages. We assume that absolutely no labeled training data is available for the foreign language of interest, but that we have access to parallel data with a resource-rich language.', 'This scenario is applicable to a large set of languages and has been considered by a number of authors in the past (Alshawi et al., 2000; Xi and Hwa, 2005; Ganchev et al., 2009).', 'Naseem et al. 
(2009) and Snyder et al.', '(2009) study related but different multilingual grammar and tagger induction tasks, where it is assumed that no labeled data at all is available.', 'Our work is closest to that of Yarowsky and Ngai (2001), but differs in two important ways.', 'First, we use a novel graph-based framework for projecting syntactic information across language boundaries.', 'To this end, we construct a bilingual graph over word types to establish a connection between the two languages (§3), and then use graph label propagation to project syntactic information from English to the foreign language (§4).', 'Second, we treat the projected labels as features in an unsupervised model (§5), rather than using them directly for supervised training.', 'To make the projection practical, we rely on the twelve universal part-of-speech tags of Petrov et al. (2011).', 'Syntactic universals are a well studied concept in linguistics (Carnie, 2002; Newmeyer, 2005), and were recently used in similar form by Naseem et al. 
(2010) for multilingual grammar induction.', 'Because there might be some controversy about the exact definitions of such universals, this set of coarse-grained POS categories is defined operationally, by collapsing language (or treebank) specific distinctions to a set of categories that exists across all languages.', 'These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics, by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels.', 'We evaluate our approach on eight European languages (§6), and show that both our contributions provide consistent and statistically significant improvements.', 'Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.’s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%).', 'The focus of this work is on building POS taggers for foreign languages, assuming that we have an English POS tagger and some parallel text between the two languages.', 'Central to our approach (see Algorithm 1) is a bilingual similarity graph built from a sentence-aligned parallel corpus.', 'As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.', 'Graph construction does not require any labeled data, but makes use of two similarity functions.', 'The edge weights between the foreign language trigrams are computed using a co-occurrence based similarity function, designed to indicate how syntactically similar the middle words of the connected trigrams are (§3.2).', 'To establish a soft correspondence between the two languages, we use a second similarity function, 
which leverages standard unsupervised word alignment statistics (§3.3). Since we have no labeled foreign data, our goal is to project syntactic information from the English side to the foreign side.', 'To initialize the graph we tag the English side of the parallel text using a supervised model.', 'By aggregating the POS labels of the English tokens to types, we can generate label distributions for the English vertices.', 'Label propagation can then be used to transfer the labels to the peripheral foreign vertices (i.e. the ones adjacent to the English vertices) first, and then among all of the foreign vertices (§4).', 'The POS distributions over the foreign trigram types are used as features to learn a better unsupervised POS tagger (§5).', 'The following three sections elaborate these different stages in more detail.', 'In graph-based learning approaches one constructs a graph whose vertices are labeled and unlabeled examples, and whose weighted edges encode the degree to which the examples they link have the same label (Zhu et al., 2003).', 'Graph construction for structured prediction problems such as POS tagging is non-trivial: on the one hand, using individual words as the vertices throws away the context necessary for disambiguation; on the other hand, it is unclear how to define (sequence) similarity if the vertices correspond to entire sentences.', 'Altun et al. (2005) proposed a technique that uses graph based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning.', 'More recently, Subramanya et al. 
(2010) defined a graph over the cliques in an underlying structured prediction model.', 'They considered a semi-supervised POS tagging scenario and showed that one can use a graph over trigram types, and edge weights based on distributional similarity, to improve a supervised conditional random field tagger.', 'We extend Subramanya et al.’s intuitions to our bilingual setup.', 'Because the information flow in our graph is asymmetric (from English to the foreign language), we use different types of vertices for each language.', 'The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010).', 'On the English side, however, the vertices (denoted by Ve) correspond to word types.', 'Because all English vertices are going to be labeled, we do not need to disambiguate them by embedding them in trigrams.', 'Furthermore, we do not connect the English vertices to each other, but only to foreign language vertices. The graph vertices are extracted from the different sides of a parallel corpus (De, Df) and an additional unlabeled monolingual foreign corpus Ff, which will be used later for training.', 'We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages.', 'Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. 
(2010).', 'We briefly review it here for completeness.', 'We define a symmetric similarity function K(ui, uj) over two foreign language vertices ui, uj ∈ Vf based on the co-occurrence statistics of the nine feature concepts given in Table 1.', 'Each feature concept is akin to a random variable and its occurrence in the text corresponds to a particular instantiation of that random variable.', 'For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two (note that many combinations are impossible, giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don’t have words in common).', 'The similarity between two trigram types is given by summing over the PMI values over feature instantiations that they have in common.', 'This is similar to stacking the different feature instantiations into long (sparse) vectors and computing the cosine similarity between them.', 'Finally, note that while most feature concepts are lexicalized, others, such as the suffix concept, are not.', 'Given this similarity function, we define a nearest neighbor graph, where the edge weight for the n most similar vertices is set to the value of the similarity function and to 0 for all other vertices.', 'We use N(u) to denote the neighborhood of vertex u, and fixed n = 5 in our experiments.', 'To define a similarity function between the English and the foreign vertices, we rely on high-confidence word alignments.', 'Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences De 
and their foreign language translations Df.', 'Label propagation in the graph will provide coverage and high recall, so we extract only intersected high-confidence (> 0.9) alignments De↔f.', 'Based on these high-confidence alignments we can extract tuples of the form [u ↔ v], where u is a foreign trigram type whose middle word aligns to an English word type v. Our bilingual similarity function then sets the edge weights in proportion to these tuple counts.', 'So far the graph has been completely unlabeled.', 'To initialize the graph for label propagation we use a supervised English tagger to label the English side of the bitext.', 'We then simply count the individual labels of the English tokens and normalize the counts to produce tag distributions over English word types.', 'These tag distributions are used to initialize the label distributions over the English vertices in the graph.', 'Note that since all English vertices were extracted from the parallel text, we will have an initial label distribution for all vertices in Ve.', 'A very small excerpt from an Italian-English graph is shown in Figure 1.', 'As one can see, only the trigrams [suo incarceramento ,], [suo iter ,] and [suo carattere ,] are connected to English words.', 'In this particular case, all English vertices are labeled as nouns by the supervised tagger.', 'In general, the neighborhoods can be more diverse and we allow a soft label distribution over the vertices.', 'It is worth noting that the middle words of the Italian trigrams are nouns too, which reflects the fact that the similarity metric connects types having the same syntactic category.', 'In the label propagation stage, we propagate the automatic English tags to the aligned Italian trigram types, followed by further propagation solely among the Italian vertices. 
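The monolingual side of this construction (PMI-weighted feature vectors plus an n-nearest-neighbor graph) can be sketched as follows. This is a toy illustration, not the authors' implementation: the trigram types, the tiny feature inventory, and the function names (`pmi_weights`, `knn_graph`) are invented for the example, and the similarity sums the PMI weights of shared feature instantiations as described above.

```python
import math
from collections import defaultdict

def pmi_weights(cooc):
    """PMI weight for each (trigram type, feature instantiation) pair.

    cooc maps a trigram type to {feature instantiation: count}.
    """
    total = sum(sum(c.values()) for c in cooc.values())
    tri_tot = {t: sum(c.values()) for t, c in cooc.items()}
    feat_tot = defaultdict(int)
    for c in cooc.values():
        for f, n in c.items():
            feat_tot[f] += n
    return {t: {f: math.log((n / total) /
                            ((tri_tot[t] / total) * (feat_tot[f] / total)))
                for f, n in c.items()}
            for t, c in cooc.items()}

def knn_graph(vecs, n=5):
    """Keep, for every vertex, only its n most similar neighbours.

    The edge weight is the summed PMI over shared feature instantiations;
    all other vertex pairs get weight 0, i.e. no edge.
    """
    def sim(u, v):
        shared = set(vecs[u]) & set(vecs[v])
        return sum(vecs[u][f] + vecs[v][f] for f in shared)
    graph = {}
    for u in vecs:
        scored = sorted(((sim(u, v), v) for v in vecs if v != u), reverse=True)
        graph[u] = {v: s for s, v in scored[:n] if s > 0}
    return graph
```

On a toy corpus, two trigram types that share a left-context word and a right-context punctuation feature end up as neighbours, while a trigram type with disjoint features gets no edge to them.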
The Italian vertices are connected to an automatically labeled English vertex.', 'Label propagation is used to propagate these tags inwards and results in tag distributions for the middle word of each Italian trigram.', 'Given the bilingual graph described in the previous section, we can use label propagation to project the English POS labels to the foreign language.', 'We use label propagation in two stages to generate soft labels on all the vertices in the graph.', 'In the first stage, we run a single step of label propagation, which transfers the label distributions from the English vertices to the connected foreign language vertices (say, Vfl) at the periphery of the graph.', 'Note that because we extracted only high-confidence alignments, many foreign vertices will not be connected to any English vertices.', 'This stage of label propagation results in a tag distribution ri over labels y, which encodes the proportion of times the middle word of ui ∈ Vf aligns to English words vy tagged with label y.', 'The second stage consists of running traditional label propagation to propagate labels from these peripheral vertices Vfl to all foreign language vertices in the graph, optimizing the following objective.', 'After running label propagation (LP), we compute tag probabilities for foreign word types x by marginalizing the POS tag distributions of foreign trigrams ui = x− x x+ over the left and right context words, where the qi (i = 1, ... 
, |Vf|) are the label distributions over the foreign language vertices and µ and ν are hyperparameters that we discuss in §6.4.', 'We use a squared loss to penalize neighboring vertices that have different label distributions: ‖qi − qj‖² = Σy (qi(y) − qj(y))², and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y.', 'It can be shown that this objective is convex in q.', 'The first term in the objective function is the graph smoothness regularizer which encourages the distributions of similar vertices (large wij) to be similar.', 'The second term is a regularizer and encourages all type marginals to be uniform to the extent that is allowed by the first two terms (cf. maximum entropy principle).', 'If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that the converged marginal for this vertex will be uniform over all tags, allowing the middle word of such an unlabeled vertex to take on any of the possible tags.', 'While it is possible to derive a closed form solution for this convex objective function, it would require the inversion of a matrix of order |Vf|.', 'Instead, we resort to an iterative update based method.', 'We formulate the update as follows: where ∀ui ∈ Vf \\ Vfl, γi(y) and κi are defined as: We ran this procedure for 10 iterations.', 'We then extract a set of possible tags tx(y) by eliminating labels whose probability is below a threshold value τ: We describe how we choose τ in §6.4.', 'This vector tx is constructed for every word in the foreign vocabulary and will be used to provide features for the unsupervised foreign language POS tagger.', 'We develop our POS induction model based on the feature-based HMM of Berg-Kirkpatrick et al. 
(2010).', 'For a sentence x and a state sequence z, a first order Markov model defines a distribution (Eq. 9).', 'In a traditional Markov model, the emission distribution PΘ(Xi = xi |Zi = zi) is a set of multinomials.', 'The feature-based model replaces the emission distribution with a log-linear model, such that: PΘ(Xi = xi |Zi = zi) = exp(θ · f(xi, zi)) / Σx′∈Val(X) exp(θ · f(x′, zi)), where Val(X) corresponds to the entire vocabulary.', 'This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation.', 'In our experiments, we used the same set of features as Berg-Kirkpatrick et al. (2010): an indicator feature based on the word identity x, features checking whether x contains digits or hyphens, whether the first letter of x is upper case, and suffix features up to length 3.', 'All features were conjoined with the state z.', 'We trained this model by optimizing the following objective function: Note that this involves marginalizing out all possible state configurations z for a sentence x, resulting in a non-convex objective.', 'To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989).', 'For English POS tagging, Berg-Kirkpatrick et al. 
(2010) found that this direct gradient method performed better (>7% absolute accuracy) than using a feature-enhanced modification of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977).', 'Moreover, this route of optimization outperformed a vanilla HMM trained with EM by 12%.', 'We adopted this state-of-the-art model because it makes it easy to experiment with various ways of incorporating our novel constraint feature into the log-linear emission model.', 'This feature ft incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector tx.', 'The function A : F → C maps from the language-specific fine-grained tagset F to the coarser universal tagset C and is described in detail in §6.2.', 'Note that when tx(y) = 1 the feature value is 0 and has no effect on the model, while its value is −∞ when tx(y) = 0 and constrains the HMM’s state space.', 'This formulation of the constraint feature is equivalent to the use of a tagging dictionary extracted from the graph using a threshold τ on the posterior distribution of tags for a given word type (Eq. 7).', 'It would have therefore also been possible to use the integer programming (IP) based approach of Ravi and Knight (2009) instead of the feature-HMM for POS induction on the foreign side.', 'However, we do not explore this possibility in the current work.', 'Before presenting our results, we describe the datasets that we used, as well as two baselines.', 'We utilized two kinds of datasets in our experiments: (i) monolingual treebanks and (ii) large amounts of parallel text with English on one side.', 'The availability of these resources guided our selection of foreign languages.', 'For monolingual treebank data we relied on the CoNLL-X and CoNLL-2007 shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007).', 'The parallel data came from the Europarl corpus (Koehn, 2005) and the ODS United Nations dataset (UN, 2006).', 
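The −∞ constraint feature described above can be made concrete with a small sketch. This is an illustrative reconstruction under assumed names (`constraint_feature`, `emission_weight`, and a toy fine-to-universal mapping), not the authors' code: adding −∞ to a log-linear score makes the exponentiated emission weight exactly 0, which is what prunes the fine tag from the HMM's state space.

```python
import math

def constraint_feature(tx, universal_tag):
    """Graph-derived constraint feature: 0 when tx(y) = 1 (the universal
    tag survived the threshold for this word type), -inf when tx(y) = 0."""
    return 0.0 if tx.get(universal_tag, 0) == 1 else float("-inf")

def emission_weight(base_score, tx, fine_tag, to_universal):
    """Unnormalized log emission score with the constraint feature added.

    to_universal plays the role of the mapping A : F -> C from fine
    treebank tags to universal tags.
    """
    return base_score + constraint_feature(tx, to_universal[fine_tag])
```

Exponentiating a score of −∞ yields 0, so a fine tag whose universal projection is not licensed by the graph contributes nothing to the HMM's emission distribution.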
'Taking the intersection of languages in these resources, and selecting languages with large amounts of parallel data, yields the following set of eight Indo-European languages: Danish, Dutch, German, Greek, Italian, Portuguese, Spanish and Swedish.', 'Of course, we are primarily interested in applying our techniques to languages for which no labeled resources are available.', 'However, we needed to restrict ourselves to these languages in order to be able to evaluate the performance of our approach.', 'We paid particular attention to minimizing the number of free parameters, and used the same hyperparameters for all language pairs, rather than attempting language-specific tuning.', 'We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available.', 'We use the universal POS tagset of Petrov et al. (2011) in our experiments.', 'This set C consists of the following 12 coarse-grained tags: NOUN (nouns), VERB (verbs), ADJ (adjectives), ADV (adverbs), PRON (pronouns), DET (determiners), ADP (prepositions or postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), PUNC (punctuation marks) and X (a catch-all for other categories such as abbreviations or foreign words).', 'While there might be some controversy about the exact definition of such a tagset, these 12 categories cover the most frequent parts of speech and exist in one form or another in all of the languages that we studied.', 'For each language under consideration, Petrov et al. 
(2011) provide a mapping A from the fine-grained language-specific POS tags in the foreign treebank to the universal POS tags.', 'The supervised POS tagging accuracies (on this tagset) are shown in the last row of Table 2.', 'The taggers were trained on datasets labeled with the universal tags.', 'The number of latent HMM states for each language in our experiments was set to the number of fine tags in the language’s treebank.', 'In other words, the set of hidden states F was chosen to be the fine set of treebank tags.', 'Therefore, the number of fine tags varied across languages for our experiments; however, one could as well have fixed the set of HMM states to be a constant across languages, and created one mapping to the universal POS tagset.', 'To provide a thorough analysis, we evaluated three baselines and two oracles in addition to two variants of our graph-based approach.', 'We were intentionally lenient with our baselines: bilingual information by projecting POS tags directly across alignments in the parallel data.', 'For unaligned words, we set the tag to the most frequent tag in the corresponding treebank.', 'For each language, we took the same number of sentences from the bitext as there are in its treebank, and trained a supervised feature-HMM.', 'This can be seen as a rough approximation of Yarowsky and Ngai (2001).', 'We tried two versions of our graph-based approach: feature after the first stage of label propagation (Eq. 1).', 'Because many foreign word types are not aligned to an English word (see Table 3), and we do not run label propagation on the foreign side, we expect the projected information to have less coverage.', 'Furthermore we expect the label distributions on the foreign side to be fairly noisy, because the graph constraints have not been taken into account yet.', 'Our oracles took advantage of the labeled treebanks: While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be 
set.', 'Fortunately, performance was stable across various values, and we were able to use the same hyperparameters for all languages.', 'We used C = 1.0 as the L2 regularization constant in (Eq. 10) and trained both EM and L-BFGS for 1000 iterations.', 'When extracting the vector tx, used to compute the constraint feature from the graph, we tried three threshold values for τ (see Eq. 7).', 'Because we don’t have a separate development set, we used the training set to select among them and found 0.2 to work slightly better than 0.1 and 0.3.', 'For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, τ = 0.2 can be used.', 'For graph propagation, the hyperparameter ν was set to 2 × 10−6 and was not tuned.', 'The graph was constructed using 2 million trigrams; we chose these by truncating the parallel datasets up to the number of sentence pairs that contained 2 million trigrams.', 'Table 2 shows our complete set of results.', 'As expected, the vanilla HMM trained with EM performs the worst.', 'The feature-HMM model works better for all languages, generalizing the results achieved for English by Berg-Kirkpatrick et al. 
(2010).', 'Our “Projection” baseline is able to benefit from the bilingual information and greatly improves upon the monolingual baselines, but falls short of the “No LP” model by 2.5% on average.', 'The “No LP” model does not outperform direct projection for German and Greek, but performs better for six out of eight languages.', 'Overall, it gives improvements ranging from 1.1% for German to 14.7% for Italian, for an average improvement of 8.3% over the unsupervised feature-HMM model.', 'For comparison, the completely unsupervised feature-HMM baseline accuracy on the universal POS tags for English is 79.4%, and goes up to 88.7% with a treebank dictionary.', 'Our full model (“With LP”) outperforms the unsupervised baselines and the “No LP” setting for all languages.', 'It falls short of the “Projection” baseline for German, but is statistically indistinguishable in terms of accuracy.', 'As indicated by bolding, for seven out of eight languages the improvements of the “With LP” setting are statistically significant with respect to the other models, including the “No LP” setting.', 'Overall, it performs 10.4% better than the hitherto state-of-the-art feature-HMM baseline, and 4.6% better than direct projection, when we macro-average the accuracy over all languages.', 'Our full model outperforms the “No LP” setting because it has better vocabulary coverage and allows the extraction of a larger set of constraint features.', 'We tabulate this increase in Table 3.', 'For all languages, the vocabulary sizes increase by several thousand words.', 'Although the tag distributions of the foreign words (Eq. 6) are noisy, the results confirm that label propagation within the foreign language part of the graph adds significant quality for every language.', 'Figure 2 shows an excerpt of a sentence from the Italian test set and the tags assigned by four different models, as well as the gold tags.', 'While the first three models get three to four tags wrong, our best model gets 
only one word wrong and is the most accurate among the four models for this example.', 'Examining the word fidanzato for the “No LP” and “With LP” models is particularly instructive.', 'As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext.', 'As a result, its POS tag needs to be induced in the “No LP” case, while the correct tag is available as a constraint feature in the “With LP” case.', '(A word-level paired t-test is significant at p < 0.01 for Danish, Greek, Italian, Portuguese, Spanish and Swedish, and at p < 0.05 for Dutch.)', 'We have shown the efficacy of graph-based label propagation for projecting part-of-speech information across languages.', 'Because we are interested in applying our techniques to languages for which no labeled resources are available, we paid particular attention to minimizing the number of free parameters and used the same hyperparameters for all language pairs.', 'Our results suggest that it is possible to learn accurate POS taggers for languages which do not have any annotated data, but have translations into a resource-rich language.', 'Our results outperform strong unsupervised baselines as well as approaches that rely on direct projections, and bridge the gap between purely supervised and unsupervised POS tagging models.', 'We would like to thank Ryan McDonald for numerous discussions on this topic.', 'We would also like to thank Amarnag Subramanya for helping us with the implementation of label propagation and Shankar Kumar for access to the parallel data.', 'Finally, we thank Kuzman Ganchev and the three anonymous reviewers for helpful suggestions and comments on earlier drafts of this paper.']",extractive
W06-3114_swastika,W06-3114,6,176,Human judges also pointed out difficulties with the evaluation of long sentences.,Human judges also pointed out difficulties with the evaluation of long sentences.,"['Manual and Automatic Evaluation of Machine Translation between European Languages', '[Figures 7–10: rankings of the participating systems by adequacy, fluency and BLEU for translation to and from English, on in-domain and out-of-domain test data]', '[Figures 11–15: correlation between manual and automatic scores for French-English, Spanish-English, German-English, English-French, English-Spanish and English-German]', 'was done by the participants.', 'This revealed interesting clues about the properties of automatic and manual scoring.', '• We evaluated translation from English, in addition to into English.', 'English was again paired with German, French, and Spanish.', 'We dropped, however, one of the languages, Finnish, partly to keep the number of tracks manageable, partly because we assumed that it would be hard to find enough Finnish speakers for the manual evaluation.', 'The evaluation framework for the shared task is similar to the one used in last year’s shared task.', 'Training and testing is based on the Europarl corpus.', 'Figure 1 provides some statistics about this corpus.', 'To lower the barrier of entrance to the competition, we provided a complete baseline MT system, along with data resources.', 'To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year’s shared task.', 'We are currently working on a complete open source implementation of a training and decoding system, which should become available over the summer. 
The training data is taken from the Europarl corpus, from which also the in-domain test set is taken.', 'There is twice as much language modelling data, since training data for the machine translation system is filtered against sentences of length larger than 40 words.', 'Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary.', 'The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.', 'Participants were also provided with two sets of 2,000 sentences of parallel text to be used for system development and tuning.', 'In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website, which are published in all the four languages of the shared task.', 'We aligned the texts at a sentence level across all four languages, resulting in 1064 sentences per language.', 'For statistics on this test set, refer to Figure 1.', 'The out-of-domain test set differs from the Europarl data in various ways.', 'The text type is editorials instead of speech transcripts.', 'The domain is general politics, economics and science.', 'However, it is also mostly political content (even if not focused on the internal workings of the European Union) and opinion.', 'We received submissions from 14 groups from 11 institutions, as listed in Figure 2.', 'Most of these groups follow a phrase-based statistical approach to machine translation.', 'Microsoft’s approach uses dependency trees, others use hierarchical phrase models.', 'Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus.', 'About half of the participants of last year’s shared task participated again.', 'The other half was replaced by other participants, so we ended up with roughly the same number.', 'Compared to last year’s shared task, the participants represent more long-term research efforts.', 'This may be the sign of a maturing research environment.', 'While building a 
machine translation system is a serious undertaking, in future we hope to attract more newcomers to the field by keeping the barrier of entry as low as possible.', 'For more on the participating systems, please refer to the respective system description in the proceedings of the workshop.', 'For the automatic evaluation, we used BLEU, since it is the most established metric in the field.', 'The BLEU metric, as all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use.', 'It rewards matches of n-gram sequences, but measures overall grammatical coherence at most indirectly.', 'The BLEU score has been shown to correlate well with human judgement, when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005).', 'However, a recent study (Callison-Burch et al., 2006) pointed out that this correlation may not always be strong.', 'They demonstrated this with the comparison of statistical systems against (a) manually post-edited MT output, and (b) a rule-based commercial system.', 'The development of automatic scoring methods is an open field of research.', 'It was our hope that this competition, which included the manual and automatic evaluation of statistical systems and one rule-based commercial system, will give further insight into the relation between automatic and manual evaluation.', 'At the very least, we are creating a data resource (the manual annotations) that may be the basis of future research in evaluation metrics.', 'We computed BLEU scores for each submission with a single reference translation.', 'For each sentence, we counted how many n-grams in the system output also occurred in the reference translation.', 'By taking the ratio of matching n-grams to the total number of n-grams in the system output, we obtain the precision pn for each n-gram order n. 
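The per-order precision computation just described, together with the geometric-mean combination and brevity penalty, can be sketched in a few lines. This is a generic single-reference BLEU in the style of Papineni et al. (2002), with clipped n-gram counts; it is not the workshop's exact scoring script, which additionally handles retokenization, lowercasing, and corpus-level aggregation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Single-sentence, single-reference BLEU: geometric mean of the
    clipped n-gram precisions p_n, times the brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        matches = sum(min(c, ref[g]) for g, c in cand.items())
        total = sum(cand.values())
        if total == 0 or matches == 0:
            return 0.0  # some precision is zero: score collapses to 0
        precisions.append(matches / total)
    c, r = len(candidate), len(reference)
    bp = 1.0 if c >= r else math.exp(1 - r / c)  # brevity penalty
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

An identical candidate and reference score 1.0; any mismatched n-gram lowers one or more of the clipped precisions and hence the score.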
These values for n-gram precision are combined into a BLEU score: The formula for the BLEU metric also includes a brevity penalty for overly short output, which is based on the total number of words in the system output c and in the reference r. BLEU is sensitive to tokenization.', 'Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data.', 'Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply.', 'Hence, we use the bootstrap resampling method described by Koehn (2004).', 'Following this method, we repeatedly (say, 1000 times) sample sets of sentences from the output of each system, measure their BLEU score, and use these 1000 BLEU scores as the basis for estimating a confidence interval.', 'After dropping the top and bottom 2.5%, the remaining BLEU scores define the range of the confidence interval.', 'Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another.', 'If two systems’ scores are close, this may simply be a random effect in the test data.', 'To check for this, we do pairwise bootstrap resampling: Again, we repeatedly sample sets of sentences, this time from both systems, and compare their BLEU scores on these sets.', 'If one system is better in 95% of the sample sets, we conclude that its higher BLEU score is statistically significantly better.', 'The bootstrap method has been criticized by Riezler and Maxwell (2005) and Collins et al. 
(2005), as being too optimistic in deciding for statistically significant differences between systems.', 'We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.', 'We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block whether one system has a higher BLEU score than the other, and then use the sign test.', 'The sign test checks how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance.', 'Say we find one system doing better on 20 of the blocks and worse on 80 of the blocks; is it significantly worse?', 'We check how likely only up to k = 20 better scores out of n = 100 would have been generated by two equal systems, using the binomial distribution p(0..k; n, p) = Σi=0..k C(n, i) p^i (1−p)^(n−i) with p = 0.5: If p(0..k; n, p) < 0.05, or p(0..k; n, p) > 0.95 then we have a statistically significant difference between the systems.', 'While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only an imperfect substitute for human assessment of translation quality, or as the acronym BLEU puts it, a bilingual evaluation understudy.', 'Many human evaluation metrics have been proposed.', 'Also, the argument has been made that machine translation performance should be evaluated via task-based evaluation metrics, i.e. 
how much it assists performing a useful task, such as supporting human translators or aiding the analysis of texts.', 'The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently.', 'In this shared task, we were also confronted with this problem, and since we had no funding for paying human judgements, we asked participants in the evaluation to share the burden.', 'Participants and other volunteers contributed about 180 hours of labor in the manual evaluation.', 'We asked participants to each judge 200–300 sentences in terms of fluency and adequacy, the most commonly used manual evaluation metrics.', 'We settled on contrastive evaluations of 5 system outputs for a single test sentence.', 'See Figure 3 for a screenshot of the evaluation tool.', 'Presenting the output of several systems allows the human judge to make more informed judgements, contrasting the quality of the different systems.', 'The judgements tend to be done more in the form of a ranking of the different systems.', 'We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.', 'While we had up to 11 submissions for a translation direction, we decided against presenting all 11 system outputs to the human judge.', 'Our initial experimentation with the evaluation tool showed that this is often too overwhelming.', 'Making the ten judgements (2 types for 5 systems) takes on average 2 minutes.', 'Typically, judges initially spent about 3 minutes per sentence, but then accelerated with experience.', 'Judges were excluded from assessing the quality of MT systems that were submitted by their institution.', 'Sentences and systems were randomly selected and randomly shuffled for presentation.', 'We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair.', 'This is fewer than the 694 judgements in the 2004 DARPA/NIST evaluation, or the 
532 judgements in the 2005 DARPA/NIST evaluation.', 'This decreases the statistical significance of our results compared to those studies.', 'The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain.', 'The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:', 'Judges varied in the average score they handed out.', 'The average fluency judgement per judge ranged from 2.33 to 3.67, the average adequacy judgement ranged from 2.56 to 4.13.', 'Since different judges judged different systems (recall that judges were excluded from judging system output from their own institution), we normalized the scores.', 'The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).', 'In words, the judgements are normalized, so that the average normalized judgement per judge is 3.', 'Another way to view the judgements is that they are not so much quality judgements of machine translation systems per se as rankings of machine translation systems.', 'In fact, it is very difficult to maintain consistent standards on what (say) an adequacy judgement of 3 means, even for a specific language pair.', 'Given the way judgements are collected, human judges tend to use the scores to rank systems against each other.', 'If one system is perfect, another has slight flaws and the third more flaws, a judge is inclined to hand out judgements of 5, 4, and 3.', 'On the other hand, when all systems produce muddled output, but one is better, and one is worse, but not completely wrong, a judge is inclined to hand out judgements of 4, 3, and 2.', 'The judgement of 4 in the first case will go to a vastly better system output than in the second case.', 'We therefore also normalized judgements on a per-sentence basis.', 'The normalized judgement per sentence is the raw judgement plus (0 minus average raw judgement for this judge on this sentence).', 'Systems 
that generally do better than others will receive a positive average normalized judgement per sentence.', 'Systems that generally do worse than others will receive a negative one.', 'One may argue with these efforts on normalization, and ultimately their value should be assessed by their impact on inter-annotator agreement.', 'Given the limited number of judgements we received, we did not try to evaluate this.', 'Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing.', 'Given a set of n sentences, we can compute the sample mean x̄ and sample variance s² of the individual sentence judgements xᵢ. The extent of the confidence interval [x̄−d, x̄+d] can be computed by d = 1.96 · s/√n (6) Pairwise Comparison: As for the automatic evaluation metric, we want to be able to rank different systems against each other, for which we need assessments of statistical significance on the differences between a pair of systems.', 'Unfortunately, we have much less data to work with than with the automatic scores.', 'This limits our ability to draw distinctions between system performance.', 'Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences).', 'The way we 
collected manual judgements, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems).', 'Still, for a good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2.', 'The results of the manual and automatic evaluation of the participating system translations are detailed in the figures at the end of this paper.', 'The scores and confidence intervals are detailed first in Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16.', 'In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point.', 'In all figures, we present the per-sentence normalized judgements.', 'The normalization on a per-judge basis gave very similar ranking, only slightly less consistent with the ranking from the pairwise comparisons.', 'The confidence intervals are computed by bootstrap resampling for BLEU, and by standard significance testing for the manual scores, as described earlier in the paper.', 'Pairwise comparison is done using the sign test.', 'Often, two systems cannot be distinguished with a confidence of over 95%, so they are ranked the same.', 'This actually happens quite frequently (more below), so that the rankings are broad estimates.', 'For instance: if 10 systems participate, and one system does better than 3 others, worse than 2, and is not significantly different from the remaining 4, its rank is in the interval 3–7.', 'At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU.', 'There may occasionally be a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them.', 'In Figure 4, we displayed the number of system comparisons for which we concluded statistical significance.', 'For the automatic scoring method BLEU, we can 
distinguish three quarters of the systems.', 'While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks.', 'For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy.', 'More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is.', 'We can check what the consequences of less manual annotation would have been: With half the number of manual judgements, we can distinguish about 40% of the systems, 10% less.', 'The test set included 2000 sentences from the Europarl corpus, but also 1064 sentences of out-of-domain test data.', 'Since the inclusion of out-of-domain test data was a very late decision, the participants were not informed of this.', 'So, this was a surprise element due to practical reasons, not malice.', 'All systems (except for Systran, which was not tuned to Europarl) did considerably worse on out-of-domain test data.', 'This is demonstrated by average scores over all systems, in terms of BLEU, fluency and adequacy, as displayed in Figure 5.', 'The manual scores are averages over the raw unnormalized scores.', 'It is well known that language pairs such as English-German pose more challenges to machine translation systems than language pairs such as French-English.', 'Different sentence structure and rich target language morphology are two reasons for this.', 'Again, we can compute average scores for all systems for the different language pairs (Figure 6).', 'The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.', 'The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01).', 'This is because different judges focused on different language pairs.', 'Hence, the different averages of manual scores for the different language pairs reflect the behaviour of the 
judges, not the quality of the systems on different language pairs.', 'Given the closeness of most systems and the wide overlapping confidence intervals, it is hard to make strong statements about the correlation between human judgements and automatic scoring methods such as BLEU.', 'We confirm the finding by Callison-Burch et al. (2006) that the rule-based system of Systran is not adequately appreciated by BLEU.', 'In-domain Systran scores on this metric are lower than all statistical systems, even the ones that have much worse human scores.', 'Surprisingly, this effect is much less obvious for out-of-domain test data.', 'For instance, for out-of-domain English-French, Systran has the best BLEU and manual scores.', 'Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good.', 'This cannot be the only explanation, since the discrepancy still holds, for instance, for out-of-domain French-English, where Systran receives among the best adequacy and fluency scores, but a worse BLEU score than all but one statistical system.', 'This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods.', 'So, who won the competition?', 'The best answer to this is: many research labs have very competitive systems whose performance is hard to tell apart.', 'This is not completely surprising, since all systems use very similar technology.', 'For some language pairs (such as German-English) system performance is more divergent than for others (such as English-French), at least as measured by BLEU.', 'The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French.', 'The predominant focus on building systems that translate into English has so far ignored the difficult issues of 
generating rich morphology, which may not be determined solely by local context.', 'This is the first time that we organized a large-scale manual evaluation.', 'While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns.', 'For instance, in the recent IWSLT evaluation, first fluency annotations were solicited (while withholding the source sentence), and then adequacy annotations.', 'Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation.', 'Almost all annotators expressed their preference to move to a ranking-based evaluation in the future.', 'A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered?', '(b) does the translation have the same meaning, including connotations?', 'Annotators suggested that long sentences are almost impossible to judge.', 'Since all long sentence translations are somewhat muddled, even a contrastive evaluation between systems was difficult.', 'A few annotators suggested breaking up long sentences into clauses and evaluating these separately.', 'Not every annotator was fluent in both the source and the target language.', 'While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given.', 'However, since we extracted the test corpus automatically from web sources, the reference translation was not always accurate — due to sentence alignment errors, or because translators did not adhere to a strict sentence-by-sentence translation (say, using pronouns when referring to entities mentioned in the previous sentence).', 'The lack of correct reference translations was pointed out as a shortcoming of our evaluation.', 'One annotator suggested that this was the case for as much as 
10% of our test sentences.', 'Annotators argued for the importance of having correct and even multiple references.', 'It was also proposed to allow annotators to skip sentences that they are unable to judge.', 'We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.', 'While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems.', 'Due to many similarly performing systems, we are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.', 'The bias of automatic methods in favor of statistical systems seems to be less pronounced on out-of-domain test data.', 'The manual evaluation of scoring translations on a graded scale from 1–5 seems to be very hard to perform.', 'Replacing this with a ranked evaluation seems to be more suitable.', 'Human judges also pointed out difficulties with the evaluation of long sentences.', 'This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No.', 'HR0011-06-C-0022.']",extractive -W99-0623_vardha,W99-0623,4,51,"One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.","One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.","['Exploiting Diversity in Natural Language Processing: Combining Parsers', 'Three state-of-the-art statistical parsers are combined to produce more accurate parses, as well as new bounds on achievable Treebank parsing accuracy.', 'Two general approaches are presented and two combination techniques are described for each approach.', 'Both parametric and non-parametric models are 
explored.', 'The resulting parsers surpass the best previously published performance results for the Penn Treebank.', 'The natural language processing community is in the strong position of having many available approaches to solving some of its most fundamental problems.', 'The machine learning community has been in a similar situation and has studied the combination of multiple classifiers (Wolpert, 1992; Heath et al., 1996).', 'Their theoretical finding is simply stated: classification error rate decreases toward the noise rate exponentially in the number of independent, accurate classifiers.', 'The theory has also been validated empirically.', 'Recently, combination techniques have been investigated for part of speech tagging with positive results (van Halteren et al., 1998; Brill and Wu, 1998).', 'In both cases the investigators were able to achieve significant improvements over the previous best tagging results.', 'Similar advances have been made in machine translation (Frederking and Nirenburg, 1994), speech recognition (Fiscus, 1997) and named entity recognition (Borthwick et al., 1998).', 'The corpus-based statistical parsing community has many fast and accurate automated parsing systems, including systems produced by Collins (1997), Charniak (1997) and Ratnaparkhi (1997).', 'These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al., 1993).', 'We used these three parsers to explore parser combination techniques.', 'We are interested in combining the substructures of the input parses to produce a better parse.', 'We call this approach parse hybridization.', 'The substructures that are unanimously hypothesized by the parsers should be preserved after combination, and the combination technique should not foolishly create substructures for which there is no supporting evidence.', 'These two principles guide experimentation in this framework, and together with the evaluation measures help us 
decide which specific type of substructure to combine.', 'The precision and recall measures (described in more detail in Section 3) used in evaluating Treebank parsing treat each constituent as a separate entity, a minimal unit of correctness.', 'Since our goal is to perform well under these measures we will similarly treat constituents as the minimal substructures for combination.', ""One hybridization strategy is to let the parsers vote on constituents' membership in the hypothesized set."", 'If enough parsers suggest that a particular constituent belongs in the parse, we include it.', 'We call this technique constituent voting.', 'We include a constituent in our hypothesized parse if it appears in the output of a majority of the parsers.', 'In our particular case the majority requires the agreement of only two parsers because we have only three.', 'This technique has the advantage of requiring no training, but it has the disadvantage of treating all parsers equally even though they may have differing accuracies or may specialize in modeling different phenomena.', 'Another technique for parse hybridization is to use a naïve Bayes classifier to determine which constituents to include in the parse.', 'The development of a naïve Bayes classifier involves learning how much each parser should be trusted for the decisions it makes.', 'Our original hope in combining these parsers is that their errors are independently distributed.', 'This is equivalent to the assumption used in probability estimation for naïve Bayes classifiers, namely that the attribute values are conditionally independent when the target value is given.', 'For this reason, naïve Bayes classifiers are well-matched to this problem.', 'In Equations 1 through 3 we develop the model for constructing our parse using naïve Bayes classification.', 'C is the union of the sets of constituents suggested by the parsers. 
r(c) is a binary function returning t (for true) precisely when the constituent c ∈ C should be included in the hypothesis.', 'Mi(c) is a binary function returning t when parser i (from among the k parsers) suggests constituent c should be in the parse.', 'The hypothesized parse is then the set of constituents that are likely (P > 0.5) to be in the parse according to this model.', 'The estimation of the probabilities in the model is carried out as shown in Equation 4.', 'Here N(·) counts the number of hypothesized constituents in the development set that match the binary predicate specified as an argument.', 'Under certain conditions the constituent voting and naïve Bayes constituent combination techniques are guaranteed to produce sets of constituents with no crossing brackets.', 'There are simply not enough votes remaining to allow any of the crossing structures to enter the hypothesized constituent set.', 'Lemma: If the number of votes required by constituent voting is greater than half of the parsers under consideration the resulting structure has no crossing constituents.', 'Proof: Assume a pair of crossing constituents appears in the output of the constituent voting technique using k parsers.', 'Call the crossing constituents A and B.', 'A receives a votes, and B receives b votes.', 'Each of the constituents must have received at least ⌈(k+1)/2⌉ votes from the k parsers, so a ≥ ⌈(k+1)/2⌉ and b ≥ ⌈(k+1)/2⌉.', 'Let s = a + b.', 'None of the parsers produce parses with crossing brackets, so none of them votes for both of the assumed constituents.', 'Hence, s ≤ k. 
But by addition of the votes on the two parses, s ≥ 2⌈(k+1)/2⌉ > k, a contradiction.', 'Similarly, when the naïve Bayes classifier is configured such that the constituents require estimated probabilities strictly larger than 0.5 to be accepted, there is not enough probability mass remaining on crossing brackets for them to be included in the hypothesis.', 'In general, the lemma of the previous section does not ensure that all the productions in the combined parse are found in the grammars of the member parsers.', 'There is a guarantee of no crossing brackets but there is no guarantee that a constituent in the tree has the same children as it had in any of the three original parses.', 'One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.', 'This drastic tree manipulation is not appropriate for situations in which we want to assign particular structures to sentences.', 'For example, we may have semantic information (e.g. database query operations) associated with the productions in a grammar.', 'If the parse contains productions from outside our grammar the machine has no direct method for handling them (e.g. 
the resulting database query may be syntactically malformed).', 'We have developed a general approach for combining parsers when preserving the entire structure of a parse tree is important.', 'The combining algorithm is presented with the candidate parses and asked to choose which one is best.', 'The combining technique must act as a multi-position switch indicating which parser should be trusted for the particular sentence.', 'We call this approach parser switching.', 'Once again we present both a non-parametric and a parametric technique for this task.', 'First we present the non-parametric version of parser switching, similarity switching: The intuition for this technique is that we can measure a similarity between parses by counting the constituents they have in common.', 'We pick the parse that is most similar to the other parses by choosing the one with the highest sum of pairwise similarities.', 'This is the parse that is closest to the centroid of the observed parses under the similarity metric.', 'The probabilistic version of this procedure is straightforward: We once again assume independence among our various member parsers.', 'Furthermore, we know one of the original parses will be the hypothesized parse, so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in Section 2.1.', 'We model each parse as the decisions made to create it, and model those decisions as independent events.', 'Each decision determines the inclusion or exclusion of a candidate constituent.', 'The set of candidate constituents comes from the union of all the constituents suggested by the member parsers.', 'This is summarized in Equation 5.', 'The computation of P(π(c)|M1(c), ..., Mk(c)) has been sketched before in Equations 1 through 4.', ""In this case we are interested in finding the maximum probability parse, πi, and Mi is the set of relevant (binary) parsing decisions made by 
parser i. πi is a parse selected from among the outputs of the individual parsers."", 'It is chosen such that the decisions it made in including or excluding constituents are most probable under the models for all of the parsers.', 'The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank, leaving only sections 22 and 23 completely untouched during the development of any of the parsers.', 'We used section 23 as the development set for our combining techniques, and section 22 only for final testing.', 'The development set contained 44088 constituents in 2416 sentences and the test set contained 30691 constituents in 1699 sentences.', ""A sentence was withheld from section 22 because its extreme length was troublesome for a couple of the parsers."", 'The standard measures for evaluating Penn Treebank parsing performance are precision and recall of the predicted constituents.', 'Each parse is converted into a set of constituents represented as tuples: (label, start, end).', 'The set is then compared with the set generated from the Penn Treebank parse to determine the precision and recall.', 'Precision is the portion of hypothesized constituents that are correct and recall is the portion of the Treebank constituents that are hypothesized.', 'For our experiments we also report the mean of precision and recall, which we denote by (P + R)/2, and F-measure.', 'F-measure is the harmonic mean of precision and recall, 2PR/(P + R).', 'It is closer to the smaller value of precision and recall when there is a large skew in their values.', 'We performed three experiments to evaluate our techniques.', 'The first shows how constituent features and context do not help in deciding which parser to trust.', 'We then show that the combining techniques presented above give better parsing accuracy than any of the individual parsers.', 'Finally we show the combining techniques degrade very little when a poor parser is added to 
the set.', 'It is possible one could produce better models by introducing features describing constituents and their contexts because one parser could be much better than the majority of the others in particular situations.', 'For example, one parser could be more accurate at predicting noun phrases than the other parsers.', 'None of the models we have presented utilize features associated with a particular constituent (i.e. the label, span, parent label, etc.) to influence parser preference.', 'This is not an oversight.', 'Features and context were initially introduced into the models, but they refused to offer any gains in performance.', 'While we cannot prove there are no such useful features on which one should condition trust, we can give some insight into why the features we explored offered no gain.', 'Because we are working with only three parsers, the only situation in which context will help us is when it can indicate we should choose to believe a single parser that disagrees with the majority hypothesis instead of the majority hypothesis itself.', 'This is the only important case, because otherwise the simple majority combining technique would pick the correct constituent.', 'One side of the decision making process is when we choose to believe a constituent should be in the parse, even though only one parser suggests it.', 'We call such a constituent an isolated constituent.', 'If we were working with more than three parsers we could investigate minority constituents, those constituents that are suggested by at least one parser, but which the majority of the parsers do not suggest.', 'Adding the isolated constituents to our hypothesis parse could increase our expected recall, but in the cases we investigated it would invariably hurt our precision more than we would gain on recall.', 'Consider for a set of constituents the isolated constituent precision parser metric, the portion of isolated constituents that are correctly hypothesized.', ""When this 
metric is less than 0.5, we expect to incur more errors than we will remove by adding those constituents to the parse."", 'We show the results of three of the experiments we conducted to measure isolated constituent precision under various partitioning schemes.', 'In Table 1 we see with very few exceptions that the isolated constituent precision is less than 0.5 when we use the constituent label as a feature.', 'The counts represent portions of the approximately 44000 constituents hypothesized by the parsers in the development set.', 'In the cases where isolated constituent precision is larger than 0.5 the affected portion of the hypotheses is negligible.', 'Similarly Figures 1 and 2 show how the isolated constituent precision varies by sentence length and the size of the span of the hypothesized constituent.', 'In each figure the upper graph shows the isolated constituent precision and the bottom graph shows the corresponding number of hypothesized constituents.', 'Again we notice that the isolated constituent precision is larger than 0.5 only in those partitions that contain very few samples.', 'From this we see that a finer-grained model for parser combination, at least for the features we have examined, will not give us any additional power.', 'The results in Table 2 were achieved on the development set.', 'The first two rows of the table are baselines.', 'The first row represents the average accuracy of the three parsers we combine.', ""The second row is the accuracy of the best of the three parsers."", 'The next two rows are results of oracle experiments.', 'The parser switching oracle is the upper bound on the accuracy that can be achieved on this set in the parser switching framework.', 'It is the performance we could achieve if an omniscient observer told us which parser to pick for each of the sentences.', 'The maximum precision row is the upper bound on accuracy if we could pick exactly the correct constituents from among the constituents suggested by 
the three parsers.', 'Another way to interpret this is that less than 5% of the correct constituents are missing from the hypotheses generated by the union of the three parsers.', 'The maximum precision oracle is an upper bound on the possible gain we can achieve by parse hybridization.', 'We do not show the numbers for the Bayes models in Table 2 because the parameters involved were established using this set.', 'The precision and recall of similarity switching and constituent voting are both significantly better than the best individual parser, and constituent voting is significantly better than parser switching in precision. Constituent voting gives the highest accuracy for parsing the Penn Treebank reported to date.', 'Table 3 contains the results for evaluating our systems on the test set (section 22).', 'All of these systems were run on data that was not seen during their development.', 'The difference in precision between similarity and Bayes switching techniques is significant, but the difference in recall is not.', 'This is the first set that gives us a fair evaluation of the Bayes models, and the Bayes switching model performs significantly better than its non-parametric counterpart.', 'The constituent voting and naïve Bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parsers.', 'Table 4 shows how much the Bayes switching technique uses each of the parsers on the test set.', 'Parser 3, the most accurate parser, was chosen 71% of the time, and Parser 1, the least accurate parser, was chosen 16% of the time.', 'Ties are rare in Bayes switching because the models are fine-grained — many estimated probabilities are involved in each decision.', 'In the interest of testing the robustness of these combining techniques, we added a fourth, simple nonlexicalized PCFG parser.', 'The PCFG was trained from the same sections of the Penn Treebank as the other three parsers.', 'It was 
then tested on section 22 of the Treebank in conjunction with the other parsers.', 'The results of this experiment can be seen in Table 5.', 'The entries in this table can be compared with those of Table 3 to see how the performance of the combining techniques degrades in the presence of an inferior parser.', 'As seen by the drop in the average individual parser performance baseline, the introduced parser does not perform very well.', 'The average individual parser accuracy was reduced by more than 5% when we added this new parser, but the precision of the constituent voting technique was the only result that decreased significantly.', 'The Bayes models were able to achieve significantly higher precision than their non-parametric counterparts.', 'We see from these results that the behavior of the parametric techniques is robust in the presence of a poor parser.', 'Surprisingly, the non-parametric switching technique also exhibited robust behaviour in this situation.', 'We have presented two general approaches to studying parser combination: parser switching and parse hybridization.', 'For each experiment we gave a non-parametric and a parametric technique for combining parsers.', 'All four of the techniques studied result in parsing systems that perform better than any previously reported.', 'Both of the switching techniques, as well as the parametric hybridization technique, were also shown to be robust when a poor parser was introduced into the experiments.', 'Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.', 'Combining multiple highly-accurate independent parsers yields promising results.', 'We plan to explore more powerful techniques for exploiting the diversity of parsing methods.', 'We would like to thank Eugene Charniak, Michael Collins, and Adwait Ratnaparkhi for enabling all of this research by providing us with their parsers and helpful comments.', 
'This work was funded by NSF grant IRI-9502312.', 'Both authors are members of the Center for Language and Speech Processing at Johns Hopkins University.']",extractive -W99-0613_vardha,W99-0613,3,4,Here we present two algorithms.,We present two algorithms.,"['Unsupervised Models for Named Entity Classification', 'This paper discusses the use of unlabeled examples for the problem of named entity classification.', 'A large number of rules is needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. However, we show that the use of unlabeled data can reduce the requirements for supervision to just 7 simple "seed" rules.', 'The approach gains leverage from natural redundancy in the data: for many named-entity instances both the spelling of the name and the context in which it appears are sufficient to determine its type.', 'We present two algorithms.', 'The first method uses a similar algorithm to that of (Yarowsky 95), with modifications motivated by (Blum and Mitchell 98).', 'The second algorithm extends ideas from boosting algorithms, designed for supervised learning tasks, to the framework suggested by (Blum and Mitchell 98).', 'Many statistical or machine-learning approaches for natural language problems require a relatively large amount of supervision, in the form of labeled training examples.', 'Recent results (e.g., (Yarowsky 95; Brill 95; Blum and Mitchell 98)) have suggested that unlabeled data can be used quite profitably in reducing the need for supervision.', 'This paper discusses the use of unlabeled examples for the problem of named entity classification.', 'The task is to learn a function from an input string (proper name) to its type, which we will assume to be one of the categories Person, Organization, or Location.', 'For example, a good classifier would identify Mrs. 
Frank as a person, Steptoe & Johnson as a company, and Honduras as a location.', 'The approach uses both spelling and contextual rules.', 'A spelling rule might be a simple look-up for the string (e.g., a rule that Honduras is a location) or a rule that looks at words within a string (e.g., a rule that any string containing Mr. is a person).', 'A contextual rule considers words surrounding the string in the sentence in which it appears (e.g., a rule that any proper name modified by an appositive whose head is president is a person).', 'The task can be considered to be one component of the MUC (MUC-6, 1995) named entity task (the other task is that of segmentation, i.e., pulling possible people, places and locations from text before sending them to the classifier).', 'Supervised methods have been applied quite successfully to the full MUC named-entity task (Bikel et al. 97).', 'At first glance, the problem seems quite complex: a large number of rules is needed to cover the domain, suggesting that a large number of labeled examples is required to train an accurate classifier.', 'But we will show that the use of unlabeled data can drastically reduce the need for supervision.', 'Given around 90,000 unlabeled examples, the methods described in this paper classify names with over 91% accuracy.', 'The only supervision is in the form of 7 seed rules (namely, that New York, California and U.S. are locations; that any name containing Mr is a person; that any name containing Incorporated is an organization; and that I.B.M. and Microsoft are organizations).', 'The key to the methods we describe is redundancy in the unlabeled data.', 'In many cases, inspection of either the spelling or context alone is sufficient to classify an example.', 'For example, in .., says Mr. Cooper, a vice president of.. both a spelling feature (that the string contains Mr.) and a contextual feature (that president modifies the string) are strong indications that Mr. 
Cooper is of type Person.', 'Even if an example like this is not labeled, it can be interpreted as a "hint" that Mr and president imply the same category.', 'The unlabeled data gives many such "hints" that two features should predict the same label, and these hints turn out to be surprisingly useful when building a classifier.', 'We present two algorithms.', 'The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).', '(Yarowsky 95) describes an algorithm for word-sense disambiguation that exploits redundancy in contextual features, and gives impressive performance.', ""Unfortunately, Yarowsky's method is not well understood from a theoretical viewpoint: we would like to formalize the notion of redundancy in unlabeled data, and set up the learning task as optimization of some appropriate objective function."", '(Blum and Mitchell 98) offer a promising formulation of redundancy, also prove some results about how the use of unlabeled examples can help classification, and suggest an objective function when training with unlabeled examples.', ""Our first algorithm is similar to Yarowsky's, but with some important modifications motivated by (Blum and Mitchell 98)."", 'The algorithm can be viewed as heuristically optimizing an objective function suggested by (Blum and Mitchell 98); empirically it is shown to be quite successful in optimizing this criterion.', 'The second algorithm builds on a boosting algorithm called AdaBoost (Freund and Schapire 97; Schapire and Singer 98).', 'The AdaBoost algorithm was developed for supervised learning.', 'AdaBoost finds a weighted combination of simple (weak) classifiers, where the weights are chosen to minimize a function that bounds the classification error on a set of training examples.', 'Roughly speaking, the new algorithm presented in this paper performs a similar search, but instead minimizes a bound on the number of (unlabeled) examples on which two classifiers disagree.', 'The algorithm builds two 
classifiers iteratively: each iteration involves minimization of a continuously differentiable function which bounds the number of examples on which the two classifiers disagree.', 'There has been additional recent work on inducing lexicons or other knowledge sources from large corpora.', '(Brin 98) describes a system for extracting (author, book-title) pairs from the World Wide Web using an approach that bootstraps from an initial seed set of examples.', '(Berland and Charniak 99) describe a method for extracting parts of objects from wholes (e.g., "speedometer" from "car") from a large corpus using hand-crafted patterns.', '(Hearst 92) describes a method for extracting hyponyms from a corpus (pairs of words in "isa" relations).', '(Riloff and Shepherd 97) describe a bootstrapping approach for acquiring nouns in particular categories (such as "vehicle" or "weapon" categories).', 'The approach builds from an initial seed set for a category, and is quite similar to the decision list approach described in (Yarowsky 95).', 'More recently, (Riloff and Jones 99) describe a method they term "mutual bootstrapping" for simultaneously constructing a lexicon and contextual extraction patterns.', 'The method shares some characteristics of the decision list algorithm presented in this paper.', '(Riloff and Jones 99) was brought to our attention as we were preparing the final version of this paper.', '971,746 sentences of New York Times text were parsed using the parser of (Collins 96). Word sequences that met the following criteria were then extracted as named entity examples: 1. The word sequence is a sequence of proper nouns within an NP; its last word is the head of the NP; and the NP has an appositive modifier whose head is a singular noun (tagged NN).', 'For example, take ..., says Maury Cooper, a vice president at S.&P.', 'In this case, Maury Cooper is extracted.', 'It is a sequence of proper nouns within an NP; its last word Cooper is the head of the NP; and the NP has an appositive modifier (a vice president at S.&P.) 
whose head is a singular noun (president).', '2. The NP is a complement to a preposition, which is the head of a PP.', 'This PP modifies another NP, whose head is a singular noun.', 'For example, ... fraud related to work on a federally funded sewage plant in Georgia In this case, Georgia is extracted: the NP containing it is a complement to the preposition in; the PP headed by in modifies the NP a federally funded sewage plant, whose head is the singular noun plant.', 'In addition to the named-entity string (Maury Cooper or Georgia), a contextual predictor was also extracted.', 'In the appositive case, the contextual predictor was the head of the modifying appositive (president in the Maury Cooper example); in the second case, the contextual predictor was the preposition together with the noun it modifies (plant_in in the Georgia example).', 'From here on we will refer to the named-entity string itself as the spelling of the entity, and the contextual predicate as the context.', 'Having found (spelling, context) pairs in the parsed data, a number of features are extracted.', 'The features are used to represent each example for the learning algorithm.', 'In principle a feature could be an arbitrary predicate of the (spelling, context) pair; for reasons that will become clear, features are limited to querying either the spelling or context alone.', 'The following features were used: full-string=x The full string (e.g., for Maury Cooper, full-string=Maury_Cooper). contains(x) If the spelling contains more than one word, this feature applies for any words that the string contains (e.g., Maury Cooper contributes two such features, contains(Maury) and contains(Cooper)). allcap1 This feature appears if the spelling is a single word which is all capitals (e.g., IBM would contribute this feature). allcap2 This feature appears if the spelling is a single word which is all capitals or full periods, and contains at least one period.', '(e.g., N.Y. 
would contribute this feature, IBM would not). nonalpha=x Appears if the spelling contains any characters other than upper or lower case letters.', 'In this case nonalpha is the string formed by removing all upper/lower case letters from the spelling (e.g., for Thomas E. Petry nonalpha=., for A.T.&T. nonalpha=..&.). context=x The context for the entity.', 'The first unsupervised algorithm we describe is based on the decision list method from (Yarowsky 95).', 'Before describing the unsupervised case we first describe the supervised version of the algorithm: Input to the learning algorithm: n labeled examples of the form (x_i, y_i). y_i is the label of the ith example (given that there are k possible labels, y_i is a member of Y = {1 ... k}). x_i is a set of m_i features {x_i1, x_i2, ..., x_im_i} associated with the ith example.', 'Each x_ij is a member of X, where X is a set of possible features.', 'Output of the learning algorithm: a function h : X × Y → [0, 1], where h(x, y) is an estimate of the conditional probability p(y|x) of seeing label y given that feature x is present.', 'Alternatively, h can be thought of as defining a decision list of rules x → y ranked by their "strength" h(x, y).', 'The label for a test example with features x is then given by the strongest rule that applies, i.e., the y maximizing h(x, y) over the features x present in the example. In this paper we define h(x, y) as the following function of counts seen in training data (Equation 2): h(x, y) = (Count(x, y) + α) / (Count(x) + kα), where Count(x, y) is the number of times feature x is seen with label y in training data, and Count(x) = Σ_{y∈Y} Count(x, y). 
α is a smoothing parameter, and k is the number of possible labels.', 'In this paper k = 3 (the three labels are person, organization, location), and we set α = 0.1.', 'Equation 2 is an estimate of the conditional probability of the label given the feature, P(y|x).', 'We now introduce a new algorithm for learning from unlabeled examples, which we will call DL-CoTrain (DL stands for decision list, the term CoTrain is taken from (Blum and Mitchell 98)).', 'The input to the unsupervised algorithm is an initial, "seed" set of rules.', '(Yarowsky 95) describes the use of more sophisticated smoothing methods.', ""It's not clear how to apply these methods in the unsupervised case, as they require cross-validation techniques; for this reason we use the simpler smoothing method shown here."", 'In the named entity domain these rules were the seven seed rules given in the introduction. Each of these rules was given a strength of 0.9999.', ""The following algorithm was then used to induce new rules: Let Count'(x) be the number of times feature x is seen with some known label in the training data."", ""For each label (Person, Organization and Location), take the n contextual rules with the highest value of Count'(x) whose unsmoothed strength is above some threshold pmin."", '(If fewer than n rules have precision greater than pmin, we keep only those rules which exceed the precision threshold.) Note that taking the top n most frequent rules already makes the method robust to low-count events; hence we do not use smoothing, allowing low-count high-precision features to be chosen on later iterations. pmin was fixed at 0.95 in all experiments in this paper.', 'Thus at each iteration the method induces at most n × k rules, where k is the number of possible labels (k = 3 in the experiments in this paper). 
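The smoothed rule strength of Equation 2 can be sketched as follows. This is a minimal sketch, not the paper's code: examples are hypothetical (feature set, label) pairs, and the feature strings are illustrative only.

```python
from collections import Counter

ALPHA = 0.1   # smoothing parameter (alpha = 0.1, as in the paper)
K = 3         # number of labels: person, organization, location

def train_decision_list(examples):
    """Estimate h(x, y) = (Count(x, y) + alpha) / (Count(x) + k * alpha)
    for every (feature, label) pair seen in the training data."""
    count_xy = Counter()
    count_x = Counter()
    for features, label in examples:
        for x in features:
            count_xy[(x, label)] += 1
            count_x[x] += 1
    return {
        (x, y): (count_xy[(x, y)] + ALPHA) / (count_x[x] + K * ALPHA)
        for (x, y) in count_xy
    }

# Hypothetical labeled examples: feature sets paired with a label.
data = [
    ({"contains(Mr.)", "context=president"}, "person"),
    ({"contains(Mr.)"}, "person"),
    ({"full-string=Honduras"}, "location"),
]
h = train_decision_list(data)
# h[("contains(Mr.)", "person")] = (2 + 0.1) / (2 + 0.3) ≈ 0.913
```

Ranking rules x → y by h(x, y) then yields the decision list; a test example is labeled by the strongest rule whose feature it contains.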
', 'Otherwise, label the training data with the combined spelling/contextual decision list, then induce a final decision list from the labeled examples where all rules (regardless of strength) are added to the decision list.', 'We can now compare this algorithm to that of (Yarowsky 95).', ""The core of Yarowsky's algorithm alternates between labeling the training data with the current decision list and inducing a new decision list from the labeled examples, where h is defined by the formula in equation 2, with counts restricted to training data examples that have been labeled in step 2."", 'Set the decision list to include all rules whose (smoothed) strength is above some threshold pmin.', 'There are two differences between this method and the DL-CoTrain algorithm: first, DL-CoTrain is more cautious, inducing only a limited number of rules at each iteration; second, DL-CoTrain separates the spelling and contextual features, alternating between labeling and learning with the two types of features.', 'Thus an explicit assumption about the redundancy of the features — that either the spelling or context alone should be sufficient to build a classifier — has been built into the algorithm.', 'To measure the contribution of each modification, a third, intermediate algorithm, Yarowsky-cautious was also tested.', 'Yarowsky-cautious does not separate the spelling and contextual features, but does have a limit on the number of rules added at each stage.', '(Specifically, the limit n starts at 5 and increases by 5 at each iteration.)', 'The first modification — cautiousness — is a relatively minor change.', 'It was motivated by the observation that the (Yarowsky 95) algorithm added a very large number of rules in the first few iterations.', 'Taking only the highest frequency rules is much "safer", as they tend to be very accurate.', 'This intuition is borne out by the experimental results.', 'The second modification is more important, and is discussed in the next section.', 'An important reason for separating the two types of features is that this opens up the possibility of theoretical analysis of the use of unlabeled examples.', '(Blum and Mitchell 98) describe learning in the following situation: X = X1 × X2 
where X1 and X2 correspond to two different "views" of an example.', 'In the named entity task, X1 might be the instance space for the spelling features, X2 might be the instance space for the contextual features.', 'By this assumption, each element x ∈ X can also be represented as (x1, x2) ∈ X1 × X2.', 'Thus the method makes the fairly strong assumption that the features can be partitioned into two types such that each type alone is sufficient for classification.', 'Now assume we have n pairs (x1,i, x2,i) drawn from X1 × X2, where the first m pairs have labels y_i, whereas for i = m+1...n the pairs are unlabeled.', 'In a fully supervised setting, the task is to learn a function f such that for all i = 1...m, f(x1,i, x2,i) = y_i.', 'In the cotraining case, (Blum and Mitchell 98) argue that the task should be to induce functions f1 and f2 such that f1(x1,i) = f2(x2,i) = y_i for i = 1...m, and f1(x1,i) = f2(x2,i) for i = m+1...n. So f1 and f2 must (1) correctly classify the labeled examples, and (2) must agree with each other on the unlabeled examples.', 'The key point is that the second constraint can be remarkably powerful in reducing the complexity of the learning problem.', '(Blum and Mitchell 98) give an example that illustrates just how powerful the second constraint can be.', 'Consider the case where |X1| = |X2| = N and N is a "medium" sized number so that it is feasible to collect O(N) unlabeled examples.', 'Assume that the two classifiers are "rote learners": that is, f1 and f2 are defined through look-up tables that list a label for each member of X1 or X2.', 'The problem is a binary classification problem.', 'The problem can be represented as a graph with 2N vertices corresponding to the members of X1 and X2.', 'Each unlabeled pair (x1,i, x2,i) is represented as an edge between nodes corresponding to x1,i and x2,i in the graph.', 'An edge indicates that the two features must have the same label.', 'Given a sufficient number of randomly drawn unlabeled examples (i.e., edges), we will induce two completely connected components that 
together span the entire graph.', 'Each vertex within a connected component must have the same label — in the binary classification case, we need a single labeled example to identify which component should get which label.', '(Blum and Mitchell 98) go on to give PAC results for learning in the cotraining case.', 'They also describe an application of cotraining to classifying web pages (the two feature sets are the words on the page, and other pages pointing to the page).', 'The method halves the error rate in comparison to a method using the labeled examples alone.', 'Limitations of (Blum and Mitchell 98): While the assumptions of (Blum and Mitchell 98) are useful in developing both theoretical results and an intuition for the problem, the assumptions are quite limited.', 'In particular, it may not be possible to learn functions f1, f2 such that f1(x1,i) = f2(x2,i) for i = m + 1...n: either because there is some noise in the data, or because it is just not realistic to expect to learn perfect classifiers given the features used for representation.', 'It may be more realistic to replace the second criterion with a softer one, for example (Blum and Mitchell 98) suggest maximizing the number of unlabeled examples on which f1 and f2 agree.', 'Alternatively, if f1 and f2 are probabilistic learners, it might make sense to encode the second constraint as one of minimizing some measure of the distance between the distributions given by the two learners.', 'The question of what soft function to pick, and how to design algorithms which optimize it, is an open question, but appears to be a promising way of looking at the problem.', 'The DL-CoTrain algorithm can be motivated as being a greedy method of satisfying the above two constraints.', 'At each iteration the algorithm increases the number of rules, while maintaining a high level of agreement between the spelling and contextual decision lists.', 'Inspection of the data shows that at n = 2500, the two classifiers both give labels on 44,281 (49.2%) of the unlabeled examples, and give the same label on 
99.25% of these cases.', 'So the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agree.', 'In the next section we present an alternative approach that builds two classifiers while attempting to satisfy the above constraints as much as possible.', 'The algorithm, called CoBoost, has the advantage of being more general than the decision-list learning algorithm.', '(Figure 1 pseudo-code: input (x1, y1), ..., (xm, ym) with xi ∈ 2^X and yi ∈ {−1, +1}; initialize D1(i) = 1/m; for t = 1, ..., T: ...)', 'This section describes an algorithm based on boosting algorithms, which were previously developed for supervised machine learning problems.', 'We first give a brief overview of boosting algorithms.', 'We then discuss how we adapt and generalize a boosting algorithm, AdaBoost, to the problem of named entity classification.', 'The new algorithm, which we call CoBoost, uses labeled and unlabeled data and builds two classifiers in parallel.', ""(We would like to note though that unlike previous boosting algorithms, the CoBoost algorithm presented here is not a boosting algorithm under Valiant's (Valiant 84) Probably Approximately Correct (PAC) model.)"", 'This section describes AdaBoost, which is the basis for the CoBoost algorithm.', 'AdaBoost was first introduced in (Freund and Schapire 97); (Schapire and Singer 98) gave a generalization of AdaBoost which we will use in this paper.', 'For a description of the application of AdaBoost to various NLP problems see the paper by Abney, Schapire, and Singer in this volume.', 'The input to AdaBoost is a set of training examples ((x1, y1), ..., (xm, ym)).', 'Each xi ∈ 2^X is the set of features constituting the ith example.', 'For the moment we will assume that there are only two possible labels: each yi is in {−1, +1}.', 'AdaBoost is given access to a weak learning algorithm, which accepts as input the training examples, along with a distribution over the instances.', 'The distribution specifies the 
relative weight, or importance, of each example — typically, the weak learner will attempt to minimize the weighted error on the training set, where the distribution specifies the weights.', 'The weak learner for two-class problems computes a weak hypothesis h from the input space into the reals (h : 2^X → R), where the sign of h(x) is interpreted as the predicted label and the magnitude |h(x)| is the confidence in the prediction: large numbers for |h(x)| indicate high confidence in the prediction, and numbers close to zero indicate low confidence.', 'The weak hypothesis can abstain from predicting the label of an instance x by setting h(x) = 0.', 'The final strong hypothesis, denoted f(x), is then the sign of a weighted sum of the weak hypotheses, f(x) = sign(Σ_{t=1}^T α_t h_t(x)), where the weights α_t are determined during the run of the algorithm, as we describe below.', 'Pseudo-code describing the generalized boosting algorithm of Schapire and Singer is given in Figure 1.', 'Note that Z_t is a normalization constant that ensures the distribution D_{t+1} sums to 1; it is a function of the weak hypothesis h_t and the weight for that hypothesis α_t chosen at the tth round.', 'The normalization factor plays an important role in the AdaBoost algorithm.', 'Schapire and Singer show that the training error is bounded above by Π_{t=1}^T Z_t. Thus, in order to greedily minimize an upper bound on training error, on each iteration we should search for the weak hypothesis h_t and the weight α_t that minimize Z_t.', 'In our implementation, we make perhaps the simplest choice of weak hypothesis.', 'Each h_t is a function that predicts a label (+1 or −1) on examples containing a particular feature x_t, while abstaining on other examples.', 'The prediction of the strong hypothesis can then be written as a weighted vote over the features present in an example. We now briefly describe how to choose h_t and α_t at each iteration.', 'Our derivation is slightly different from the one presented in (Schapire and Singer 98) as we restrict α_t to be positive.', 'Z_t can be written 
as follows: Z_t = W_0 + W_+ e^{−α_t} + W_− e^{α_t} (Equ. (4)), where W_0, W_+ and W_− are the total weight of examples on which the hypothesis abstains, predicts correctly, and predicts incorrectly, respectively.', 'Following the derivation of Schapire and Singer, providing that W_+ > W_−, Equ. (4) is minimized by setting α_t = (1/2) ln(W_+/W_−).', 'Since a feature may be present in only a few examples, W_− can be in practice very small or even 0, leading to extreme confidence values.', 'To prevent this we "smooth" the confidence by adding a small value, ε, to both W_+ and W_−, giving α_t = (1/2) ln((W_+ + ε)/(W_− + ε)).', 'Plugging the value of α_t from Equ. (5) and h_t into Equ. (4) gives the resulting value of Z_t (Equ. (6)).', 'In order to minimize Z_t, at each iteration the final algorithm should choose the weak hypothesis (i.e., a feature x_t) which has values for W_+ and W_− that minimize Equ. (6), with W_+ > W_−.', 'We now describe the CoBoost algorithm for the named entity problem.', 'Following the convention presented in earlier sections, we assume that each example is an instance pair of the form (x1,i, x2,i) where x_j,i ∈ 2^{X_j}, j ∈ {1, 2}.', 'In the named-entity problem each example is a (spelling, context) pair.', 'The first m pairs have labels y_i, whereas for i = m + 1, ..., n the pairs are unlabeled.', 'We make the assumption that for each example, both x1,i and x2,i alone are sufficient to determine the label y_i.', 'The learning task is to find two classifiers f1 : 2^{X1} → {−1, +1} and f2 : 2^{X2} → {−1, +1} such that f1(x1,i) = f2(x2,i) = y_i for examples i = 1, ..., m, and f1(x1,i) = f2(x2,i) as often as possible on examples i = m + 1, ..., n. 
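The smoothed confidence computation described above can be sketched as follows. This is a minimal sketch, not the paper's implementation: the weights, labels, predictions, and the value of ε are all hypothetical (the paper only says ε is "a small value").

```python
import math

EPS = 1e-4  # smoothing value epsilon (assumed; not specified numerically here)

def alpha_for_feature(weights, labels, predictions):
    """For a feature-based weak hypothesis that outputs predictions[i]
    (+1/-1, or 0 to abstain) on example i, accumulate W+ (total weight of
    examples it predicts correctly) and W- (weight it predicts incorrectly),
    and return the smoothed confidence
        alpha = 0.5 * ln((W+ + eps) / (W- + eps))."""
    w_plus = sum(w for w, y, p in zip(weights, labels, predictions) if p == y)
    w_minus = sum(w for w, y, p in zip(weights, labels, predictions) if p == -y)
    return 0.5 * math.log((w_plus + EPS) / (w_minus + EPS))

weights = [0.25, 0.25, 0.25, 0.25]   # a uniform distribution over 4 examples
labels = [+1, +1, -1, -1]
preds = [+1, +1, 0, -1]              # the hypothesis abstains on example 3
a = alpha_for_feature(weights, labels, preds)
# W+ = 0.75, W- = 0: without smoothing alpha would diverge;
# with eps it stays finite (a large positive value, roughly 4.46 here).
```

Without the ε term, a feature that is never seen with the wrong label (W− = 0) would receive infinite confidence, which is exactly the failure mode the smoothing is meant to prevent.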
To achieve this goal we extend the auxiliary function that bounds the training error (see Equ. (3)) to be defined over unlabeled as well as labeled instances.', 'Denote by g_j(x) = Σ_t α_t^j h_t^j(x), j ∈ {1, 2}, the unthresholded strong-hypothesis (i.e., f_j(x) = sign(g_j(x))).', 'We define the following function (Equ. (7)): Z_CO = Σ_{i=1}^m e^{−y_i g_1(x1,i)} + Σ_{i=1}^m e^{−y_i g_2(x2,i)} + Σ_{i=m+1}^n e^{−f_2(x2,i) g_1(x1,i)} + Σ_{i=m+1}^n e^{−f_1(x1,i) g_2(x2,i)}.', 'If Z_CO is small, then it follows that the two classifiers must have a low error rate on the labeled examples, and that they also must give the same label on a large number of unlabeled instances.', 'To see this, note that the first two terms in the above equation correspond to the function that AdaBoost attempts to minimize in the standard supervised setting (Equ. (3)), with one term for each classifier.', 'The two new terms force the two classifiers to agree, as much as possible, on the unlabeled examples.', 'Put another way, the minimum of Equ. (7) is at 0 when: 1) ∀i: sign(g_1(x_i)) = sign(g_2(x_i)); 2) |g_j(x_i)| → ∞; and 3) sign(g_1(x_i)) = y_i for i = 1, ..., m. In fact, Z_CO provides a bound on the sum of the classification error of the labeled examples and the number of disagreements between the two classifiers on the unlabeled examples.', 'Formally, let ε_1 (ε_2) be the number of classification errors of the first (second) learner on the training data, and let ε_CO be the number of unlabeled examples on which the two classifiers disagree.', 'Then, it can be verified that ε_1 + ε_2 + ε_CO ≤ Z_CO. We can now derive the CoBoost algorithm as a means of minimizing Z_CO.', 'The algorithm builds two classifiers in parallel from labeled and unlabeled data.', 'As in boosting, the algorithm works in rounds.', 'Each round is composed of two stages; each stage updates one of the classifiers while keeping the other classifier fixed.', 'Denote the unthresholded classifiers after t − 1 rounds by g_j^{t−1} and assume that it is the turn for the first classifier to be updated while the second one is kept fixed.', 'We first define "pseudo-labels" ỹ_i as follows: ỹ_i = y_i for 1 ≤ i ≤ m, and ỹ_i = sign(g_2^{t−1}(x2,i)) for m < i ≤ n. Thus 
the first m labels are simply copied from the labeled examples, while the remaining (n − m) examples are taken as the current output of the second classifier.', 'We can now add a new weak hypothesis h_t^1 based on a feature in X1 with a confidence value α_t^1; h_t^1 and α_t^1 are chosen to minimize the resulting bound. We now define, for 1